Science.gov

Sample records for anatomy segmentation algorithm

  1. Anatomy packing with hierarchical segments: an algorithm for segmentation of pulmonary nodules in CT images.

    PubMed

    Tsou, Chi-Hsuan; Lor, Kuo-Lung; Chang, Yeun-Chung; Chen, Chung-Ming

    2015-05-14

    This paper proposes a semantic segmentation algorithm that provides the spatial distribution patterns of pulmonary ground-glass nodules with solid portions in computed tomography (CT) images. The proposed segmentation algorithm, anatomy packing with hierarchical segments (APHS), performs pulmonary nodule segmentation and quantification in CT images. In particular, the APHS algorithm consists of two essential processes: hierarchical segmentation tree construction and anatomy packing. It constructs the hierarchical segmentation tree based on region attributes and local contour cues along the region boundaries. Each node of the tree corresponds to the soft boundary associated with a family of nested segmentations through different scales applied by a hierarchical segmentation operator that is used to decompose the image in a structurally coherent manner. The anatomy packing process detects and localizes individual object instances by optimizing a hierarchical conditional random field model. Ninety-two histopathologically confirmed pulmonary nodules were used to evaluate the performance of the proposed APHS algorithm. Further, a comparative study was conducted with two conventional multi-label image segmentation algorithms based on four assessment metrics: the modified Williams index, percentage statistic, overlapping ratio, and difference ratio. Under the same framework, the proposed APHS algorithm was applied to two clinical applications: multi-label segmentation of nodules with a solid portion and surrounding tissues and pulmonary nodule segmentation. The results obtained indicate that the APHS-generated boundaries are comparable to manual delineations with a modified Williams index of 1.013. Further, the resulting segmentation of the APHS algorithm is also better than that achieved by two conventional multi-label image segmentation algorithms. The proposed two-level hierarchical segmentation algorithm effectively labelled the pulmonary nodule and its surrounding tissues.
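The Williams index cited above compares an algorithm's agreement with human observers against the observers' agreement with each other. As a rough sketch (the paper uses a *modified* variant whose exact formula is not given in this abstract, so the classical form below with a Dice agreement function is only illustrative):

```python
def dice(a, b):
    """Dice agreement between two segmentations given as pixel sets."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

def williams_index(computer, observers, agreement=dice):
    """Classical Williams index: mean computer-to-observer agreement divided
    by mean observer-to-observer agreement. A value near (or above) 1 means
    the algorithm agrees with observers about as well as they agree with
    each other -- consistent with the reported value of 1.013."""
    n = len(observers)
    comp = sum(agreement(computer, o) for o in observers) / n
    pairs = [(observers[i], observers[j])
             for i in range(n) for j in range(i + 1, n)]
    inter = sum(agreement(a, b) for a, b in pairs) / len(pairs)
    return comp / inter
```

For example, with two observers outlining pixel sets {1,2,3} and {1,2,4} and an algorithm producing {1,2,3}, the index evaluates to 1.25.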

  2. Cloud-Based Evaluation of Anatomical Structure Segmentation and Landmark Detection Algorithms: VISCERAL Anatomy Benchmarks.

    PubMed

    Jimenez-Del-Toro, Oscar; Muller, Henning; Krenn, Markus; Gruenberg, Katharina; Taha, Abdel Aziz; Winterstein, Marianne; Eggel, Ivan; Foncubierta-Rodriguez, Antonio; Goksel, Orcun; Jakab, Andras; Kontokotsios, Georgios; Langs, Georg; Menze, Bjoern H; Salas Fernandez, Tomas; Schaer, Roger; Walleyo, Anna; Weber, Marc-Andre; Dicente Cid, Yashin; Gass, Tobias; Heinrich, Mattias; Jia, Fucang; Kahl, Fredrik; Kechichian, Razmig; Mai, Dominic; Spanier, Assaf B; Vincent, Graham; Wang, Chunliang; Wyeth, Daniel; Hanbury, Allan

    2016-11-01

    Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease, and automatic tools can help automate parts of the otherwise manual assessment. A cloud-based evaluation framework is presented in this paper, including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. Participants implement their algorithms in virtual machines in the cloud, where they can access only the training data; the benchmark administrators then run the algorithms privately on an unseen common test set to compare their performance objectively. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results, and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated by fusing the participant algorithms' outputs on a larger set of non-manually-annotated medical images, are available to the research community.

  3. An artifact-robust, shape library-based algorithm for automatic segmentation of inner ear anatomy in post-cochlear-implantation CT.

    PubMed

    Reda, Fitsum A; Noble, Jack H; Labadie, Robert F; Dawant, Benoit M

    2014-03-21

    A cochlear implant (CI) is a device that restores hearing using an electrode array that is surgically placed in the cochlea. After implantation, the CI is programmed to attempt to optimize hearing outcome. Currently, we are testing an image-guided CI programming (IGCIP) technique we recently developed that relies on knowledge of the position of intra-cochlear anatomy relative to the implanted electrodes. IGCIP is enabled by a number of algorithms we developed that permit determining the positions of electrodes relative to intra-cochlear anatomy using a pre- and a post-implantation CT. One issue with this technique is that it cannot be used for many subjects for whom a pre-implantation CT was not acquired. Pre-implantation CT has been necessary because it is difficult to localize the intra-cochlear structures in post-implantation CTs alone due to the image artifacts that obscure the cochlea. In this work, we present an algorithm for automatically segmenting intra-cochlear anatomy in post-implantation CTs. Our approach is to first identify the labyrinth and then use its position as a landmark to localize the intra-cochlear anatomy. Specifically, we identify the labyrinth by first approximately estimating its position by mapping a labyrinth surface of another subject that is selected from a library of such surfaces and then refining this estimate by a standard shape model-based segmentation method. We tested our approach on 10 ears and achieved overall mean and maximum errors of 0.209 and 0.98 mm, respectively. This result suggests that our approach is accurate enough for developing IGCIP strategies based solely on post-implantation CTs.

  4. An artifact-robust, shape library-based algorithm for automatic segmentation of inner ear anatomy in post-cochlear-implantation CT

    NASA Astrophysics Data System (ADS)

    Reda, Fitsum A.; Noble, Jack H.; Labadie, Robert F.; Dawant, Benoit M.

    2014-03-01

    A cochlear implant (CI) is a device that restores hearing using an electrode array that is surgically placed in the cochlea. After implantation, the CI is programmed to attempt to optimize hearing outcome. Currently, we are testing an image-guided CI programming (IGCIP) technique we recently developed that relies on knowledge of the position of intra-cochlear anatomy relative to the implanted electrodes. IGCIP is enabled by a number of algorithms we developed that permit determining the positions of electrodes relative to intra-cochlear anatomy using a pre- and a post-implantation CT. One issue with this technique is that it cannot be used for many subjects for whom a pre-implantation CT was not acquired. Pre-implantation CT has been necessary because it is difficult to localize the intra-cochlear structures in post-implantation CTs alone due to the image artifacts that obscure the cochlea. In this work, we present an algorithm for automatically segmenting intra-cochlear anatomy in post-implantation CTs. Our approach is to first identify the labyrinth and then use its position as a landmark to localize the intra-cochlear anatomy. Specifically, we identify the labyrinth by first approximately estimating its position by mapping a labyrinth surface of another subject that is selected from a library of such surfaces and then refining this estimate by a standard shape model-based segmentation method. We tested our approach on 10 ears and achieved overall mean and maximum errors of 0.209 and 0.98 mm, respectively. This result suggests that our approach is accurate enough for developing IGCIP strategies based solely on post-implantation CTs.

  5. Anatomy-aware measurement of segmentation accuracy

    NASA Astrophysics Data System (ADS)

    Tizhoosh, H. R.; Othman, A. A.

    2016-03-01

    Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of the prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones where they exist or are relevant. To apply this new approach to accuracy measurement, we introduce the anatomy-aware extensions of both the Dice coefficient and the Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only can the measurement of individual users change but the ranking of users' segmentation skills may also require reordering.
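The core idea lends itself to a short sketch: weight each anatomical zone's Dice score by its clinical relevance. The zone names, weights, and empty-zone convention below are hypothetical, not taken from the paper:

```python
def dice(a, b):
    """Classical Dice coefficient between two segmentations as pixel sets."""
    if not a and not b:
        return 1.0  # convention: perfect agreement on an empty zone
    return 2.0 * len(a & b) / (len(a) + len(b))

def anatomy_aware_dice(seg, gold, zones, weights):
    """Zone-weighted Dice: restrict both masks to each anatomical zone of
    the 'master gold', score each restriction, and average with clinical
    weights (e.g. a sphincter zone may weigh more than the bulk)."""
    total_w = sum(weights[z] for z in zones)
    score = 0.0
    for name, zone_pixels in zones.items():
        score += weights[name] * dice(seg & zone_pixels, gold & zone_pixels)
    return score / total_w
```

With a hypothetical "core" zone (weight 2) segmented perfectly and a "wall" zone (weight 1) half missed, the weighted score sits between the two per-zone values, pulled toward the more important zone.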

  6. Performance Evaluation of Automatic Anatomy Segmentation Algorithm on Repeat or Four-Dimensional Computed Tomography Images Using Deformable Image Registration Method

    SciTech Connect

    Wang He; Garden, Adam S.; Zhang Lifei; Wei Xiong; Ahamad, Anesa; Kuban, Deborah A.; Komaki, Ritsuko; O'Daniel, Jennifer; Zhang Yongbin; Mohan, Radhe; Dong Lei

    2008-09-01

    Purpose: Auto-propagation of anatomic regions of interest from the planning computed tomography (CT) scan to the daily CT is an essential step in image-guided adaptive radiotherapy. The goal of this study was to quantitatively evaluate the performance of the algorithm in typical clinical applications. Methods and Materials: We had previously adopted an image intensity-based deformable registration algorithm to find the correspondence between two images. In the present study, the regions of interest delineated on the planning CT image were mapped onto daily CT or four-dimensional CT images using the same transformation. Postprocessing methods, such as boundary smoothing and modification, were used to enhance the robustness of the algorithm. Auto-propagated contours for 8 head-and-neck cancer patients with a total of 100 repeat CT scans, 1 prostate patient with 24 repeat CT scans, and 9 lung cancer patients with a total of 90 four-dimensional CT images were evaluated against physician-drawn contours and physician-modified deformed contours using the volume overlap index and mean absolute surface-to-surface distance. Results: The deformed contours were reasonably well matched with the daily anatomy on the repeat CT images. The volume overlap index and mean absolute surface-to-surface distance were 83% and 1.3 mm, respectively, compared with the independently drawn contours. Better agreement (>97% and <0.4 mm) was achieved if the physician was only asked to correct the deformed contours. The algorithm was also robust in the presence of random noise in the image. Conclusion: The deformable algorithm might be an effective method to propagate the planning regions of interest to subsequent CT images of changed anatomy, although a final review by physicians is highly recommended.

  7. Performance evaluation of an automatic anatomy segmentation algorithm on repeat or four-dimensional CT images using a deformable image registration method

    PubMed Central

    Wang, He; Garden, Adam S.; Zhang, Lifei; Wei, Xiong; Ahamad, Anesa; Kuban, Deborah A.; Komaki, Ritsuko; O’Daniel, Jennifer; Zhang, Yongbin; Mohan, Radhe; Dong, Lei

    2008-01-01

    Purpose Auto-propagation of anatomical regions of interest (ROIs) from the planning CT to daily CT is an essential step in image-guided adaptive radiotherapy. The goal of this study was to quantitatively evaluate the performance of the algorithm in typical clinical applications. Method and Materials We previously adopted an image intensity-based deformable registration algorithm to find the correspondence between two images. In this study, the ROIs delineated on the planning CT image were mapped onto daily CT or four-dimensional (4D) CT images using the same transformation. Post-processing methods, such as boundary smoothing and modification, were used to enhance the robustness of the algorithm. Auto-propagated contours for eight head-and-neck patients with a total of 100 repeat CTs, one prostate patient with 24 repeat CTs, and nine lung cancer patients with a total of 90 4D-CT images were evaluated against physician-drawn contours and physician-modified deformed contours using the volume overlap index (VOI) and mean absolute surface-to-surface distance (ASSD). Results The deformed contours were reasonably well matched with daily anatomy on repeat CT images. The VOI and mean ASSD were 83% and 1.3 mm when compared to the independently drawn contours. A better agreement (greater than 97% and less than 0.4 mm) was achieved if the physician was only asked to correct the deformed contours. The algorithm was robust in the presence of random noise in the image. Conclusion The deformable algorithm may be an effective method to propagate the planning ROIs to subsequent CT images of changed anatomy, although a final review by physicians is highly recommended. PMID:18722272
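The two evaluation metrics used in this study can be sketched directly. The abstract does not spell out the exact VOI formula, so the Jaccard-style definition below is only one plausible reading; the surface distance is a plain symmetric mean over boundary points:

```python
import math

def overlap_index(a, b):
    """Volume overlap index between two masks given as voxel sets.
    Illustrative Jaccard-style definition: |A ∩ B| / |A ∪ B|; the paper's
    exact formula may differ."""
    return len(a & b) / len(a | b)

def mean_surface_distance(surf_a, surf_b):
    """Symmetric mean absolute surface-to-surface distance between two
    boundary point sets (tuples of coordinates). Brute force; real
    implementations use distance transforms."""
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (one_way(surf_a, surf_b) + one_way(surf_b, surf_a))
```

Identical contours score an overlap of 1.0 and a distance of 0 mm; the study's 83% / 1.3 mm figures would come from applying such metrics to deformed versus physician-drawn contours.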

  8. Automatic and exam-type independent algorithm for the segmentation and extraction of foreground, background, and anatomy regions in digital radiographic images

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohui; Luo, Hui

    2004-05-01

    Processing optimization of digital radiographs requires the knowledge of the location and characteristics of both diagnostically relevant and irrelevant image regions. An algorithm has been developed that can automatically detect and extract foreground, background, and anatomy regions from a digital radiograph. This algorithm is independent of exam-type information and can deal with multiple-exposed computed radiography (CR) images. First, the image is subsampled, and the processing is done on the sub-sampled image to improve subsequent processing efficiency and reduce algorithm dependency on image noise and detector characteristics. Second, an initial background is detected using adaptive thresholding on the cumulative histogram of significant transition pixels and an iterative process, based on background variance. Third, foreground detection is conducted by: (1) classifying all significant transitions using a smart-edge detection, (2) delineating all lines that are possible collimation blades using Hough transform, (3) finding candidate partition blade pairs if the image has several radiation fields, (4) partitioning the image into sub-images containing only one radiation field using a divide-and-conquer process, and (5) identifying the best collimation for each sub-image from a tree-structured hypothesis list. Fourth, the background is regenerated using a region-growing process from identified background "seeds." Fifth, the background and foreground regions are merged and removed; the rest of the image is labeled and those large, connected regions are identified as anatomy regions. The algorithm has been trained and tested separately with two image sets from a wide variety of exam types. Each set consists of more than 2700 CR images acquired with KODAK DIRECTVIEW CR 800 Systems. The overall success rate in detecting both foreground and background is 97%.
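The background-regeneration step (growing from identified background "seeds") can be sketched as a simple 4-connected flood fill with an intensity tolerance. This is a generic region-growing sketch, not the paper's actual implementation; the tolerance parameter and the neighbor-to-neighbor comparison rule are assumptions:

```python
from collections import deque

def region_grow(image, seeds, tol=0.1):
    """Grow a region from seed pixels: absorb 4-connected neighbors whose
    intensity differs from the adjoining region pixel by at most tol.
    image: 2-D list of intensities; seeds: iterable of (row, col)."""
    rows, cols = len(image), len(image[0])
    grown = set(seeds)
    queue = deque(grown)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in grown
                    and abs(image[nr][nc] - image[r][c]) <= tol):
                grown.add((nr, nc))
                queue.append((nr, nc))
    return grown
```

Note the criterion compares each candidate to its already-grown neighbor, so slow intensity drift can leak across a soft boundary; production detectors add global statistics (as the paper's variance-based iteration suggests).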

  9. Evaluation of Software Tools for Segmentation of Temporal Bone Anatomy.

    PubMed

    Hassan, Kowther; Dort, Joseph C; Sutherland, Garnette R; Chan, Sonny

    2016-01-01

    Surgeons are increasingly relying on 3D medical image data for planning interventions. Virtual 3D models of intricate anatomy, such as that found within the temporal bone, have proven useful for surgical education, planning, and rehearsal, but such applications require segmentation of surgically relevant structures in the image data. Four publicly available software packages, ITK-SNAP, MITK, 3D Slicer, and Seg3D, were evaluated for their efficacy in segmenting temporal bone anatomy from CT and MR images to support patient-specific surgery simulation. No single application provided efficient means to segment every structure, but a combination of the tools evaluated enables creation of a complete virtual temporal bone model from raw image data with reasonably minimal effort.

  10. Automatic segmentation of intra-cochlear anatomy in post-implantation CT

    NASA Astrophysics Data System (ADS)

    Reda, Fitsum A.; Dawant, Benoit M.; McRackan, Theodore R.; Labadie, Robert F.; Noble, Jack H.

    2013-03-01

    A cochlear implant (CI) is a neural prosthetic device that restores hearing by directly stimulating the auditory nerve with an electrode array. In CI surgery, the surgeon threads the electrode array into the cochlea, blind to internal structures. We have recently developed algorithms for determining the position of CI electrodes relative to intra-cochlear anatomy using pre- and post-implantation CT. We are currently using this approach to develop a CI programming assistance system that uses knowledge of electrode position to determine a patient-customized CI sound processing strategy. However, this approach cannot be used for the majority of CI users because the cochlea is obscured by image artifacts produced by CI electrodes and acquisition of pre-implantation CT is not universal. In this study we propose an approach that extends our techniques so that intra-cochlear anatomy can be segmented for CI users for whom pre-implantation CT was not acquired. The approach achieves automatic segmentation of intra-cochlear anatomy in post-implantation CT by exploiting intra-subject symmetry in cochlear anatomy across ears. We validated our approach on a dataset of 10 ears in which both pre- and post-implantation CTs were available. Our approach results in mean and maximum segmentation errors of 0.27 and 0.62 mm, respectively. This result suggests that our automatic segmentation approach is accurate enough for developing customized CI sound processing strategies for unilateral CI patients based solely on post-implantation CT scans.

  11. Multiatlas segmentation of thoracic and abdominal anatomy with level set-based local search.

    PubMed

    Schreibmann, Eduard; Marcus, David M; Fox, Tim

    2014-07-08

    Segmentation of organs at risk (OARs) remains one of the most time-consuming tasks in radiotherapy treatment planning. Atlas-based segmentation methods using single templates have emerged as a practical approach to automate the process for brain or head and neck anatomy, but pose significant challenges in regions where large interpatient variations are present. We show that significant changes to this approach are needed to autosegment thoracic and abdominal datasets, combining multi-atlas deformable registration with a level set-based local search. Segmentation is hierarchical, with a first stage detecting bulk organ location, and a second step adapting the segmentation to fine details present in the patient scan. The first stage is based on warping multiple presegmented templates to the new patient anatomy using a multimodality deformable registration algorithm able to cope with changes in scanning conditions and artifacts. These segmentations are compacted into a probabilistic map of organ shape using the STAPLE algorithm. Final segmentation is obtained by adjusting the probability map for each organ type, using customized combinations of delineation filters exploiting prior knowledge of organ characteristics. Validation is performed by comparing automated and manual segmentation using the Dice coefficient, measured at an average of 0.971 for the aorta, 0.869 for the trachea, 0.958 for the lungs, 0.788 for the heart, 0.912 for the liver, 0.884 for the kidneys, 0.888 for the vertebrae, 0.863 for the spleen, and 0.740 for the spinal cord. Accurate atlas segmentation for abdominal and thoracic regions can be achieved using a multi-atlas and per-structure refinement strategy. To improve clinical workflow and efficiency, the algorithm was embedded in a software service, applying the algorithm automatically on acquired scans without any user interaction.
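The fusion-then-refine structure can be sketched in miniature: combine the warped atlas masks into a per-pixel probability map, then threshold it per organ. The simple averaging below stands in for STAPLE (which additionally estimates per-rater performance), and the threshold stands in for the paper's customized delineation filters:

```python
def probability_map(segmentations):
    """Fuse binary segmentations of one organ (flattened pixel lists) from
    several warped atlas templates into a per-pixel probability map.
    Plain averaging -- a stand-in for the STAPLE fusion used in the paper."""
    n = len(segmentations)
    return [sum(s[i] for s in segmentations) / n
            for i in range(len(segmentations[0]))]

def refine(prob_map, threshold):
    """Produce the final mask by thresholding the probability map; the
    paper tunes this step per organ type with shape-aware filters."""
    return [1 if p >= threshold else 0 for p in prob_map]
```

Lowering the threshold for an organ with high interpatient variability (or raising it for a sharply bounded one) is the crude analogue of the per-structure adjustment the abstract describes.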

  12. An algorithm for segmenting range imagery

    SciTech Connect

    Roberts, R.S.

    1997-03-01

    This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.

  13. Measuring the success of video segmentation algorithms

    NASA Astrophysics Data System (ADS)

    Power, Gregory J.

    2001-12-01

    Appropriate segmentation of video is a key step for applications such as video surveillance, video composing, video compression, storage and retrieval, and automated target recognition. Video segmentation algorithms involve dissecting the video into scenes based on shot boundaries as well as local objects and events based on spatial shape and regional motions. Many algorithmic approaches to video segmentation have been recently reported, but many lack measures to quantify the success of the segmentation especially in comparison to other algorithms. This paper suggests multiple bench-top measures for evaluating video segmentation. The paper suggests that the measures are most useful when 'truth' data about the video is available such as precise frame-by- frame object shape. When precise 'truth' data is unavailable, this paper suggests using hand-segmented 'truth' data to measure the success of the video segmentation. Thereby, the ability of the video segmentation algorithm to achieve the same quality of segmentation as the human is obtained in the form of a variance in multiple measures. The paper introduces a suite of measures, each scaled from zero to one. A score of one on a particular measure is a perfect score for a singular segmentation measure. Measures are introduced to evaluate the ability of a segmentation algorithm to correctly detect shot boundaries, to correctly determine spatial shape and to correctly determine temporal shape. The usefulness of the measures are demonstrated on a simple segmenter designed to detect and segment a ping pong ball from a table tennis image sequence.
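A zero-to-one measure of the kind described can be sketched for the shot-boundary task: score detections against truth frames with a small tolerance, combining hit rate and false-alarm rate into one number. This is a generic F-measure-style sketch, not necessarily one of the paper's actual measures:

```python
def shot_boundary_score(detected, truth, tolerance=1):
    """[0, 1] score for shot-boundary detection: harmonic mean of the
    fraction of true boundaries found (recall) and the fraction of
    detections that are correct (precision), with a +/- tolerance in
    frames. 1.0 is a perfect score, matching the paper's scaling."""
    if not detected or not truth:
        return 1.0 if not detected and not truth else 0.0
    recall = sum(1 for t in truth
                 if any(abs(t - d) <= tolerance for d in detected)) / len(truth)
    precision = sum(1 for d in detected
                    if any(abs(d - t) <= tolerance for t in truth)) / len(detected)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Missing one of two true boundaries while raising no false alarms, for instance, drops the score from 1.0 to 2/3 rather than to 0.5, reflecting the harmonic mean's penalty structure.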

  14. Spectral clustering algorithms for ultrasound image segmentation.

    PubMed

    Archip, Neculai; Rohling, Robert; Cooperberg, Peter; Tahmasebpour, Hamid; Warfield, Simon K

    2005-01-01

    Image segmentation algorithms derived from spectral clustering analysis rely on the eigenvectors of the Laplacian of a weighted graph obtained from the image. The NCut criterion was previously used for image segmentation in a supervised manner. We derive a new strategy for unsupervised image segmentation. This article describes an initial investigation to determine the suitability of such segmentation techniques for ultrasound images. The extension of the NCut technique to unsupervised clustering is first described. The novel segmentation algorithm is then applied to simulated ultrasound images. Tests are also performed on abdominal and fetal images, with the segmentation results compared to manual segmentation. Comparisons with the classical NCut algorithm are also presented. Finally, segmentation results on other types of medical images are shown.
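The NCut criterion itself is easy to state in code. The brute-force sketch below evaluates the normalized cut of every bipartition of a tiny weighted graph; on images this search is intractable, which is exactly why spectral methods approximate the minimum with eigenvectors of the graph Laplacian, as the abstract notes. The example graph is hypothetical:

```python
from itertools import combinations

def ncut(w, a, b):
    """Normalized cut value (Shi & Malik) of partition (a, b) for a graph
    with symmetric weight matrix w:
    cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V)."""
    cut = sum(w[i][j] for i in a for j in b)
    assoc = lambda s: sum(w[i][j] for i in s for j in range(len(w)))
    return cut / assoc(a) + cut / assoc(b)

def best_ncut(w):
    """Exhaustively minimize NCut over bipartitions (tiny graphs only)."""
    nodes = range(len(w))
    best = None
    for r in range(1, len(w)):
        for a in combinations(nodes, r):
            b = tuple(n for n in nodes if n not in a)
            v = ncut(w, a, b)
            if best is None or v < best[0]:
                best = (v, set(a))
    return best
```

On a 4-node graph made of two strongly connected pairs joined by one weak edge, the minimizer is the cut along the weak edge, which is the behavior the pixel-similarity graph construction relies on.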

  15. Endoscopic ultrasound description of liver segmentation and anatomy.

    PubMed

    Bhatia, Vikram; Hijioka, Susumu; Hara, Kazuo; Mizuno, Nobumasa; Imaoka, Hiroshi; Yamao, Kenji

    2014-05-01

    Endoscopic ultrasound (EUS) can demonstrate the detailed anatomy of the liver from the transgastric and transduodenal routes. Most of the liver segments can be imaged with EUS, except the right posterior segments. The intrahepatic vascular landmarks include the major hepatic veins, portal vein radicals, hepatic arterial branches, and the inferior vena cava; the venosum and teres ligaments are other important intrahepatic landmarks. The liver hilum and gallbladder serve as useful surface landmarks. Deciphering liver segmentation and anatomy by EUS requires orienting the scan planes with these landmark structures, and differs from interpreting static cross-sectional radiological images. Orientation during EUS requires appreciation of the numerous scan planes possible in real time, and the direction of scanning from the stomach and duodenal bulb. We describe EUS imaging of the liver with a curved linear probe in a step-by-step approach, with the relevant anatomical details, potential applications, and pitfalls of this novel EUS application. © 2013 The Authors. Digestive Endoscopy © 2013 Japan Gastroenterological Endoscopy Society.

  16. Segmental anatomy of the liver: poor correlation with CT.

    PubMed

    Fasel, J H; Selle, D; Evertsz, C J; Terrier, F; Peitgen, H O; Gailloud, P

    1998-01-01

    To evaluate qualitatively and quantitatively the current procedures for radiologic delineation of the segmental and subsegmental anatomy of the liver. Vascular casts of 10 livers were examined with helical computed tomography (CT). Liver segmental and subsegmental anatomy were determined on the CT scans according to customary radiologic practice guidelines. CT anatomic findings were compared with authentic anatomic territories seen at anatomic examination. The differences were assessed quantitatively in five of the 10 livers. For the marginal (cranial and caudal) portions of the liver, an average (± 1 standard deviation) of 17.3% ± 6.5 of the hepatic area visualized on axial CT scans was attributed to an incorrect subsegment. For the central zones (those adjacent to the right and left branches of the portal vein), this error amounted to 51.6% ± 19.9. Expressed in absolute numbers, the error amounted to 40 mm on axial CT scans. The radiologic determination of portal venous territories within the liver must be revised. The indirect landmarks currently used are not reliable for proper delineation. Only procedures that account for the portal venous distribution pattern, including peripheral branches, will result in correct depiction of the complex and variable anatomic reality.

  17. Algorithmic evaluation of lower jawbone segmentations

    NASA Astrophysics Data System (ADS)

    Egger, Jan; Hochegger, Kerstin; Gall, Markus; Chen, Xiaojun; Reinbacher, Knut; Schwenzer-Zimmerer, Katja; Schmalstieg, Dieter; Wallner, Jürgen

    2017-03-01

    The lower jawbone (or mandible) is, owing to its exposure to complex biomechanical forces, the largest and strongest facial bone in humans. In this publication, an algorithmic evaluation of lower jawbone segmentation with a cellular automata algorithm called GrowCut is presented. For the evaluation, the algorithmic segmentation results were compared with slice-by-slice segmentations from two specialized physicians, which were taken as the ground truth. Pure manual slice-by-slice outlining took on average 39 minutes (minimum 35 minutes, maximum 46 minutes). This stands in strong contrast to the algorithmic segmentation, which needed only about one minute for initialization, a fraction of the manual contouring time. At the same time, the algorithmic segmentations achieved an acceptable Dice Similarity Coefficient (DSC) of nearly ninety percent when compared to the ground-truth slice-by-slice segmentations generated by the physicians, against a Dice score of somewhat above ninety percent between the two manual segmentations of the jawbones. In summary, this contribution shows that an algorithmic GrowCut segmentation can be an alternative to the very time-consuming manual slice-by-slice outlining in clinical practice.
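GrowCut itself is a small cellular automaton: each pixel carries a label and a "strength", and a neighbor conquers a pixel when its strength, attenuated by the intensity difference, exceeds the defender's. The minimal 2-D sketch below assumes intensities in [0, 1] and a linear attenuation function; the published algorithm has the same structure but more general features:

```python
def growcut(image, seeds, iterations=50):
    """Minimal GrowCut (Vezhnevets & Konouchine style) on a 2-D intensity
    grid. seeds: dict mapping (row, col) -> label; returns a label grid."""
    rows, cols = len(image), len(image[0])
    label = [[0] * cols for _ in range(rows)]
    strength = [[0.0] * cols for _ in range(rows)]
    for (r, c), l in seeds.items():
        label[r][c], strength[r][c] = l, 1.0
    g = lambda d: 1.0 - d  # attack attenuation; assumes intensities in [0, 1]
    for _ in range(iterations):
        changed = False
        new_label = [row[:] for row in label]
        new_strength = [row[:] for row in strength]
        for r in range(rows):
            for c in range(cols):
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        attack = g(abs(image[nr][nc] - image[r][c])) * strength[nr][nc]
                        if attack > new_strength[r][c]:  # strongest attacker wins
                            new_label[r][c] = label[nr][nc]
                            new_strength[r][c] = attack
                            changed = True
        label, strength = new_label, new_strength
        if not changed:
            break  # automaton converged
    return label
```

On a toy image with a dark left half and bright right half, seeds placed in each half propagate outward and stop at the intensity edge, which is the behavior that makes a one-minute initialization sufficient in the study above.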

  18. A segmentation algorithm for noisy images

    SciTech Connect

    Xu, Y.; Olman, V.; Uberbacher, E.C.

    1996-12-31

    This paper presents a 2-D image segmentation algorithm and addresses issues related to its performance on noisy images. The algorithm segments an image by first constructing a minimum spanning tree representation of the image and then partitioning the spanning tree into subtrees representing different homogeneous regions. The spanning tree is partitioned in such a way that the sum of gray-level variations over all partitioned subtrees is minimized, under the constraints that each subtree has at least a specified number of pixels and that two adjacent subtrees have significantly different "average" gray-levels. Two types of noise, transmission errors and Gaussian additive noise, are considered and their effects on the segmentation algorithm are studied. Evaluation results have shown that the segmentation algorithm is robust in the presence of these two types of noise.
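A compact way to sketch the spanning-tree idea is a Kruskal-style pass over grid edges with union-find: merge regions while the gray-level step is small, or while a region is still below the minimum size. The thresholds below are illustrative parameters, not the paper's formulation (which minimizes total variation over subtrees rather than applying a fixed edge threshold):

```python
def segment_mst(image, max_diff=0.3, min_size=2):
    """MST-style segmentation sketch: process 4-neighbor grid edges
    cheapest-first; merge two regions when the gray-level difference is
    below max_diff, or when either region is still below min_size pixels.
    Returns a flat list of region labels (union-find roots)."""
    rows, cols = len(image), len(image[0])
    parent = list(range(rows * cols))
    size = [1] * len(parent)

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    edges = []
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):  # right and down neighbors
                nr, nc = r + dr, c + dc
                if nr < rows and nc < cols:
                    w = abs(image[r][c] - image[nr][nc])
                    edges.append((w, r * cols + c, nr * cols + nc))
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv and (w <= max_diff or size[ru] < min_size or size[rv] < min_size):
            parent[ru] = rv
            size[rv] += size[ru]
    return [find(i) for i in range(rows * cols)]
```

Because the cheapest edges are processed first, each region grows inside homogeneous areas before any expensive boundary edge is considered, mirroring the minimum-variation objective in spirit.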

  19. Robust Atlas-Based Segmentation of Highly Variable Anatomy: Left Atrium Segmentation.

    PubMed

    Depa, Michal; Sabuncu, Mert R; Holmvang, Godtfred; Nezafat, Reza; Schmidt, Ehud J; Golland, Polina

    Automatic segmentation of the heart's left atrium offers great benefits for planning and outcome evaluation of atrial ablation procedures. However, the high anatomical variability of the left atrium presents significant challenges for atlas-guided segmentation. In this paper, we demonstrate an automatic method for left atrium segmentation using weighted voting label fusion and a variant of the demons registration algorithm adapted to handle images with different intensity distributions. We achieve accurate automatic segmentation that is robust to the high anatomical variations in the shape of the left atrium in a clinical dataset of MRA images.
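Weighted voting label fusion, the combination step named above, reduces to a per-pixel tally once the atlases are registered. The sketch below takes the per-atlas weights as given; in practice (and plausibly in this paper) they are computed from local intensity similarity between each registered atlas and the target:

```python
def weighted_vote_fusion(atlas_labels, weights):
    """Per-pixel weighted voting: each registered atlas casts a vote for
    its label at every pixel, scaled by its weight; the label with the
    largest total wins. atlas_labels: list of flattened label lists."""
    fused = []
    for px in range(len(atlas_labels[0])):
        tally = {}
        for labels, w in zip(atlas_labels, weights):
            tally[labels[px]] = tally.get(labels[px], 0.0) + w
        fused.append(max(tally, key=tally.get))
    return fused
```

With three atlases voting [1,1,0], [1,0,0], [0,1,0] and weights 0.5/0.3/0.2, the fused result [1, 1, 0] follows the weighted majority at each pixel rather than any single atlas.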

  20. Medical image segmentation using genetic algorithms.

    PubMed

    Maulik, Ujjwal

    2009-03-01

    Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy with a multitude of local optima. Not only does the genetic algorithmic framework prove to be effective in coming out of local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.
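The search-landscape framing can be made concrete with a toy GA that evolves a single segmentation threshold, using Otsu's between-class variance as the fitness. Everything here (tournament selection, blend crossover, Gaussian mutation, the parameters) is a generic illustration of the GA framework, not a method from the review:

```python
import random

def between_class_variance(pixels, t):
    """Fitness: Otsu's between-class variance for threshold t."""
    lo = [p for p in pixels if p < t]
    hi = [p for p in pixels if p >= t]
    if not lo or not hi:
        return 0.0
    mean = lambda xs: sum(xs) / len(xs)
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
    return w0 * w1 * (mean(lo) - mean(hi)) ** 2

def ga_threshold(pixels, pop_size=20, generations=30, seed=0):
    """Toy GA over candidate thresholds: binary tournament selection,
    midpoint (blend) crossover, Gaussian mutation, with elitism on the
    best individual seen so far."""
    rng = random.Random(seed)
    lo, hi = min(pixels), max(pixels)
    fit = lambda t: between_class_variance(pixels, t)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = max(pop, key=fit)
    for _ in range(generations):
        pop = [0.5 * (max(rng.sample(pop, 2), key=fit)
                      + max(rng.sample(pop, 2), key=fit))
               + rng.gauss(0, 0.05 * (hi - lo))
               for _ in range(pop_size)]
        best = max(pop + [best], key=fit)
    return best
```

The multimodal, noisy landscapes the review describes are where the population-based search pays off; for this bimodal toy histogram any threshold between the two modes is equally fit, and the GA settles into that plateau.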

  1. Heart region segmentation from low-dose CT scans: an anatomy based approach

    NASA Astrophysics Data System (ADS)

    Reeves, Anthony P.; Biancardi, Alberto M.; Yankelevitz, David F.; Cham, Matthew D.; Henschke, Claudia I.

    2012-02-01

    Cardiovascular disease is a leading cause of death in developed countries. The concurrent detection of heart disease during low-dose whole-lung CT scans (LDCT), typically performed as part of a screening protocol, hinges on the accurate quantification of coronary calcification. Fully automated methods are ideal because complete manual evaluation is imprecise, operator dependent, time consuming, and thus costly. The technical challenges posed by LDCT scans in this context are mainly twofold. First, there is a high level of image noise arising from the low radiation dose. Second, there is a variable amount of cardiac motion blurring due to the lack of electrocardiographic gating and to differences in heart rate between subjects. As a consequence, reliable segmentation of the heart, the first stage toward morphologic heart abnormality detection, is also quite challenging. An automated computer method based on sequential labeling of major organs and determination of anatomical landmarks has been evaluated on a public database of LDCT images. The novel algorithm builds from a robust segmentation of the bones and airways and embodies a stepwise refinement starting at the top of the lungs, where image noise is at its lowest and the carina provides a good calibration landmark. The segmentation is completed at the inferior wall of the heart, where extensive image noise is accommodated. The method is based on the geometry of human anatomy and does not involve training through manual markings. Using visual inspection by an expert reader as the gold standard, the algorithm achieved successful heart and major vessel segmentation in 42 of 45 low-dose CT images. In the 3 remaining cases, the cardiac base was over-segmented due to incorrect hemidiaphragm localization.

  2. A Survey of Digital Image Segmentation Algorithms

    DTIC Science & Technology

    1995-01-01

    features. Thresholding techniques are also useful in segmenting such binary images as printed documents, line drawings, and multispectral and x-ray...algorithms, pixel labeling and run-length connectivity analysis, are discussed in the following sections. Therefore, in examining g(x, y), pixels that are...edge linking, graph searching, curve fitting, Hough transform, and others are applicable to image segmentation. Difficulties with boundary-based methods

  3. Improving Brain Magnetic Resonance Image (MRI) Segmentation via a Novel Algorithm based on Genetic and Regional Growth

    PubMed Central

    A., Javadpour; A., Mohammadi

    2016-01-01

    Background Given the importance of correct diagnosis in medical applications, various methods have been exploited for processing medical images so far. Segmentation is used to analyze anatomical structures in medical imaging. Objective This study describes a new method for brain Magnetic Resonance Image (MRI) segmentation via a novel algorithm based on genetic and regional growth. Methods Among medical imaging methods, brain MRI segmentation is important due to its non-invasive nature, high soft-tissue contrast, and high spatial resolution. Size variations of brain tissues often accompany various diseases such as Alzheimer’s disease. As our knowledge about the relation between various brain diseases and deviations of brain anatomy increases, MRI segmentation is exploited as the first step in early diagnosis. In this paper, a regional growth method with automated selection of initial points by a genetic algorithm is used to introduce a new method for MRI segmentation. Primary pixels and the similarity criterion are selected automatically by the genetic algorithm to maximize the accuracy and validity of the image segmentation. Results By using genetic algorithms and defining a fitness function for image segmentation, the initial points for the algorithm were found. The proposed algorithm was applied to the images, and its results were compared with those of regional growth in which the initial points were selected manually. The results showed that the proposed algorithm could reduce segmentation error effectively. Conclusion The study concluded that the proposed algorithm could reduce segmentation error effectively and help diagnose brain diseases. PMID:27672629

  4. Improving Brain Magnetic Resonance Image (MRI) Segmentation via a Novel Algorithm based on Genetic and Regional Growth.

    PubMed

    A, Javadpour; A, Mohammadi

    2016-06-01

    Given the importance of correct diagnosis in medical applications, various methods have been exploited for processing medical images so far. Segmentation is used to analyze anatomical structures in medical imaging. This study describes a new method for brain Magnetic Resonance Image (MRI) segmentation via a novel algorithm based on genetic and regional growth. Among medical imaging methods, brain MRI segmentation is important due to its non-invasive nature, high soft-tissue contrast, and high spatial resolution. Size variations of brain tissues often accompany various diseases such as Alzheimer's disease. As our knowledge about the relation between various brain diseases and deviations of brain anatomy increases, MRI segmentation is exploited as the first step in early diagnosis. In this paper, a regional growth method with automated selection of initial points by a genetic algorithm is used to introduce a new method for MRI segmentation. Primary pixels and the similarity criterion are selected automatically by the genetic algorithm to maximize the accuracy and validity of the image segmentation. By using genetic algorithms and defining a fitness function for image segmentation, the initial points for the algorithm were found. The proposed algorithm was applied to the images, and its results were compared with those of regional growth in which the initial points were selected manually. The results showed that the proposed algorithm could reduce segmentation error effectively. The study concluded that the proposed algorithm could reduce segmentation error effectively and help diagnose brain diseases.

  5. A Review of Coronary Vessel Segmentation Algorithms

    PubMed Central

    Dehkordi, Maryam Taghizadeh; Sadri, Saeed; Doosthoseini, Alimohamad

    2011-01-01

    Coronary heart disease has been one of the main threats to human health. Coronary angiography is taken as the gold standard for the assessment of coronary artery disease. However, the images are sometimes difficult to interpret visually because of the crossing and overlapping of vessels in the angiogram. Vessel extraction from X-ray angiograms has been a challenging problem for several years. There are several difficulties in the extraction of vessels, including weak contrast between the coronary arteries and the background, the unknown and easily deformable shape of the vessel tree, and strong overlapping shadows of the bones. In this article, we investigate coronary vessel extraction and enhancement techniques and present the capabilities of the most important algorithms for coronary vessel segmentation. PMID:22606658

  6. New segmentation algorithm for detecting tiny objects

    NASA Astrophysics Data System (ADS)

    Sun, Han; Yang, Jingyu; Ren, Mingwu; Gao, Jian-zhen

    2001-09-01

    Road cracks in the highway surface are dangerous to traffic, and they should be found and repaired as early as possible, so we designed a system for automatically detecting cracks in the highway surface. This system involves several key steps. For instance, the first step, image recording, requires a high-quality imaging device because of the high vehicle speed. In addition, the raw data volume is very large, so huge storage media and effective compression are needed. As illumination is greatly affected by the environment, preprocessing such as image reconstruction and enhancement is essential. Because the cracks are tiny, segmentation is rather difficult. This paper proposes a new segmentation method to detect such tiny cracks, even those only 2 mm wide. In this algorithm, edge detection is first performed to obtain seeds for subsequent line growing; false seeds are then deleted and the crack information is extracted. The method is sufficiently accurate and fast.

  7. Segmentation of color images using genetic algorithm with image histogram

    NASA Astrophysics Data System (ADS)

    Sneha Latha, P.; Kumar, Pawan; Kahu, Samruddhi; Bhurchandi, Kishor M.

    2015-02-01

    This paper proposes a family of color image segmentation algorithms using a genetic approach and a color similarity threshold in terms of just-noticeable difference. Instead of segmenting and then optimizing, the proposed technique directly uses a GA for optimized segmentation of color images. Because applying a GA to full-size color images is computationally heavy, it is applied to a 4D color image histogram table instead. The performance of the proposed algorithms is benchmarked on the BSD dataset against color-histogram-based segmentation and the fuzzy c-means algorithm using the Probabilistic Rand Index (PRI). The proposed algorithms yield better analytical and visual results.

  8. Segmentation precision of abdominal anatomy for MRI-based radiotherapy.

    PubMed

    Noel, Camille E; Zhu, Fan; Lee, Andrew Y; Yanle, Hu; Parikh, Parag J

    2014-01-01

    The limited soft tissue visualization provided by computed tomography, the standard imaging modality for radiotherapy treatment planning and daily localization, has motivated studies on the use of magnetic resonance imaging (MRI) for better characterization of treatment sites, such as the prostate and head and neck. However, no studies have been conducted on MRI-based segmentation for the abdomen, a site that could greatly benefit from enhanced soft tissue targeting. We investigated the interobserver and intraobserver precision in segmentation of abdominal organs on MR images for treatment planning and localization. Manual segmentation of 8 abdominal organs was performed by 3 independent observers on MR images acquired from 14 healthy subjects. Observers repeated segmentation 4 separate times for each image set. Interobserver and intraobserver contouring precision was assessed by computing 3-dimensional overlap (Dice coefficient [DC]) and distance to agreement (Hausdorff distance [HD]) of segmented organs. The mean and standard deviation of intraobserver and interobserver DC and HD values were DC(intraobserver) = 0.89 ± 0.12, HD(intraobserver) = 3.6 mm ± 1.5, DC(interobserver) = 0.89 ± 0.15, and HD(interobserver) = 3.2 mm ± 1.4. Overall, metrics indicated good interobserver/intraobserver precision (mean DC > 0.7, mean HD < 4 mm). Results suggest that MRI offers good segmentation precision for abdominal sites. These findings support the utility of MRI for abdominal planning and localization, as emerging MRI technologies, techniques, and onboard imaging devices are beginning to enable MRI-based radiotherapy.
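The two agreement metrics used here are easy to reproduce. A sketch with NumPy/SciPy on binary masks (the study's exact implementation details are not given in the abstract):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice coefficient: 2|A and B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the pixel sets of two masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0],
               directed_hausdorff(pb, pa)[0])
```

Identical masks give DC = 1 and HD = 0; a mask shifted by one pixel diagonally gives HD = sqrt(2).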

  9. Segmentation precision of abdominal anatomy for MRI-based radiotherapy

    SciTech Connect

    Noel, Camille E.; Zhu, Fan; Lee, Andrew Y.; Yanle, Hu; Parikh, Parag J.

    2014-10-01

    The limited soft tissue visualization provided by computed tomography, the standard imaging modality for radiotherapy treatment planning and daily localization, has motivated studies on the use of magnetic resonance imaging (MRI) for better characterization of treatment sites, such as the prostate and head and neck. However, no studies have been conducted on MRI-based segmentation for the abdomen, a site that could greatly benefit from enhanced soft tissue targeting. We investigated the interobserver and intraobserver precision in segmentation of abdominal organs on MR images for treatment planning and localization. Manual segmentation of 8 abdominal organs was performed by 3 independent observers on MR images acquired from 14 healthy subjects. Observers repeated segmentation 4 separate times for each image set. Interobserver and intraobserver contouring precision was assessed by computing 3-dimensional overlap (Dice coefficient [DC]) and distance to agreement (Hausdorff distance [HD]) of segmented organs. The mean and standard deviation of intraobserver and interobserver DC and HD values were DC(intraobserver) = 0.89 ± 0.12, HD(intraobserver) = 3.6 mm ± 1.5, DC(interobserver) = 0.89 ± 0.15, and HD(interobserver) = 3.2 mm ± 1.4. Overall, metrics indicated good interobserver/intraobserver precision (mean DC > 0.7, mean HD < 4 mm). Results suggest that MRI offers good segmentation precision for abdominal sites. These findings support the utility of MRI for abdominal planning and localization, as emerging MRI technologies, techniques, and onboard imaging devices are beginning to enable MRI-based radiotherapy.

  10. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms play an important role in image segmentation research. In this paper, we briefly introduce the theory of four existing swarm intelligence-based image segmentation algorithms: the fish swarm algorithm, artificial bee colony, the bacterial foraging algorithm, and particle swarm optimization. Benchmark images are then tested to show the differences among these four algorithms in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper and Gaussian noise. Through these comparisons, the paper gives a qualitative analysis of the performance differences among the four algorithms, which should provide useful guidance for practical image segmentation.
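For concreteness, here is particle swarm optimization applied to threshold selection, a generic sketch with Otsu's between-class variance as the objective, not any of the specific variants surveyed above:

```python
import numpy as np

def pso_threshold(img, n_particles=15, n_iter=40, seed=1):
    """PSO search for the threshold maximizing between-class variance."""
    rng = np.random.default_rng(seed)

    def fitness(t):
        fg, bg = img[img >= t], img[img < t]
        if fg.size == 0 or bg.size == 0:
            return 0.0
        return (fg.size * bg.size / img.size ** 2) * (fg.mean() - bg.mean()) ** 2

    pos = np.linspace(10, 245, n_particles)   # deterministic initial spread
    vel = np.zeros(n_particles)
    pbest, pbest_f = pos.copy(), np.array([fitness(t) for t in pos])
    gbest = pbest[pbest_f.argmax()]
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # standard update: inertia + cognitive + social terms
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 1, 254)
        f = np.array([fitness(t) for t in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmax()]
    return float(gbest)
```

The other swarm methods (fish swarm, bee colony, bacterial foraging) swap in different position-update rules around the same fitness evaluation.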

  11. Image Segmentation Based on Chaos Immune Clone Selection Algorithm

    NASA Astrophysics Data System (ADS)

    Cheng, Junna; Ji, Guangrong; Feng, Chen

    Image segmentation is a fundamental step in image processing, and Otsu's threshold method is widely used for it. In this paper, a novel image segmentation method based on the chaos immune clone selection algorithm (CICSA) and Otsu's threshold method is presented. By introducing chaos optimization into the parallel and distributed search mechanism of the immune clone selection algorithm, CICSA combines global and local search ability. The experimental results demonstrate that CICSA applied to image segmentation is stable and efficient.
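Since Otsu's method anchors this work, a compact reference implementation may help; this is the standard exhaustive version, independent of the CICSA optimization:

```python
import numpy as np

def otsu_threshold(img):
    """Exhaustive Otsu threshold: maximize between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # probability of class 0 (<= t)
    mu = np.cumsum(p * np.arange(256))      # first moment up to t
    mu_t = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)        # 0/0 at the extremes -> 0
    return int(np.argmax(sigma_b))
```

Swarm or chaos-based searches such as CICSA replace the exhaustive argmax with an optimizer, which matters mainly for the multi-level (multi-threshold) case.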

  12. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before an optical character recognition (OCR) operation. In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg [1], which is also available in his open-source Leptonica library [2]. The modifications result in significant improvements and achieve better segmentation accuracy than the original algorithm on the UW-III, UNLV, and ICDAR 2009 page segmentation competition test images and on circuit diagram datasets.

  13. Research of the multimodal brain-tumor segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Yisu; Chen, Wufan

    2015-12-01

    It is well known that the number of clusters is one of the most important parameters for automatic segmentation, yet it is difficult to define owing to the high diversity in the appearance of tumor tissue among patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment tumor images, so segmentation can be performed without initializing the number of clusters. A new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed. Beyond the segmentation of single-modality brain tumor images, we extended the algorithm to segment multimodal brain tumor images using multimodal magnetic resonance (MR) features, obtaining the active tumor and the edema at the same time. The proposed algorithm is evaluated and compared with other approaches; its accuracy and computation time demonstrate impressive performance.

  14. 3D automatic anatomy segmentation based on iterative graph-cut-ASM

    SciTech Connect

    Chen, Xinjian; Bagci, Ulas

    2011-08-15

    Purpose: This paper studies the feasibility of developing an automatic anatomy segmentation (AAS) system in clinical radiology and demonstrates its operation on clinical 3D images. Methods: The AAS system the authors are developing consists of two main parts: object recognition and object delineation. For recognition, a hierarchical 3D scale-based multiobject method is used, which incorporates intensity-weighted ball-scale (b-scale) information into the active shape model (ASM). For object delineation, an iterative graph-cut-ASM (IGCASM) algorithm is proposed, which effectively combines the rich statistical shape information embodied in the ASM with the globally optimal delineation capability of the GC method. The presented IGCASM algorithm is a 3D generalization of the 2D GC-ASM method that they proposed previously in Chen et al. [Proc. SPIE, 7259, 72590C1-72590C-8 (2009)]. The proposed methods are tested on two datasets comprising clinical abdominal CT scans from 20 patients (10 male and 10 female) and 11 foot magnetic resonance imaging (MRI) scans. The test covers segmentation of four organs (liver, left and right kidneys, and spleen) and five foot bones (calcaneus, tibia, cuboid, talus, and navicular). The recognition and delineation accuracies were evaluated separately. The recognition accuracy was evaluated in terms of translation, rotation, and scale (size) error. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF, FPVF). The efficiency of the delineation method was also evaluated on an Intel Pentium IV PC with a 3.4 GHz CPU. Results: The recognition accuracies in terms of translation, rotation, and scale error are about 8 mm, 10 deg., and 0.03 over all organs, and about 3.5709 mm, 0.35 deg., and 0.025 over all foot bones, respectively.
The accuracy of delineation over all organs for all subjects as expressed in TPVF and FPVF is 93.01% and 0.22%, and
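The delineation metrics can be sketched as follows. Note that FPVF is computed here against the complement of the ground truth within the image, whereas the authors' framework normalizes by a chosen reference region, so this is an approximation:

```python
import numpy as np

def tpvf_fpvf(seg, truth):
    """True/false positive volume fractions of a binary segmentation.

    TPVF: fraction of the true object volume that was captured.
    FPVF: fraction of the non-object volume falsely labelled as object
          (here: relative to the whole-image background).
    """
    seg, truth = seg.astype(bool), truth.astype(bool)
    tpvf = np.logical_and(seg, truth).sum() / truth.sum()
    fpvf = np.logical_and(seg, ~truth).sum() / (~truth).sum()
    return float(tpvf), float(fpvf)
```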

  15. Comparative testing of DNA segmentation algorithms using benchmark simulations.

    PubMed

    Elhaik, Eran; Graur, Dan; Josic, Kresimir

    2010-05-01

    Numerous segmentation methods for the detection of compositionally homogeneous domains within genomic sequences have been proposed. Unfortunately, these methods yield inconsistent results. Here, we present a benchmark consisting of two sets of simulated genomic sequences for testing the performances of segmentation algorithms. Sequences in the first set are composed of fixed-sized homogeneous domains, distinct in their between-domain guanine and cytosine (GC) content variability. The sequences in the second set are composed of a mosaic of many short domains and a few long ones, distinguished by sharp GC content boundaries between neighboring domains. We use these sets to test the performance of seven segmentation algorithms in the literature. Our results show that recursive segmentation algorithms based on the Jensen-Shannon divergence outperform all other algorithms. However, even these algorithms perform poorly in certain instances because of the arbitrary choice of a segmentation-stopping criterion.
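The recursive Jensen-Shannon approach can be illustrated directly. A minimal sketch (symbols coded 0-3 for A, C, G, T; the divergence threshold and stopping rule are simplified relative to published methods):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def js_divergence(seq, cut):
    """Jensen-Shannon divergence between compositions left/right of cut."""
    n = len(seq)

    def comp(s):
        vals, counts = np.unique(s, return_counts=True)
        p = np.zeros(4)                  # A, C, G, T mapped to 0..3
        p[vals] = counts / len(s)
        return p

    pl, pr = comp(seq[:cut]), comp(seq[cut:])
    w1, w2 = cut / n, 1 - cut / n
    return entropy(w1 * pl + w2 * pr) - w1 * entropy(pl) - w2 * entropy(pr)

def recursive_segment(seq, lo=0, threshold=0.1, min_len=20, cuts=None):
    """Recursively cut at the position of maximal JS divergence."""
    if cuts is None:
        cuts = []
    if len(seq) < 2 * min_len:
        return cuts
    d = [js_divergence(seq, c) for c in range(min_len, len(seq) - min_len)]
    best = int(np.argmax(d)) + min_len
    if d[best - min_len] < threshold:    # stop: segment is homogeneous enough
        return cuts
    cuts.append(lo + best)
    recursive_segment(seq[:best], lo, threshold, min_len, cuts)
    recursive_segment(seq[best:], lo + best, threshold, min_len, cuts)
    return sorted(cuts)
```

On a sequence made of two homogeneous halves, the first cut lands exactly at the boundary and the recursion then stops on both sides.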

  16. CT segmentation of dental shapes by anatomy-driven reformation imaging and B-spline modelling.

    PubMed

    Barone, S; Paoli, A; Razionale, A V

    2016-06-01

    Dedicated imaging methods are among the most important tools of modern computer-aided medical applications. In the last few years, cone beam computed tomography (CBCT) has gained popularity in digital dentistry for 3D imaging of jawbones and teeth. However, the anatomy of the maxillofacial region complicates the assessment of tooth geometry and anatomical location when using standard orthogonal views of the CT data set. In particular, a tooth is defined by a sub-region that cannot easily be separated from surrounding tissues by considering pixel grey-intensity values alone. For this reason, image enhancement is usually necessary in order to properly segment tooth geometries. In this paper, an anatomy-driven methodology to reconstruct individual 3D tooth anatomies by processing CBCT data is presented. The main concept is to generate a small set of multi-planar reformation images along significant views for each target tooth, driven by the individual anatomical geometry of the specific patient. The reformation images greatly enhance the clearness of the target tooth contours. A set of meaningful 2D tooth contours is extracted and used to automatically model the overall 3D tooth shape through a B-spline representation. The effectiveness of the methodology has been verified by comparing anatomy-driven reconstructions of anterior and premolar teeth with those obtained using standard tooth segmentation tools. Copyright © 2015 John Wiley & Sons, Ltd.
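The final B-spline modelling step maps naturally to SciPy's parametric spline routines. A sketch of fitting a closed 2D contour (hypothetical inputs; the paper's reformation and contour-extraction stages are assumed already done):

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_closed_bspline(points, smoothing=0.0, n_samples=200):
    """Fit a periodic (closed) cubic B-spline to 2-D contour points."""
    pts = np.vstack([points, points[:1]])        # close the loop explicitly
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=smoothing, per=True)
    u = np.linspace(0, 1, n_samples)
    xs, ys = splev(u, tck)
    return np.column_stack([xs, ys])
```

Fitting a unit circle sampled at 24 points reproduces the circle to well below pixel accuracy; raising `smoothing` trades fidelity for contour regularity.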

  17. Brain MR image segmentation improved algorithm based on probability

    NASA Astrophysics Data System (ADS)

    Liao, Hengxu; Liu, Gang; Guo, Xiantang

    2017-08-01

    Local weight voting is a current mainstream segmentation algorithm. It takes full account of the influence of the image likelihood and the label prior probabilities on the segmentation results. But this method can still be improved, since its essence is to pick the label with the maximum probability: if the probability of a label is 70%, that may be acceptable mathematically, but the actual segmentation may still be wrong. So we use a matrix completion algorithm as a supplement. When the probability from the former is larger, the result of the former algorithm is adopted; when the probability from the latter is larger, the result of the latter algorithm is adopted. This amounts to adding an automatic algorithm-selection switch that can, in theory, ensure that the accuracy of the proposed algorithm is at least that of local weight voting. At the same time, we propose an improved matrix completion algorithm based on an enumeration method. In addition, this paper uses a multi-parameter registration model to reduce the influence of registration on the segmentation. The experimental results show that the accuracy of the algorithm is better than that of common segmentation algorithms.

  18. Automatic lobar segmentation for diseased lungs using an anatomy-based priority knowledge in low-dose CT images

    NASA Astrophysics Data System (ADS)

    Park, Sang Joon; Kim, Jung Im; Goo, Jin Mo; Lee, Doohee

    2014-03-01

    Lung lobar segmentation in CT images is a challenging task because of the limitations in image quality inherent to CT acquisition, especially low-dose CT in the clinical routine environment. Moreover, complex anatomy and abnormal lesions in the lung parenchyma make segmentation difficult, because contrast in CT images is determined by the differential absorption of X-rays in neighboring structures such as tissue, vessels, or pathological regions. We therefore attempted to develop a robust segmentation technique for normal and diseased lung parenchyma. The images were obtained with low-dose chest CT using a soft reconstruction kernel (Sensation 16, Siemens, Germany). Our PC-based in-house software segmented the bronchial trees and lungs with an intensity-adaptive region-growing technique. The horizontal and oblique fissures were then detected using the eigenvalue ratio of the Hessian matrix in the lung regions, excluding airways and vessels. To enhance and recover a faithful 3-D fissure plane, our proposed fissure-enhancing scheme was applied to the images. Finally, for careful smoothing of the fissure planes, a 3-D rolling-ball algorithm was applied in the three orthogonal planes. Results show that the proposed scheme achieved a success rate of up to 89.5% in diseased lung parenchyma.
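The fissure-detection step relies on the eigenvalues of the image Hessian: on a sheet- or line-like structure, one eigenvalue is large in magnitude and the others are small, so their ratio discriminates fissures from blobs. A 2-D sketch of that computation (the paper works in 3-D; this is illustrative only):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigen_ratio(img, sigma=1.0):
    """Per-pixel Hessian eigenvalues of a smoothed 2-D image, ordered so
    that |l1| <= |l2|. Line-like structures show |l1|/|l2| close to 0."""
    sm = gaussian_filter(img.astype(float), sigma)
    gy, gx = np.gradient(sm)          # gradients along rows, columns
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # closed-form eigenvalues of [[gxx, gxy], [gxy, gyy]]
    tr = gxx + gyy
    det = gxx * gyy - gxy * gxy
    disc = np.sqrt(np.maximum(tr * tr / 4 - det, 0))
    l1, l2 = tr / 2 - disc, tr / 2 + disc
    swap = np.abs(l1) > np.abs(l2)    # order by magnitude
    l1[swap], l2[swap] = l2[swap], l1[swap]
    return l1, l2
```

On a bright horizontal line, the large eigenvalue at the centre is strongly negative (curvature across the line) while the small one is near zero (no curvature along it).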

  19. An improved FCM medical image segmentation algorithm based on MMTD.

    PubMed

    Zhou, Ningning; Yang, Tingting; Zhang, Shaobai

    2014-01-01

    Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) is one of the popular clustering algorithms for medical image segmentation, but it is highly vulnerable to noise because it does not consider spatial information. This paper introduces the medium mathematics system, which is employed to process fuzzy information for image segmentation. It establishes a medium similarity measure based on the measure of medium truth degree (MMTD) and uses the correlation between a pixel and its neighbors to define the medium membership function. An improved FCM medical image segmentation algorithm based on MMTD, which takes spatial features into account, is proposed. The experimental results show that the proposed algorithm is more robust to noise than standard FCM, with more certainty and less fuzziness. This should lead to practical and effective applications in medical image segmentation.
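For reference, plain FCM (without the MMTD spatial extension proposed here) on a 1-D intensity vector is only a few lines:

```python
import numpy as np

def fcm(data, n_clusters=2, m=2.0, n_iter=50):
    """Plain fuzzy c-means on a 1-D intensity vector (no spatial term)."""
    # spread initial centers over the data range
    centers = np.quantile(data, np.linspace(0.1, 0.9, n_clusters))
    u = None
    for _ in range(n_iter):
        d = np.abs(data[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))   # inverse-distance memberships
        u /= u.sum(axis=0)                 # each pixel's memberships sum to 1
        um = u ** m
        centers = (um @ data) / um.sum(axis=1)
    return centers, u
```

The spatial variants modify the membership update `u` using each pixel's neighbourhood, which is what makes them robust to the noise that breaks the plain version.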

  20. The PCNN adaptive segmentation algorithm based on visual perception

    NASA Astrophysics Data System (ADS)

    Zhao, Yanming

    To solve the problem of adaptive parameter determination for the pulse coupled neural network (PCNN) and to improve image segmentation results, a PCNN adaptive segmentation algorithm based on visual perception of information is proposed. Based on visually perceived image information and the Gabor mathematical model of the optic nerve cells' receptive field, the algorithm adaptively determines the receptive field of each pixel of the image, and adaptively determines the network parameters W, M, and β of the PCNN from the Gabor model, which overcomes the parameter-determination problem of the traditional PCNN in image segmentation. Experimental results show that the proposed algorithm improves the region connectivity and edge regularity of the segmented image, and demonstrate the advantage of incorporating visual perception information into the PCNN for image segmentation.

  1. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
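As a concrete, simplified illustration of the framework above, take the measure function to be set intersection and the segment difference to be the total symmetric difference between the segment's item set and each of its time points' item sets; a textbook dynamic program then yields an optimal k-segmentation (the paper's algorithms compute the difference values more efficiently than this direct version):

```python
def segment_cost(sets, i, j):
    """Difference of segment sets[i:j]: symmetric difference between the
    segment's item set (here: the intersection) and each time point's set."""
    seg = set.intersection(*sets[i:j])
    return sum(len(seg ^ s) for s in sets[i:j])

def optimal_segmentation(sets, k):
    """Split the item-set series into k segments of minimal total cost."""
    n = len(sets)
    cost = {(i, j): segment_cost(sets, i, j)
            for i in range(n) for j in range(i + 1, n + 1)}
    INF = float('inf')
    dp = [[INF] * (n + 1) for _ in range(k + 1)]  # dp[t][j]: t segments, j points
    back = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0
    for t in range(1, k + 1):
        for j in range(t, n + 1):
            for i in range(t - 1, j):
                c = dp[t - 1][i] + cost[i, j]
                if c < dp[t][j]:
                    dp[t][j], back[t][j] = c, i
    bounds, j = [], n              # walk back to recover segment boundaries
    for t in range(k, 0, -1):
        bounds.append(j)
        j = back[t][j]
    return dp[k][n], bounds[::-1]
```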

  2. A novel iris segmentation algorithm based on small eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Harish, B. S.; Aruna Kumar, S. V.; Guru, D. S.; Ngo, Minh Ngoc

    2015-12-01

    In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional fuzzy c-means (FCM) algorithm, incorporates spatial information, and uses a kernel metric as the distance measure. In the second step, the small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments were carried out on standard benchmark iris datasets (CASIA-IrisV4 and UBIRIS.v2), and we compared our proposed method with existing iris segmentation methods. Our proposed method has the lowest time complexity, O(n(i+p)). The experimental results emphasize that the proposed algorithm outperforms the existing iris segmentation methods.
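The geometric intuition behind small eigenvalue analysis is that edge pixels lying along a smooth boundary are locally near-collinear, so the smaller eigenvalue of their covariance matrix is close to zero. A minimal illustration (not the paper's full transformation):

```python
import numpy as np

def small_eigenvalue(points):
    """Smaller eigenvalue of the 2x2 covariance of a set of edge pixels.
    Near-zero values indicate locally line-like (near-collinear) points."""
    cov = np.cov(np.asarray(points, dtype=float).T)   # rows = coordinates
    return float(np.linalg.eigvalsh(cov)[0])          # eigvalsh sorts ascending
```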

  3. A region growing vessel segmentation algorithm based on spectrum information.

    PubMed

    Jiang, Huiyan; He, Baochun; Fang, Di; Ma, Zhiyuan; Yang, Benqiang; Zhang, Libo

    2013-01-01

    We propose a region growing vessel segmentation algorithm based on spectrum information. First, the algorithm applies a Fourier transform to the region of interest containing vascular structures to obtain its spectrum information, from which the primary feature direction is extracted. Edge information is then combined with the primary feature direction to compute the vascular structure's center points, which serve as the seed points for region growing segmentation. Finally, an improved region growing method with a branch-based growth strategy is used to segment the vessels. To prove the effectiveness of our algorithm, we conducted experiments on retinal images and abdominal liver vascular CT images. The results show that the proposed vessel segmentation algorithm not only extracts high-quality target vessel regions but also effectively reduces manual intervention.
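The branch-based strategy in the final step builds on ordinary seeded region growing, which is worth showing in its plain form (4-connected, running-mean homogeneity test; the spectrum-based seed selection described above is assumed to have produced the seed):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow a region from a seed pixel, accepting 4-neighbours whose
    intensity stays within `tol` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(img[nr, nc]) - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(img[nr, nc])
                    count += 1
                    q.append((nr, nc))
    return mask
```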

  4. Touching Soma Segmentation Based on the Rayburst Sampling Algorithm.

    PubMed

    Hu, Tianyu; Xu, Qiufeng; Lv, Wei; Liu, Qian

    2017-09-22

    Neuronal soma segmentation is essential for morphology quantification analysis. Rapid advances in light microscope imaging techniques have generated such massive amounts of data that time-consuming manual methods cannot meet requirements for high throughput. However, segmentation of touching somata is still a challenge for automatic methods. In this paper, we propose a soma segmentation method that combines the Rayburst sampling algorithm and ellipsoid fitting. The improved Rayburst sampling algorithm is used to detect the soma surface; the ellipsoid fitting method then refines the jagged sampled soma surface to generate smooth ellipsoidal shapes for efficient analysis. In experiments, we validated the proposed method by applying it to datasets from the fluorescence micro-optical sectioning tomography (fMOST) system. The results indicate that the proposed method is comparable to the manually segmented gold standard, with accurate soma segmentation at relatively high speed. The proposed method can be extended to large-scale image stacks in the future.

  5. Automated segment matching algorithm-theory, test, and evaluation

    NASA Technical Reports Server (NTRS)

    Kalcic, M. T. (Principal Investigator)

    1982-01-01

    Results of automating the U.S. Department of Agriculture's process of segment shifting to within one-half pixel accuracy are presented. Given an initial registration, the digitized segment is shifted until a more precise fit to the LANDSAT data is found. The algorithm automates the shifting process and performs certain tests for matching and accepting the computed shift values. Results indicate the algorithm can achieve one-half pixel accuracy.
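    The shift search itself can be sketched as an exhaustive scan over integer (row, column) offsets scored by normalized correlation; the half-pixel refinement and the acceptance tests described in the abstract are omitted, and the search radius is an assumption.

```python
import numpy as np

def best_shift(segment, window, max_shift=3):
    """Exhaustive search for the integer (row, col) shift that best
    aligns `window` to `segment`, scored by normalized correlation.
    A sketch of the shift-search flavour the abstract describes; its
    sub-pixel refinement and match-acceptance tests are omitted."""
    best, best_score = (0, 0), -np.inf
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(window, dr, axis=0), dc, axis=1)
            score = np.corrcoef(segment.ravel(), shifted.ravel())[0, 1]
            if score > best_score:
                best, best_score = (dr, dc), score
    return best
```

    Circular `np.roll` keeps the sketch short; a real implementation would crop or pad at the borders instead of wrapping.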

  6. Segmentation of kidney using C-V model and anatomy priors

    NASA Astrophysics Data System (ADS)

    Lu, Jinghua; Chen, Jie; Zhang, Juan; Yang, Wenjia

    2007-12-01

    This paper presents an approach for kidney segmentation on abdominal CT images as the first step of a virtual reality surgery system. Segmentation of medical images is often challenging because of the objects' complicated anatomical structures, varying gray levels, and unclear edges. A coarse-to-fine approach has been applied to kidney segmentation using the Chan-Vese model (C-V model) and prior anatomical knowledge. In the pre-processing stage, candidate kidney regions are located. The C-V model, formulated by the level set method, is then applied to these smaller ROIs, which reduces the computational complexity to a certain extent. Finally, after some mathematical morphology procedures, the specified kidney structures are extracted interactively with prior knowledge. The satisfying results on abdominal CT series show that the proposed approach keeps all the advantages of the C-V model and overcomes its disadvantages.
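    The piecewise-constant data term at the heart of the C-V model can be sketched without the level-set machinery: alternately estimate the two region means c1 and c2 and reassign each pixel to the nearer mean. The curvature (length) regularization of the full model is deliberately omitted, so this is a hedged illustration rather than the paper's method.

```python
import numpy as np

def chan_vese_means(image, n_iter=20):
    """Two-phase piecewise-constant segmentation: alternate between
    updating the region means c1, c2 and reassigning pixels to the
    closer mean. This is only the data term of the Chan-Vese energy;
    the curvature regularization of the full level-set evolution is
    omitted."""
    phi = image > image.mean()              # initial partition
    for _ in range(n_iter):
        c1 = image[phi].mean() if phi.any() else 0.0
        c2 = image[~phi].mean() if (~phi).any() else 0.0
        phi = (image - c1) ** 2 < (image - c2) ** 2
    return phi, c1, c2
```

    The curvature term is what keeps the real C-V contour smooth on noisy CT data; without it this sketch degenerates to two-class intensity clustering.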

  7. Robust and accurate star segmentation algorithm based on morphology

    NASA Astrophysics Data System (ADS)

    Jiang, Jie; Lei, Liu; Guangjun, Zhang

    2016-06-01

    A star tracker is an important instrument for measuring a spacecraft's attitude; it does so by matching the stars captured by a camera against those stored in a star database, whose directions are known. The attitude accuracy of a star tracker is mainly determined by star centroiding accuracy, which depends on complete star segmentation. Current star segmentation algorithms cannot suppress the various interferences in star images and therefore cannot segment stars completely. To solve this problem, a new star target segmentation algorithm is proposed on the basis of mathematical morphology. The proposed algorithm utilizes a margin structuring element to detect small targets and the opening operation to suppress noise, and a modified top-hat transform is defined to extract stars. A combination of three different structuring elements is used to define the new star segmentation algorithm, and the influence of the three structuring elements on the segmentation results is analyzed. Experimental results show that the proposed algorithm can suppress different interferences and segment stars completely, thus providing high star centroiding accuracy.
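    The background-suppression idea can be illustrated with standard grey-scale morphology: a plain white top-hat (image minus its opening) followed by a global k-sigma threshold. The paper's margin structuring element and modified top-hat transform are not reproduced; structuring-element size and k are assumptions.

```python
import numpy as np
from scipy import ndimage

def extract_stars(image, size=5, k=3.0):
    """Suppress slowly varying background with a white top-hat
    (image minus its morphological opening), then threshold the
    residual at mean + k*sigma. A generic morphological sketch, not
    the paper's modified top-hat with three structuring elements."""
    tophat = ndimage.white_tophat(image.astype(float), size=size)
    thresh = tophat.mean() + k * tophat.std()
    return tophat > thresh
```

    The opening removes any bright structure smaller than the structuring element, so point-like stars survive in the residual while smooth background gradients vanish.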

  8. Algorithms For Segmentation Of Complex-Amplitude SAR Data

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Chellappa, Ramalingam

    1993-01-01

    Several algorithms implement an improved method of segmenting highly speckled, high-resolution, complex-amplitude synthetic-aperture-radar (SAR) digitized images into regions within which backscattering characteristics are similar or homogeneous from place to place. Method provides approximate, deterministic solution by two alternative algorithms almost always converging to local minima: one, Iterative Conditional Modes (ICM) algorithm, which locally maximizes posterior probability density of region labels; other, Maximum Posterior Marginal (MPM) algorithm, which maximizes posterior marginal density of region labels at each pixel location. ICM algorithm optimizes reconstruction of underlying scene. MPM algorithm minimizes expected number of misclassified pixels, possibly better in remote sensing of natural scenes.

  9. Segmentation of thermographic images of hands using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghosh, Payel; Mitchell, Melanie; Gold, Judith

    2010-01-01

    This paper presents a new technique for segmenting thermographic images using a genetic algorithm (GA). The individuals of the GA, also known as chromosomes, consist of a sequence of parameters of a level set function. Each chromosome represents a unique segmenting contour. An initial population of segmenting contours is generated based on the learned variation of the level set parameters from training images. Each segmenting contour (an individual) is evaluated for its fitness based on the texture of the region it encloses. The fittest individuals are allowed to propagate to future generations of the GA run using selection, crossover and mutation. The dataset consists of thermographic images of the hands of patients suffering from upper extremity musculo-skeletal disorders (UEMSD). Thermographic images are acquired to study skin temperature as a surrogate for the amount of blood flow in the hands of these patients. Since entire hands are not visible in these images, segmentation of the outline of the hands is typically performed by a human. In this paper, several methods for segmenting thermographic images are compared: a Gabor-wavelet-based texture segmentation method, the level set method of segmentation, and our GA, which we term LSGA because it combines level sets with genetic algorithms. The results show a comparative evaluation of the segmentation performed by all the methods. We conclude that LSGA successfully segments entire hands in images in which the hands are only partially visible.
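    The GA machinery applied here to level-set parameters can be sketched generically. The following minimal real-coded GA uses tournament selection, one-point crossover, Gaussian mutation and elitism; the fitness function, population size and mutation scale are illustrative assumptions, not values from the paper.

```python
import numpy as np

def run_ga(fitness, n_params, pop_size=30, n_gen=40, sigma=0.3, seed=0):
    """Minimal real-coded GA: tournament selection, one-point
    crossover, Gaussian mutation, elitism. `fitness` is maximized; in
    LSGA it would score a level-set parameter vector by the texture of
    the region the contour encloses."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 1.0, size=(pop_size, n_params))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[scores.argmax()].copy()
        children = [elite]                          # elitism
        while len(children) < pop_size:
            a, b = rng.integers(pop_size, size=2)   # tournament 1
            p1 = pop[a] if scores[a] >= scores[b] else pop[b]
            a, b = rng.integers(pop_size, size=2)   # tournament 2
            p2 = pop[a] if scores[a] >= scores[b] else pop[b]
            cut = rng.integers(1, n_params) if n_params > 1 else 0
            child = np.concatenate([p1[:cut], p2[cut:]])  # crossover
            child += rng.normal(0.0, sigma, size=n_params)  # mutation
            children.append(child)
        pop = np.array(children)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()
```

    Elitism guarantees the best fitness never decreases between generations, which is why GA runs of this kind converge reliably even with a noisy mutation operator.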

  10. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features. This transforms the image for better image analysis and evaluation. An important benefit of segmentation is the identification of regions of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to its upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. The function is based on the gray values of the image's pixels and their variance. Pixel values above the threshold are converted into intensity values between 0 and 1, while the remaining values are set to zero. The proposed enhanced Fast Scanning algorithm is evaluated on images of public and private transportation in Iraq, by comparing its output with that of the standard Fast Scanning algorithm. The results show that the proposed algorithm is faster than the standard Fast Scanning algorithm.
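    One hedged reading of the adaptive threshold described above can be sketched as follows; the weighting `alpha` between the mean gray value and the standard deviation, and the rescaling by the image maximum, are assumptions for illustration, not details from the paper.

```python
import numpy as np

def adaptive_threshold_map(image, alpha=0.5):
    """Hypothetical reading of the adaptive threshold in the abstract:
    the threshold combines the image's mean gray value with its spread,
    pixels above it are rescaled into (0, 1], and all others are set
    to 0. `alpha` is an assumed weighting, not a value from the paper."""
    img = image.astype(float)
    thresh = img.mean() + alpha * img.std()
    out = np.zeros_like(img)
    above = img > thresh
    if above.any():
        out[above] = img[above] / img.max()   # rescale into (0, 1]
    return out, thresh
```

    In the enhanced Fast Scanning algorithm such a map would replace the fixed merging threshold used when comparing a pixel with its upper and left neighbors.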

  11. Segmentation algorithms for ear image data towards biomechanical studies.

    PubMed

    Ferreira, Ana; Gentil, Fernanda; Tavares, João Manuel R S

    2014-01-01

    In recent years, the segmentation, i.e. the identification, of ear structures in video-otoscopy, computerised tomography (CT) and magnetic resonance (MR) image data has gained significant importance in the medical imaging area, particularly for CT and MR imaging. Segmentation is the fundamental step of any automated technique for supporting medical diagnosis and, in particular, in biomechanics studies, for building realistic geometric models of ear structures. In this paper, a review of the algorithms used in ear segmentation is presented. The review includes an introduction to the usual biomechanical modelling approaches and also to the common imaging modalities. Afterwards, several segmentation algorithms for ear image data are described, and their specificities and difficulties as well as their advantages and disadvantages are identified and analysed using experimental examples. Finally, the conclusions are presented, together with a discussion of possible trends for future research concerning ear segmentation.

  12. Optimizing parameters of an open-source airway segmentation algorithm using different CT images.

    PubMed

    Nardelli, Pietro; Khan, Kashif A; Corvò, Alberto; Moore, Niamh; Murphy, Mary J; Twomey, Maria; O'Connor, Owen J; Kennedy, Marcus P; Estépar, Raúl San José; Maher, Michael M; Cantillon-Murphy, Pádraig

    2015-06-26

    Computed tomography (CT) helps physicians locate and diagnose pathological conditions. In some conditions, an airway segmentation method which facilitates reconstruction of the airway from chest CT images can greatly aid the assessment of lung diseases. Many efforts have been made to develop airway segmentation algorithms, but methods are usually not optimized to be reliable across different CT scan parameters. In this paper, we present a simple and reliable semi-automatic algorithm which can segment tracheal and bronchial anatomy using the open-source 3D Slicer platform. The method is based on a region growing approach in which the trachea and the right and left bronchi are cropped and segmented independently using three different thresholds. The algorithm and its parameters have been optimized to be efficient across different CT scan acquisition parameters. The performance of the proposed method has been evaluated on EXACT'09 cases and local clinical cases as well as on a breathing pig lung phantom using multiple scans and changing parameters. In particular, to investigate multiple scan parameters, the reconstruction kernel, radiation dose and slice thickness have been considered. Volume, branch count, branch length and leakage presence have been evaluated. A new method for leakage evaluation has been developed, and the correlation between segmentation metrics and CT acquisition parameters has been considered. All the considered cases have been segmented successfully with good results in terms of leakage presence. Results on clinical data are comparable to other teams' methods, as obtained by evaluation against the EXACT'09 challenge, whereas results obtained from the phantom prove the reliability of the method across multiple CT platforms and acquisition parameters. As expected, slice thickness is the parameter affecting the results the most, whereas the reconstruction kernel and radiation dose seem not to particularly affect airway segmentation. The system represents the first
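    The core of a threshold-based region-growing step can be sketched as picking the connected air-density component that contains a seed voxel. The paper optimizes three separate thresholds for the trachea and the two main bronchi; a single assumed Hounsfield threshold is used in this sketch.

```python
import numpy as np
from scipy import ndimage

def grow_airway(ct, seed, hu_threshold=-800):
    """Select the connected air-density component containing a seed
    voxel (which must lie inside the airway lumen). A single assumed
    HU threshold stands in for the paper's three optimized thresholds
    for trachea, right and left bronchus."""
    air = ct < hu_threshold                 # air-density voxels
    labels, _ = ndimage.label(air)          # 6-connected components
    return labels == labels[seed]           # keep the seed's component
```

    Separating the trachea and the two bronchi, as the paper does, amounts to running this step three times on cropped sub-volumes, each with its own threshold.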

  13. PCNN document segmentation method based on bacterial foraging optimization algorithm

    NASA Astrophysics Data System (ADS)

    Liao, Yanping; Zhang, Peng; Guo, Qiang; Wan, Jian

    2014-04-01

    Pulse Coupled Neural Network (PCNN) is widely used in the field of image processing, but properly defining its parameters is a difficult task in applications of PCNN. So far, determining the model's parameters has required extensive experimentation. To address this problem, a document segmentation method based on an improved PCNN is proposed. It uses the maximum entropy function as the fitness function of the bacterial foraging optimization algorithm, adopts the bacterial foraging optimization algorithm to search for the optimal parameters, and thus eliminates the need to set the experimental parameters manually. Experimental results show that the proposed algorithm can effectively perform document segmentation, and the segmentation result is better than that of the comparison algorithms.
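    The fitness function itself can be illustrated independently of the bacterial foraging search: a Kapur-style maximum entropy criterion scores a candidate threshold by the summed entropies of the foreground and background gray-level distributions. For a single threshold an exhaustive scan is shown here instead of the BFO loop, which is omitted.

```python
import numpy as np

def max_entropy_fitness(image, t):
    """Kapur-style entropy of a binary split at threshold t: the sum
    of the entropies of the background (< t) and foreground (>= t)
    gray-level distributions. This is the kind of objective the BFO
    search would maximize; the BFO loop itself is omitted."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    ent = 0.0
    for part in (p[:t], p[t:]):
        mass = part.sum()
        if mass > 0:
            q = part[part > 0] / mass      # normalized class distribution
            ent -= np.sum(q * np.log(q))
    return ent

def best_threshold(image):
    # exhaustive scan over all thresholds (stand-in for the BFO search)
    return max(range(1, 256), key=lambda t: max_entropy_fitness(image, t))
```

    The appeal of BFO (or any stochastic search) is for the PCNN case, where several coupled parameters are optimized at once and an exhaustive scan is no longer feasible.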

  14. A hardware implementation of a relaxation algorithm to segment images

    NASA Technical Reports Server (NTRS)

    Loda, Antonio G.; Ranganath, Heggere S.

    1988-01-01

    Relaxation labelling is a mathematical technique frequently applied in image processing algorithms. In particular, it is extensively used for the purpose of segmenting images. The paper presents a hardware implementation of a segmentation algorithm, for images consisting of two regions, based on relaxation labelling. The algorithm determines, for each pixel, the probability that it should be labelled as belonging to a particular region, for all regions in the image. The label probabilities (labellings) of every pixel are iteratively updated, based on those of the pixel's neighbors, until they converge. The pixel is then assigned to the region corresponding to the maximum label probability. The system consists of a control unit and a pipeline of segmentation stages. Each segmentation stage emulates in hardware one iteration of the relaxation algorithm. The design of the segmentation stage is based on commercially available digital signal processing integrated circuits. Multiple iterations are accomplished by stringing stages together or by looping the output of a stage, or string of stages, back to its input. The system interfaces with a generic host computer. Given the modularity of the architecture, performance can be enhanced by merely adding segmentation stages.
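    A software emulation of one relaxation stage, in a simplified two-region form: each pixel holds the probability of belonging to region 1, and every iteration blends it with the average of its 4-neighbours' probabilities. The uniform compatibilities and fixed blending weight are assumptions; the paper's hardware uses its own update rule.

```python
import numpy as np

def relax_labels(prob, n_iter=10):
    """Iteratively update each pixel's region-1 probability from its
    4-neighbours (uniform compatibilities, fixed 50/50 blend), then
    hard-assign each pixel to the more probable region. A simplified
    stand-in for the paper's relaxation update."""
    p = prob.astype(float).copy()
    for _ in range(n_iter):
        padded = np.pad(p, 1, mode='edge')
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        p = 0.5 * (p + neigh)              # one relaxation iteration
    return p > 0.5                          # final region assignment
```

    Each loop iteration corresponds to one pipeline stage in the hardware; chaining stages is exactly unrolling this loop.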

  15. Performance evaluation of image segmentation algorithms on microscopic image data.

    PubMed

    Beneš, Miroslav; Zitová, Barbara

    2015-01-01

    In our paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. In spite of the existence of many algorithms for image data partitioning, there is no universal 'best' method yet. Moreover, images of microscopic samples can vary in character and quality, which can negatively influence the performance of image segmentation algorithms. Thus, the issue of selecting a suitable method for a given set of image data is of great interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on the testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. In the end, the benefit of a segmentation combination approach is studied, and the applicability of the achieved results to another representative of the microscopic data category - biological samples - is shown.

  16. Automatic thoracic anatomy segmentation on CT images using hierarchical fuzzy models and registration

    NASA Astrophysics Data System (ADS)

    Sun, Kaioqiong; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.

    2014-03-01

    This paper proposes a thoracic anatomy segmentation method based on hierarchical recognition and delineation guided by a built fuzzy model. Labeled binary samples for each organ are registered and aligned into a 3D fuzzy set representing the fuzzy shape model for the organ. The gray intensity distributions of the corresponding regions of the organ in the original image are recorded in the model. The hierarchical relation and mean location relation between different organs are also captured in the model. Following the hierarchical structure and location relation, the fuzzy shape model of different organs is registered to the given target image to achieve object recognition. A fuzzy connected delineation method is then used to obtain the final segmentation result of organs with seed points provided by recognition. The hierarchical structure and location relation integrated in the model provide the initial parameters for registration and make the recognition efficient and robust. The 3D fuzzy model combined with hierarchical affine registration ensures that accurate recognition can be obtained for both non-sparse and sparse organs. The results on real images are presented and shown to be better than a recently reported fuzzy model-based anatomy recognition strategy.
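    At its core, the fuzzy shape model construction reduces to a voxelwise average of co-registered binary samples. A minimal sketch, assuming the registration and alignment steps have already been performed:

```python
import numpy as np

def build_fuzzy_model(aligned_masks):
    """Average co-registered binary organ samples into a voxelwise
    fuzzy membership map in [0, 1] -- the fuzzy shape model
    construction step (registration/alignment is assumed done, and
    the intensity-distribution and hierarchy bookkeeping of the full
    model are omitted)."""
    stack = np.stack([np.asarray(m, dtype=float) for m in aligned_masks])
    return stack.mean(axis=0)
```

    Voxels where all training samples agree get membership 1.0; boundary voxels get intermediate values, which is what guides the fuzzy connected delineation after recognition.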

  17. A hybrid algorithm for the segmentation of books in libraries

    NASA Astrophysics Data System (ADS)

    Hu, Zilong; Tang, Jinshan; Lei, Liang

    2016-05-01

    This paper proposes an algorithm for book segmentation based on bookshelf images. The algorithm can be separated into three parts. The first part is pre-processing, aimed at eliminating or reducing the effect of image noise and illumination conditions. The second part is near-horizontal line detection based on the Canny edge detector, which separates a bookshelf image into multiple sub-images such that each sub-image contains an individual shelf. The last part is book segmentation: in each shelf image, near-vertical lines are detected and used for book segmentation. The proposed algorithm was tested with bookshelf images taken from the OPIE library at MTU, and the experimental results demonstrate good performance.

  18. Indoor localization algorithm based on CSI regional segmentation

    NASA Astrophysics Data System (ADS)

    Zeng, Xi; Lin, Wei; Lan, Jingwei

    2017-08-01

    To address the problems of high cost and low accuracy in indoor positioning, a method based on Channel State Information (CSI) regional segmentation is proposed. Because CSI is stable and robust against multipath effects, we use it to segment the localization area. The method acquires the CSI of different links to pinpoint the region in which the target is located. In this way, the method improves positioning accuracy and reduces the cost of the fingerprint localization algorithm.

  19. Impact of Multiscale Retinex Computation on Performance of Segmentation Algorithms

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.

    2004-01-01

    Classical segmentation algorithms subdivide an image into its constituent components based upon some metric that defines commonality between pixels. Often, these metrics incorporate some measure of "activity" in the scene, e.g. the amount of detail that is in a region. The Multiscale Retinex with Color Restoration (MSRCR) is a general purpose, non-linear image enhancement algorithm that significantly affects the brightness, contrast and sharpness within an image. In this paper, we will analyze the impact the MSRCR has on segmentation results and performance.
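    A plain multiscale retinex (without the color restoration step of MSRCR) can be sketched as the average over scales of the difference between the log image and the log of its Gaussian-smoothed version; the sigma values below are typical choices from the retinex literature, not the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(image, sigmas=(15, 80, 250)):
    """Plain multiscale retinex: average over scales of
    log(image) - log(gaussian_blur(image)). The color restoration
    step of MSRCR is omitted; the sigmas are typical values, not the
    paper's."""
    img = image.astype(float) + 1.0          # avoid log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_filter(img, s))
    return out / len(sigmas)
```

    Because the output measures local contrast against a smoothed surround, flat regions map to zero while detail is amplified, which is exactly the "activity" change that alters downstream segmentation behaviour.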

  20. Mammographic images segmentation based on chaotic map clustering algorithm

    PubMed Central

    2014-01-01

    Background This work investigates the applicability of a novel clustering approach to the segmentation of mammographic digital images. The chaotic map clustering algorithm is used to group together similar subsets of image pixels resulting in a medically meaningful partition of the mammography. Methods The image is divided into pixels subsets characterized by a set of conveniently chosen features and each of the corresponding points in the feature space is associated to a map. A mutual coupling strength between the maps depending on the associated distance between feature space points is subsequently introduced. On the system of maps, the simulated evolution through chaotic dynamics leads to its natural partitioning, which corresponds to a particular segmentation scheme of the initial mammographic image. Results The system provides a high recognition rate for small mass lesions (about 94% correctly segmented inside the breast) and the reproduction of the shape of regions with denser micro-calcifications in about 2/3 of the cases, while being less effective on identification of larger mass lesions. Conclusions We can summarize our analysis by asserting that due to the particularities of the mammographic images, the chaotic map clustering algorithm should not be used as the sole method of segmentation. It is rather the joint use of this method along with other segmentation techniques that could be successfully used for increasing the segmentation performance and for providing extra information for the subsequent analysis stages such as the classification of the segmented ROI. PMID:24666766

  1. Mammographic images segmentation based on chaotic map clustering algorithm.

    PubMed

    Iacomi, Marius; Cascio, Donato; Fauci, Francesco; Raso, Giuseppe

    2014-03-25

    This work investigates the applicability of a novel clustering approach to the segmentation of mammographic digital images. The chaotic map clustering algorithm is used to group together similar subsets of image pixels resulting in a medically meaningful partition of the mammography. The image is divided into pixels subsets characterized by a set of conveniently chosen features and each of the corresponding points in the feature space is associated to a map. A mutual coupling strength between the maps depending on the associated distance between feature space points is subsequently introduced. On the system of maps, the simulated evolution through chaotic dynamics leads to its natural partitioning, which corresponds to a particular segmentation scheme of the initial mammographic image. The system provides a high recognition rate for small mass lesions (about 94% correctly segmented inside the breast) and the reproduction of the shape of regions with denser micro-calcifications in about 2/3 of the cases, while being less effective on identification of larger mass lesions. We can summarize our analysis by asserting that due to the particularities of the mammographic images, the chaotic map clustering algorithm should not be used as the sole method of segmentation. It is rather the joint use of this method along with other segmentation techniques that could be successfully used for increasing the segmentation performance and for providing extra information for the subsequent analysis stages such as the classification of the segmented ROI.
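    The coupled-map dynamics described above can be sketched as follows: each feature point carries a logistic-type map x -> 1 - 2x^2, and maps are coupled with strength exp(-d^2 / 2a^2) for pairwise feature distance d, so tightly coupled maps synchronize and clusters emerge as groups of nearly identical trajectories. The parameters, and the replacement of the usual mutual-information grouping by direct trajectory comparison, are simplifying assumptions.

```python
import numpy as np

def chaotic_map_clustering(points, a=1.0, n_steps=60, seed=0):
    """Evolve one chaotic map per feature point, coupled by a Gaussian
    kernel of the pairwise distances. Tightly coupled maps synchronize,
    so clusters show up as groups of near-identical trajectories. A
    sketch of the dynamics only; parameters are illustrative."""
    pts = np.asarray(points, dtype=float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    J = np.exp(-d2 / (2 * a * a))            # coupling matrix, J_ii = 1
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, size=len(pts))    # random initial map states
    traj = []
    for _ in range(n_steps):
        x = J @ (1 - 2 * x * x) / J.sum(axis=1)   # coupled map update
        traj.append(x.copy())
    return np.array(traj)                    # shape (n_steps, n_points)
```

    In the full algorithm the grouping is read off from the mutual information between trajectories rather than by direct comparison; that statistic is omitted here.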

  2. Multiscale Unsupervised Segmentation of SAR Imagery Using the Genetic Algorithm

    PubMed Central

    Wen, Xian-Bin; Zhang, Hua; Jiang, Ze-Tao

    2008-01-01

    A valid unsupervised and multiscale segmentation of synthetic aperture radar (SAR) imagery is proposed by a combination (GA-EM) of the Expectation Maximization (EM) algorithm with the genetic algorithm (GA). The mixture multiscale autoregressive (MMAR) model is introduced to characterize and exploit the scale-to-scale statistical variations and the statistical variations within the same scale in SAR imagery due to radar speckle, and a segmentation method is given by combining the GA with the EM algorithm. This algorithm is capable of selecting the number of components of the model using the minimum description length (MDL) criterion. Our approach benefits from the properties of the genetic and EM algorithms by combining both into a single procedure. The population-based stochastic search of the genetic algorithm (GA) explores the search space more thoroughly than the EM method. Therefore, our algorithm enables escaping from local optimal solutions, since the algorithm becomes less sensitive to its initialization. Some experimental results based on our proposed approach are given and compared to those of the EM algorithm. The experiments on SAR images show that GA-EM outperforms the EM method. PMID:27879787

  3. Multiscale Unsupervised Segmentation of SAR Imagery Using the Genetic Algorithm.

    PubMed

    Wen, Xian-Bin; Zhang, Hua; Jiang, Ze-Tao

    2008-03-12

    A valid unsupervised and multiscale segmentation of synthetic aperture radar (SAR) imagery is proposed by a combination GA-EM of the Expectation Maximization (EM) algorithm with the genetic algorithm (GA). The mixture multiscale autoregressive (MMAR) model is introduced to characterize and exploit the scale-to-scale statistical variations and statistical variations in the same scale in SAR imagery due to radar speckle, and a segmentation method is given by combining the GA algorithm with the EM algorithm. This algorithm is capable of selecting the number of components of the model using the minimum description length (MDL) criterion. Our approach benefits from the properties of the Genetic and the EM algorithm by combination of both into a single procedure. The population-based stochastic search of the genetic algorithm (GA) explores the search space more thoroughly than the EM method. Therefore, our algorithm enables escaping from local optimal solutions since the algorithm becomes less sensitive to its initialization. Some experiment results are given based on our proposed approach, and compared to that of the EM algorithms. The experiments on the SAR images show that the GA-EM outperforms the EM method.

  4. Comparison of Model-Based Segmentation Algorithms for Color Images.

    DTIC Science & Technology

    1987-03-01

    image. Hunt and Kubler [Ref. 3] found that for image restoration, Karhunen-Loève transformation followed by single channel image processing worked...Algorithm for Segmentation of Multichannel Images. M.S. Thesis, Naval Postgraduate School, Monterey, California, December 1993. 3. Hunt, B.R., Kubler 0

  5. a Review of Point Clouds Segmentation and Classification Algorithms

    NASA Astrophysics Data System (ADS)

    Grilli, E.; Menna, F.; Remondino, F.

    2017-02-01

    Today 3D models and point clouds are very popular, being used in several fields, shared through the internet, and even accessed on mobile phones. Despite their broad availability, there is still a pressing need for methods, preferably automatic, that provide 3D data with meaningful attributes characterizing and giving significance to the objects represented in 3D. Segmentation is the process of grouping point clouds into multiple homogeneous regions with similar properties, whereas classification is the step that labels these regions. The main goal of this paper is to analyse the most popular methodologies and algorithms for segmenting and classifying 3D point clouds. Strong and weak points of the different solutions presented in the literature or implemented in commercial software are listed and briefly explained. For some algorithms, the results of segmentation and classification are shown using real examples at different scales in the Cultural Heritage field. Finally, open issues and research topics are discussed.

  6. Split Bregman's algorithm for three-dimensional mesh segmentation

    NASA Astrophysics Data System (ADS)

    Habiba, Nabi; Ali, Douik

    2016-05-01

    Variational methods have attracted a lot of attention in the literature, especially for image and mesh segmentation. These methods aim at minimizing an energy to optimize both edge and region detection. We propose a spectral mesh decomposition algorithm to obtain disjoint but meaningful regions of an input mesh. The related optimization problem is nonconvex, and it is very difficult to find a good approximation or global optimum, which represents a challenge in computer vision. We propose an alternating split Bregman algorithm for mesh segmentation, extending the image-dedicated model to a three-dimensional (3-D) mesh one. By applying our scheme to 3-D mesh segmentation, we obtain fast solvers that can outperform various conventional ones, such as graph-cut and primal-dual methods. A consistent evaluation of the proposed method on various public domain 3-D databases for different metrics is elaborated, and a comparison with the state-of-the-art is performed.
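    The split Bregman scheme is easiest to see in its classic one-dimensional total-variation form, which the mesh formulation generalizes: minimize 0.5*||u - f||^2 + mu*|Du| by introducing the auxiliary variable d = Du and alternating a linear solve, a shrinkage step, and a Bregman update. A didactic sketch, not the authors' mesh solver:

```python
import numpy as np

def tv_denoise_split_bregman(f, mu=1.0, lam=1.0, n_iter=100):
    """1-D TV denoising via split Bregman:
        min_u  0.5*||u - f||^2 + mu*|Du|
    with d = Du enforced through Bregman updates b."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)          # forward-difference operator
    A = np.eye(n) + lam * D.T @ D           # system matrix for the u-step
    u = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + lam * D.T @ (d - b))   # linear solve
        t = D @ u + b
        d = np.sign(t) * np.maximum(np.abs(t) - mu / lam, 0.0)  # shrink
        b = t - d                            # Bregman update
    return u

def total_variation(x):
    return np.abs(np.diff(x)).sum()
```

    On a mesh, D becomes a discrete gradient over the mesh connectivity, but the same three-step alternation applies, which is where the reported speed advantage over graph-cut and primal-dual solvers comes from.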

  7. Performance evaluation of a texture-based segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz A.

    1991-07-01

    Texture segmentation is a crucial component of many remote sensing, scene analysis, and object recognition systems. However, very little attention has been paid to the problem of performance evaluation for the numerous algorithms that have been proposed by the image understanding community. In this paper, a particular algorithm is introduced and its performance is evaluated in a systematic manner on a wide range of scenes and scenarios. Both the algorithm and the methodology used in its evaluation have significance in numerous applications in the computer-based image understanding field.

  8. Modeling and segmentation of intra-cochlear anatomy in conventional CT

    NASA Astrophysics Data System (ADS)

    Noble, Jack H.; Rutherford, Robert B.; Labadie, Robert F.; Majdani, Omid; Dawant, Benoit M.

    2010-03-01

    Cochlear implant surgery is a procedure performed to treat profound hearing loss. Since the cochlea is not visible in surgery, the physician uses anatomical landmarks to estimate the pose of the cochlea. Research has indicated that implanting the electrode in a particular cavity of the cochlea, the scala tympani, results in better hearing restoration. The success of the scala tympani implantation is largely dependent on the point of entry and angle of electrode insertion. Errors can occur due to the imprecise nature of landmark-based, manual navigation as well as inter-patient variations between scala tympani and the anatomical landmarks. In this work, we use point distribution models of the intra-cochlear anatomy to study the inter-patient variations between the cochlea and the typical anatomic landmarks, and we implement an active shape model technique to automatically localize intra-cochlear anatomy in conventional CT images, where intra-cochlear structures are not visible. This fully automatic segmentation could aid the surgeon to choose the point of entry and angle of approach to maximize the likelihood of scala tympani insertion, resulting in more substantial hearing restoration.

  9. AMASS: Algorithm for MSI Analysis by Semi-supervised Segmentation

    PubMed Central

    Bruand, Jocelyne; Alexandrov, Theodore; Sistla, Srinivas; Wisztorski, Maxence; Meriaux, Céline; Becker, Michael; Salzet, Michel; Fournier, Isabelle; Macagno, Eduardo; Bafna, Vineet

    2011-01-01

    Mass Spectrometric Imaging (MSI) is a molecular imaging technique that allows the generation of 2D ion density maps for a large complement of the active molecules present in cells and sectioned tissues. Automatic segmentation of such maps according to patterns of co-expression of individual molecules can be used for discovery of novel molecular signatures (molecules that are specifically expressed in particular spatial regions). However, current segmentation techniques are biased towards the discovery of higher abundance molecules and large segments; they allow limited opportunity for user interaction and validation is usually performed by similarity to known anatomical features. We describe here a novel method, AMASS (Algorithm for MSI Analysis by Semi-supervised Segmentation). AMASS relies on the discriminating power of a molecular signal instead of its intensity as a key feature, uses an internal consistency measure for validation, and allows significant user interaction and supervision as options. An automated segmentation of entire leech embryo data images resulted in segmentation domains congruent with many known organs, including heart, CNS ganglia, nephridia, nephridiopores, and lateral and ventral regions, each with a distinct molecular signature. Likewise, segmentation of a rat brain MSI slice data set yielded known brain features, and provided interesting examples of co-expression between distinct brain regions. AMASS represents a new approach for the discovery of peptide masses with distinct spatial features of expression. PMID:21800894

  10. Fully automatic algorithm for segmenting full human diaphragm in non-contrast CT Images

    NASA Astrophysics Data System (ADS)

    Karami, Elham; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas

    2015-03-01

    The diaphragm is a sheet of muscle that separates the thorax from the abdomen and acts as the most important muscle of the respiratory system. As such, an accurate segmentation of the diaphragm not only provides key information for functional analysis of the respiratory system, but can also be used for locating other abdominal organs such as the liver. However, diaphragm segmentation is extremely challenging in non-contrast CT images due to the diaphragm's similar appearance to other abdominal organs. In this paper, we present a fully automatic algorithm for diaphragm segmentation in non-contrast CT images. The method is mainly based on a priori knowledge of human diaphragm anatomy. The diaphragm domes are in contact with the lungs and the heart, while its circumference runs along the lumbar vertebrae of the spine as well as the inferior border of the ribs and sternum. As such, the diaphragm can be delineated by segmenting these organs and then properly connecting the relevant parts of their outlines. More specifically, the bottom surfaces of the lungs and heart, the spine borders and the ribs are delineated, leading to a set of scattered points which represent the diaphragm's geometry. Next, a B-spline filter is used to find the smoothest surface that passes through these points. This algorithm was tested on a non-contrast CT image of a lung cancer patient. The results indicate an average Hausdorff distance of 2.96 mm between the automatically and manually segmented diaphragms, which implies favourable accuracy.
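
    As a one-dimensional stand-in for the B-spline surface fit above, Chaikin corner cutting is a convenient sketch: repeated corner cutting of a polygon through the scattered points converges to its quadratic B-spline curve. This illustrates the smoothing idea only; it is not the paper's surface filter.

```python
def chaikin(points, iterations=3):
    """Chaikin corner cutting: each refinement replaces every edge (p, q)
    with the two points 3/4*p + 1/4*q and 1/4*p + 3/4*q.  The limit curve
    is the quadratic B-spline of the original control polygon."""
    pts = [tuple(map(float, p)) for p in points]
    for _ in range(iterations):
        refined = []
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        pts = refined
    return pts
```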

  11. Sensitivity field distributions for segmental bioelectrical impedance analysis based on real human anatomy

    NASA Astrophysics Data System (ADS)

    Danilov, A. A.; Kramarenko, V. K.; Nikolaev, D. V.; Rudnev, S. G.; Salamatova, V. Yu; Smirnov, A. V.; Vassilevski, Yu V.

    2013-04-01

    In this work, an adaptive unstructured tetrahedral mesh generation technology is applied to the simulation of segmental bioimpedance measurements using a high-resolution whole-body model of the Visible Human Project man. Sensitivity field distributions are obtained for a conventional tetrapolar configuration as well as for eight- and ten-electrode measurement configurations. Based on the ten-electrode configuration, we suggest an algorithm for monitoring changes in the upper lung area.

  12. CONVERGENCE BEHAVIOR OF THE ACTIVE MASK SEGMENTATION ALGORITHM

    PubMed Central

    Balcan, Doru C.; Srinivasa, Gowri; Fickus, Matthew; Kovačević, Jelena

    2010-01-01

    We study the convergence behavior of the Active Mask (AM) framework, originally designed for segmenting punctate image patterns. AM combines the flexibility of traditional active contours, the statistical modeling power of region-growing methods, and the computational efficiency of multiscale and multiresolution methods. Additionally, it achieves experimental convergence to zero-change (fixed-point) configurations, a desirable property for segmentation algorithms. At its core lies a voting-based distributing function which behaves as a majority cellular automaton. This paper proposes an empirical measure correlated with the convergence behavior of AM, and provides sufficient theoretical conditions on the smoothing filter operator to enforce convergence. PMID:20657795
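
    The voting-based distributing function can be illustrated as a majority cellular automaton on a label grid. A minimal sketch (not the AM implementation; names are ours):

```python
def majority_step(grid):
    """One voting iteration: each cell takes the majority label in its
    3x3 neighbourhood (itself included); ties keep the old label.
    A grid that no longer changes is a fixed point of the automaton."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            votes = {}
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols:
                        votes[grid[ni][nj]] = votes.get(grid[ni][nj], 0) + 1
            best = max(votes.values())
            winners = [lab for lab, v in votes.items() if v == best]
            if len(winners) == 1:
                out[i][j] = winners[0]
    return out
```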

  13. Joint graph cut and relative fuzzy connectedness image segmentation algorithm.

    PubMed

    Ciesielski, Krzysztof Chris; Miranda, Paulo A V; Falcão, Alexandre X; Udupa, Jayaram K

    2013-12-01

    We introduce an image segmentation algorithm, called GC(sum)(max), which combines, in a novel manner, the strengths of two popular algorithms: Relative Fuzzy Connectedness (RFC) and (standard) Graph Cut (GC). We show, both theoretically and experimentally, that GC(sum)(max) preserves the robustness of RFC with respect to the seed choice (thus avoiding the "shrinking problem" of GC), while keeping GC's stronger control over the problem of "leaking through poorly defined boundary segments." The analysis of GC(sum)(max) is greatly facilitated by our recent theoretical results showing that RFC can be described within the framework of Generalized GC (GGC) segmentation algorithms. In our implementation of GC(sum)(max) we use, as a subroutine, a version of the RFC algorithm (based on the Image Forest Transform) that runs (provably) in linear time with respect to the image size. This results in GC(sum)(max) running in a time close to linear. An experimental comparison of GC(sum)(max) to GC, an iterative version of RFC (IRFC), and power watershed (PW), based on a variety of medical and non-medical images, indicates the superior accuracy of GC(sum)(max) over these other methods, resulting in a rank ordering of GC(sum)(max) > PW ∼ IRFC > GC.
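
    The RFC half of GC(sum)(max) rests on max-min path strengths: a pixel's connectedness to a seed is the strength of its best path, and a path is only as strong as its weakest affinity. A small graph-based sketch (our naming; the paper uses a linear-time Image Forest Transform rather than a heap):

```python
import heapq

def connectedness(affinity, seed):
    """Widest-path (max-min) connectivity strengths from a seed on a graph
    given as {node: [(neighbour, affinity), ...]}."""
    strength = {seed: 1.0}
    heap = [(-1.0, seed)]
    while heap:
        s, u = heapq.heappop(heap)
        s = -s
        if s < strength.get(u, 0.0):
            continue
        for v, a in affinity[u]:
            cand = min(s, a)           # path strength = weakest link
            if cand > strength.get(v, 0.0):
                strength[v] = cand
                heapq.heappush(heap, (-cand, v))
    return strength

def rfc_label(affinity, seeds):
    """Relative fuzzy connectedness: assign each node to the seed with the
    strictly larger connectivity strength (ties stay unlabelled)."""
    maps = {lab: connectedness(affinity, s) for lab, s in seeds.items()}
    labels = {}
    for node in affinity:
        scored = sorted(((m.get(node, 0.0), lab) for lab, m in maps.items()),
                        reverse=True)
        if len(scored) == 1 or scored[0][0] > scored[1][0]:
            labels[node] = scored[0][1]
    return labels
```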

  14. Magnetic resonance segmentation with the bubble wave algorithm

    NASA Astrophysics Data System (ADS)

    Cline, Harvey E.; Ludke, Siegwalt

    2003-05-01

    A new bubble wave algorithm provides automatic segmentation of three-dimensional magnetic resonance images of both the peripheral vasculature and the brain. Simple connectivity algorithms are not reliable in these medical applications because there are unwanted connections through background noise. The bubble wave algorithm restricts connectivity using curvature by testing spherical regions on a propagating active contour to eliminate noise bridges. After the user places seeds in both the selected regions and the regions that are not desired, the method finds the critical threshold for segmentation using binary search. Today, peripheral vascular disease is diagnosed using magnetic resonance imaging with a timed contrast bolus. A new blood pool contrast agent, MS-325 (Epix Medical), binds to albumin in the blood and provides high-resolution three-dimensional images of both arteries and veins. The bubble wave algorithm provides a means to automatically suppress the veins that obscure the arteries in magnetic resonance angiography. Monitoring brain atrophy is needed for trials of drugs that retard the progression of dementia. The brain volume is measured by placing seeds in both the brain and the scalp to find the critical threshold that prevents connections between the brain volume and the scalp. Examples of both three-dimensional magnetic resonance brain images and contrast-enhanced vascular images were segmented with minimal user intervention.
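
    The binary search for the critical threshold can be sketched directly: raise the threshold until flood fill no longer connects the kept seed to the rejected one. This is a toy 2D version with our names; the actual bubble wave additionally restricts connectivity by curvature.

```python
def connected(image, thresh, start, goal):
    """Flood fill over pixels with intensity >= thresh (4-connectivity);
    report whether start reaches goal."""
    rows, cols = len(image), len(image[0])
    if image[start[0]][start[1]] < thresh:
        return False
    seen, stack = {start}, [start]
    while stack:
        i, j = stack.pop()
        if (i, j) == goal:
            return True
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < rows and 0 <= nj < cols and (ni, nj) not in seen \
                    and image[ni][nj] >= thresh:
                seen.add((ni, nj))
                stack.append((ni, nj))
    return False

def critical_threshold(image, keep_seed, reject_seed, lo=0, hi=255):
    """Binary search for the smallest integer threshold at which the kept
    and rejected seeds fall into separate connected components."""
    while lo < hi:
        mid = (lo + hi) // 2
        if connected(image, mid, keep_seed, reject_seed):
            lo = mid + 1          # still bridged: raise the threshold
        else:
            hi = mid
    return lo
```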

  15. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing

    PubMed Central

    Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing

    2017-01-01

    Remote sensing technologies have been widely applied in urban environments’ monitoring, synthesis and modeling. Incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the “salt and pepper” phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is more and more popularly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism leads to energy terms locality and relativity, and thus, the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC could achieve consistent performance in different image regions. In addition, the Probability Density Function (PDF), which is estimated by Kernel Density Estimation (KDE) with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to only handle boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm with the other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while the computation time-efficiency is still competitive. PMID:28604641
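
    The KDE color model used by SSLC can be sketched in one dimension: the density at a value is the average of Gaussian kernels centred on the samples. This is illustrative only; SSLC applies the estimate to superpixel color distributions.

```python
import math

def gaussian_kde(samples, bandwidth=1.0):
    """Kernel density estimate with a Gaussian kernel: the PDF at x is the
    average of Gaussians of width `bandwidth` centred on the samples."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return pdf
```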

  17. Modified cuckoo search algorithm in microscopic image segmentation of hippocampus.

    PubMed

    Chakraborty, Shouvik; Chatterjee, Sankhadeep; Dey, Nilanjan; Ashour, Amira S; Ashour, Ahmed S; Shi, Fuqian; Mali, Kalyani

    2017-10-01

    Microscopic image analysis is a challenging task due to the presence of weak correlation and different segments of interest that may lead to ambiguity; it is also valuable in foremost fields of technology and medicine. Identification and counting of cells play a vital role in feature extraction for diagnosing particular diseases precisely, and the different segments must be identified accurately in order to identify and count cells in a microscope image. Consequently, in the current work, a novel method for cell segmentation and identification that incorporates cell marking has been proposed. The method, based on cuckoo search (CS) applied after a pre-processing step, is developed and evaluated on light microscope images of rat hippocampus, used as a sample of brain cells, and can be applied to color images directly. The proposed approach incorporates McCulloch's method for Lévy flight generation in the CS algorithm. In the cuckoo search process, Otsu's between-class variance, Kapur's entropy and Tsallis entropy are employed as the objective functions to be optimized for segmentation. Experimental results are validated by different metrics, namely the peak signal-to-noise ratio (PSNR), mean square error, feature similarity index and CPU running time, for all test cases. The experimental results established that Kapur's entropy segmentation based on the modified CS required the least computational time compared to Otsu's between-class variance segmentation and Tsallis entropy segmentation. Nevertheless, the Tsallis entropy method with optimized multi-threshold levels achieved superior performance compared to the other two segmentation methods in terms of PSNR. © 2017 Wiley Periodicals, Inc.
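
    One of the objective functions named above, Kapur's entropy, can be computed exhaustively for a single threshold. The paper optimizes multi-level versions with the modified cuckoo search instead of this brute-force scan; the function name is ours.

```python
import math

def kapur_threshold(hist):
    """Kapur's entropy criterion: choose the threshold t (bins < t count as
    background) maximising the sum of the entropies of the normalised
    background and foreground histograms."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(hist)):
        w0 = sum(p[:t])
        w1 = 1.0 - w0
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(q / w0 * math.log(q / w0) for q in p[:t] if q > 0)
        h1 = -sum(q / w1 * math.log(q / w1) for q in p[t:] if q > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t
```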

  18. Iris Segmentation and Normalization Algorithm Based on Zigzag Collarette

    NASA Astrophysics Data System (ADS)

    Rizky Faundra, M.; Ratna Sulistyaningrum, Dwi

    2017-01-01

    In this paper, we propose an iris segmentation and normalization algorithm based on the zigzag collarette. First, iris images are processed with Canny edge detection to detect the pupil edge, and the center and radius of the pupil are found with the Hough circle transform. Next, the important part of the iris is isolated based on the zigzag collarette area. Finally, the Daugman rubber sheet model is applied to obtain a normalized iris of fixed dimensions by transforming Cartesian into polar coordinates, and a thresholding technique is used to remove the eyelid and eyelashes. The experiment is conducted on grayscale eye images taken from the iris database of the Chinese Academy of Sciences Institute of Automation (CASIA), a reliable data set widely used to study iris biometrics. The results show that a threshold level of 0.3 gives better accuracy than the others, so the present algorithm can be used to segment and normalize the zigzag collarette with an accuracy of 98.88%.
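
    The Daugman rubber sheet step can be sketched as a polar resampling of the iris annulus onto a fixed-size rectangle. This uses nearest-neighbour sampling and our parameter names; real systems interpolate and mask occlusions.

```python
import math

def rubber_sheet(image, cx, cy, r_pupil, r_iris, n_radial=8, n_angular=16):
    """Daugman rubber sheet model: sample the annulus between the pupil and
    iris radii on a fixed (radius, angle) grid, mapping it to a rectangle of
    constant size regardless of pupil dilation."""
    out = []
    for i in range(n_radial):
        r = r_pupil + (r_iris - r_pupil) * i / (n_radial - 1)
        row = []
        for j in range(n_angular):
            theta = 2 * math.pi * j / n_angular
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            x = min(max(x, 0), len(image[0]) - 1)   # clamp to image bounds
            y = min(max(y, 0), len(image) - 1)
            row.append(image[y][x])
        out.append(row)
    return out
```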

  19. Facial Skin Segmentation Using Bacterial Foraging Optimization Algorithm

    PubMed Central

    Bakhshali, Mohamad Amin; Shamsi, Mousa

    2012-01-01

    Nowadays, analyzing human facial images has gained ever-increasing importance due to its various applications. Image segmentation is a very important and fundamental operation for meaningful analysis and interpretation of images. Among segmentation methods, image thresholding is one of the most well-known due to its simplicity, robustness, and high precision, and thresholding based on optimization of an objective function is among the best of these methods. Numerous methods exist for the optimization process, and bacterial foraging optimization (BFO) is among the most efficient and novel ones. Using this method, the optimal threshold is extracted and segmentation of the facial skin is then performed. In the proposed method, the color facial image is first converted from the RGB color space to the Improved Hue-Luminance-Saturation (IHLS) color space, because IHLS provides a good mapping of skin color. Thresholding is performed with an entropy-based method, and BFO is used to find the optimum threshold. To analyze the proposed algorithm, color images from the database of Sahand University of Technology of Tabriz, Iran were used, and thresholding was also performed using the Otsu and Kapur methods. For further comparison, a genetic algorithm (GA) was also used to find the optimum threshold. The proposed method shows better results than the other thresholding methods: misclassification error accuracy (88%), non-uniformity accuracy (89%), and region's area error accuracy (89%). PMID:23724370

  1. Accurate colon residue detection algorithm with partial volume segmentation

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Liang, Zhengrong; Zhang, PengPeng; Kutcher, Gerald J.

    2004-05-01

    Colon cancer is the second leading cause of cancer-related death in the United States. Earlier detection and removal of polyps can dramatically reduce the chance of developing a malignant tumor. Due to some limitations of the optical colonoscopy used in the clinic, many researchers have developed virtual colonoscopy as an alternative technique, in which accurate colon segmentation is crucial. However, the partial volume effect and the existence of residue make this very challenging. The electronic colon cleaning technique proposed by Chen et al. is a very attractive method, which is also a kind of hard segmentation method. As mentioned in their paper, some artifacts were produced, which might affect accurate colon reconstruction. In our paper, instead of labeling each voxel with a unique label or tissue type, the percentages of different tissues within each voxel, which we call a mixture, were considered in establishing a maximum a posteriori probability (MAP) image-segmentation framework. A Markov random field (MRF) model was developed to reflect the spatial information of the tissue mixtures. Spatial information based on hard segmentation was used to determine which tissue types are present in a specific voxel. The parameters of each tissue class were estimated by the expectation-maximization (EM) algorithm during the MAP tissue-mixture segmentation. Real CT experimental results demonstrated that the partial volume effects between four tissue types were precisely detected. Meanwhile, the residue was electronically removed, and a very smooth and clean interface along the colon wall was obtained.
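
    The EM parameter estimation at the heart of the mixture segmentation can be illustrated with a 1D two-Gaussian mixture. This omits the MRF spatial prior that the paper adds, fixes a shared sigma, and uses our own names.

```python
import math

def em_two_gaussians(data, mu0, mu1, sigma=1.0, iters=50):
    """EM for a two-component Gaussian mixture with fixed, shared sigma.
    The E-step computes per-point membership probabilities (the analogue of
    per-voxel tissue fractions); the M-step updates the means and the
    mixing weight."""
    pi = 0.5
    for _ in range(iters):
        resp = []
        for x in data:
            g0 = (1 - pi) * math.exp(-0.5 * ((x - mu0) / sigma) ** 2)
            g1 = pi * math.exp(-0.5 * ((x - mu1) / sigma) ** 2)
            resp.append(g1 / (g0 + g1))          # P(class 1 | x)
        pi = sum(resp) / len(data)
        w1 = sum(resp)
        w0 = len(data) - w1
        mu1 = sum(r * x for r, x in zip(resp, data)) / w1
        mu0 = sum((1 - r) * x for r, x in zip(resp, data)) / w0
    return mu0, mu1, pi
```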

  2. Do Three-dimensional Visualization and Three-dimensional Printing Improve Hepatic Segment Anatomy Teaching? A Randomized Controlled Study.

    PubMed

    Kong, Xiangxue; Nie, Lanying; Zhang, Huijian; Wang, Zhanglin; Ye, Qiang; Tang, Lei; Li, Jianyi; Huang, Wenhua

    2016-01-01

    Hepatic segment anatomy is difficult for medical students to learn. Three-dimensional visualization (3DV) is a useful tool in anatomy teaching, but current models do not capture haptic qualities. However, three-dimensional printing (3DP) can produce highly accurate, complex physical models. Therefore, in this study we aimed to develop a novel 3DP hepatic segment model and compare the teaching effectiveness of a 3DV model, a 3DP model, and a traditional anatomical atlas. A healthy candidate (female, 50 years old) was recruited and scanned with computed tomography. After three-dimensional (3D) reconstruction, computed 3D images of the hepatic structures were obtained. The parenchyma model was divided into 8 hepatic segments to produce the 3DV hepatic segment model, and the computed 3DP model was designed by removing the surrounding parenchyma and leaving the segmental partitions. Then, 6 experts evaluated the 3DV and 3DP models using a 5-point Likert scale, and a randomized controlled trial was conducted to evaluate the educational effectiveness of these models compared with that of the traditional anatomical atlas. The 3DP model successfully displayed the hepatic segment structures with partitions. All experts agreed or strongly agreed that the 3D models provided good realism for anatomical instruction, with no significant differences between the 3DV and 3DP models on any index (p > 0.05). Additionally, the teaching results show that the 3DV and 3DP models were significantly better than the traditional anatomical atlas in the first and second examinations (p < 0.05), and between the first and second examinations, only the traditional method group showed a significant decline (p < 0.05). A novel 3DP hepatic segment model was successfully developed, and both the 3DV and 3DP models could improve anatomy teaching significantly. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  3. Microsurgical anatomy of the extracerebral segment of recurrent artery of Heubner in the Mexican population.

    PubMed

    Gasca-González, Oscar Octavio; Delgado-Reyes, Luis; Pérez-Cruz, Julio César

    2011-01-01

    The recurrent artery of Heubner (RAH) commonly originates from the anterior cerebral artery. Its extracerebral segment is directed toward the anterior perforated substance, where it penetrates the cortex. The RAH was dissected in 15 human brains from the Mexican population, and the presence, length, branches, course and variants of either the RAH or the anterior communicating artery complex were reported. The RAH was found in 93% of the hemispheres and was duplicated in 39% of them. The RAH was duplicated in at least one hemisphere in 46.6% of the brains; 40% of the brains had an RAH in every hemisphere, and it was duplicated in every hemisphere in 20%. In 26.6% of the brains, a single artery was found in one hemisphere and a duplicated artery in the other. With a length between 13.6 and 36.7 mm (mean: 24.2 mm) and giving rise to 1-9 branches (mean: 3.9), the RAH originated from the juxtacommunicating segment in 44% of cases, from A2 in 41%, from A1 in 5% and as a branch of the frontopolar artery in 10%. Its course was oblique in 38%, L-shaped in 31%, sinuous in 18% and inverted-L in 13%. In 53.3% of the brains, some variant of the anterior communicating artery complex was found. Because of the common anatomy of the RAH and its variants, the probability of finding it duplicated must be considered; therefore, minute dissection of the region is necessary to locate the RAH or confirm its absence.

  4. Sampling protein conformations using segment libraries and a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Gunn, John R.

    1997-03-01

    We present a new simulation algorithm for minimizing empirical contact potentials for a simplified model of protein structure. The model consists of backbone atoms only (including Cβ) with the φ and ψ dihedral angles as the only degrees of freedom. In addition, φ and ψ are restricted to a finite set of 532 discrete pairs of values, and the secondary structural elements are held fixed in ideal geometries. The potential function consists of a look-up table based on discretized inter-residue atomic distances. The minimization consists of two principal elements: the use of preselected lists of trial moves and the use of a genetic algorithm. The trial moves consist of substitutions of one or two complete loop regions, and the lists are in turn built up using preselected lists of randomly-generated three-residue segments. The genetic algorithm consists of mutation steps (namely, the loop replacements), as well as a hybridization step in which new structures are created by combining parts of two "parents'' and a selection step in which hybrid structures are introduced into the population. These methods are combined into a Monte Carlo simulated annealing algorithm which has the overall structure of a random walk on a restricted set of preselected conformations. The algorithm is tested using two types of simple model potential. The first uses global information derived from the radius of gyration and the rms deviation to drive the folding, whereas the second is based exclusively on distance-geometry constraints. The hierarchical algorithm significantly outperforms conventional Monte Carlo simulation for a set of test proteins in both cases, with the greatest advantage being for the largest molecule having 193 residues. When tested on a realistic potential function, the method consistently generates structures ranked lower than the crystal structure. The results also show that the improved efficiency of the hierarchical algorithm exceeds that which would be anticipated
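
    The mutation/hybridization/selection loop described above can be sketched for a toy discrete energy. Integer genes stand in for the discrete (phi, psi) pairs; the loop structure, rates and names are ours, not the paper's.

```python
import random

def genetic_minimize(energy, n_genes, pool_size=30, generations=200, seed=1):
    """Toy genetic minimizer: a population of discrete 'conformations'
    evolves by mutation (replace one gene), hybridization (single-point
    crossover of two parents) and selection (keep the lowest-energy
    individuals)."""
    rng = random.Random(seed)
    pool = [[rng.randint(0, 9) for _ in range(n_genes)]
            for _ in range(pool_size)]
    for _ in range(generations):
        children = []
        for _ in range(pool_size):
            if rng.random() < 0.5:                 # mutation move
                child = rng.choice(pool)[:]
                child[rng.randrange(n_genes)] = rng.randint(0, 9)
            else:                                  # hybridization move
                a, b = rng.sample(pool, 2)
                cut = rng.randrange(1, n_genes)
                child = a[:cut] + b[cut:]
            children.append(child)
        # selection: elitist truncation over parents plus children
        pool = sorted(pool + children, key=energy)[:pool_size]
    return min(pool, key=energy)
```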

  5. Aberrant Lower Extremity Arterial Anatomy in Microvascular Free Fibula Flap Candidates: Management Algorithm and Case Presentations.

    PubMed

    Golas, Alyssa R; Levine, Jamie P; Ream, Justin; Rodriguez, Eduardo D

    2016-10-14

    An accurate and comprehensive understanding of lower extremity arterial anatomy is essential for the successful harvest and transfer of a free fibula osteoseptocutaneous flap (FFF). Minimum preoperative evaluation includes detailed history and physical including lower extremity pulse examination. Controversy exists regarding whether preoperative angiographic imaging should be performed for all patients. Elevation of an FFF necessitates division of the peroneal artery in the proximal lower leg and eradicates its downstream flow. For patients in whom the peroneal artery comprises the dominant arterial supply to the foot, FFF elevation is contraindicated. Detailed preoperative knowledge of patient-specific lower extremity arterial anatomy can help to avoid ischemia or limb loss resulting from FFF harvest. If preoperative angiographic imaging is omitted, careful attention must be paid to intraoperative anatomy. Should pedal perfusion rely on the peroneal artery, reconstructive options other than an FFF must be pursued. Given the complexity of surgical decision making, the authors propose an algorithm to guide the surgeon from the preoperative evaluation of the potential free fibula flap patient to the final execution of the surgical plan. The authors also provide 3 clinical patients in whom aberrant lower extremity anatomy was encountered and describe each patient's surgical course.

  7. Guaranteeing Convergence of Iterative Skewed Voting Algorithms for Image Segmentation

    PubMed Central

    Balcan, Doru C.; Srinivasa, Gowri; Fickus, Matthew; Kovačević, Jelena

    2012-01-01

    In this paper we provide rigorous proof for the convergence of an iterative voting-based image segmentation algorithm called Active Masks. Active Masks (AM) was proposed to solve the challenging task of delineating punctate patterns of cells from fluorescence microscope images. Each iteration of AM consists of a linear convolution composed with a nonlinear thresholding; what makes this process special in our case is the presence of additive terms whose role is to “skew” the voting when prior information is available. In real-world implementation, the AM algorithm always converges to a fixed point. We study the behavior of AM rigorously and present a proof of this convergence. The key idea is to formulate AM as a generalized (parallel) majority cellular automaton, adapting proof techniques from discrete dynamical systems. PMID:22984338

  8. Breast mass contour segmentation algorithm in digital mammograms.

    PubMed

    Berber, Tolga; Alpkocak, Adil; Balci, Pinar; Dicle, Oguz

    2013-05-01

    Many computer aided diagnosis (CAD) systems help the radiologist with the difficult task of mass detection in a breast mammogram and, in addition, provide an interpretation of the detected mass. One of the most crucial pieces of information about a mass is its shape and contour, since these provide valuable information about its ability to spread. However, the accuracy of shape recognition of a mass is highly related to the precision of the detected mass contours. In this work, we introduce a new segmentation algorithm, breast mass contour segmentation, based on the classical seed region growing algorithm, which enhances the contour of a mass from a given region of interest and is able to adjust the threshold value adaptively. The new approach is evaluated on a dataset of 260 masses whose contours were manually annotated by expert radiologists. The performance of the method is evaluated with respect to a set of evaluation metrics, such as specificity, sensitivity, balanced accuracy, and the Yasnoff and Hausdorff error distances. The results obtained from the experiments show that our method outperforms the other compared methods. All findings and details of the approach are presented. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
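
    Classical seed region growing, the starting point of the proposed method, can be sketched as breadth-first absorption of neighbours that stay near the running region mean. A fixed tolerance is used here; the paper's contribution is adapting that threshold.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Seeded region growing: starting from the seed pixel, absorb
    4-connected neighbours whose intensity is within `tol` of the running
    mean of the region grown so far."""
    rows, cols = len(image), len(image[0])
    region = {seed}
    total = float(image[seed[0]][seed[1]])
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        mean = total / len(region)
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < rows and 0 <= nj < cols and (ni, nj) not in region \
                    and abs(image[ni][nj] - mean) <= tol:
                region.add((ni, nj))
                total += image[ni][nj]
                queue.append((ni, nj))
    return region
```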

  9. K-region-based Clustering Algorithm for Image Segmentation

    NASA Astrophysics Data System (ADS)

    Kumar, R.; Arthanariee, A. M.

    2013-12-01

    In this paper, the authors propose a K-region-based clustering algorithm that performs clustering in K regions of a given image of size N × N, where K and N are powers of 2 and K < N. The given image is divided into 4, 16, 64, 256, 1024, 4096 or 16384 regions depending on the value of K. Adjacent pixels of similar intensity value are grouped into the same cluster in each region, and clusters of similar values in adjacent regions are then grouped together to form bigger clusters. Different segmented images are obtained based on the K number of regions; these segmented images are useful for image understanding. Four parameters are used to evaluate and analyze the performance of the K-region-based clustering algorithm: probabilistic Rand index, variation of information, global consistency error and boundary displacement error.
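
    The two-stage grouping can be sketched with union-find: a first pass joins adjacent similar pixels inside each block, and a second pass merges clusters across block borders. A square image, a fixed tolerance and our names are assumed; this is a sketch of the idea, not the authors' algorithm.

```python
def k_region_cluster(image, k_side, tol):
    """Split an N x N image into k_side x k_side blocks, cluster adjacent
    similar pixels inside each block, then merge clusters across block
    borders.  Returns an integer label map."""
    n = len(image)
    parent = {(i, j): (i, j) for i in range(n) for j in range(n)}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]   # path halving
            p = parent[p]
        return p

    def union(p, q):
        parent[find(p)] = find(q)

    block = n // k_side
    # edges between 4-connected neighbours of similar intensity
    edges = [((i, j), (ni, nj))
             for i in range(n) for j in range(n)
             for ni, nj in ((i + 1, j), (i, j + 1))
             if ni < n and nj < n and abs(image[i][j] - image[ni][nj]) <= tol]

    def same_block(p, q):
        return p[0] // block == q[0] // block and p[1] // block == q[1] // block

    for p, q in edges:            # pass 1: cluster inside each block
        if same_block(p, q):
            union(p, q)
    for p, q in edges:            # pass 2: merge clusters across borders
        if not same_block(p, q):
            union(p, q)

    labels = {}
    return [[labels.setdefault(find((i, j)), len(labels)) for j in range(n)]
            for i in range(n)]
```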

  10. Practical contour segmentation algorithm for small animal digital radiography image

    NASA Astrophysics Data System (ADS)

    Zheng, Fang; Hui, Gong

    2008-12-01

    In this paper a practical, automated contour segmentation technique for digital radiography images is described. Digital radiography is an imaging mode based on the penetration of x-rays. Unlike reflective imaging modes such as a visible-light camera, the brightness of each pixel represents the sum of the attenuations along the photon path; the result is not a color photograph but a gray-scale image. Contour extraction is of great importance in medical applications, especially in non-destructive inspection. Manual segmentation techniques include pixel selection, geometric boundary selection, and tracing, but they rely heavily on the experience of the operator and are time-consuming. Some researchers attempt to find contours from the intensity discontinuities around them; however, such discontinuities also occur at the junction of bone and soft tissue. A practical approach is to return to the basic thresholding algorithm, and this research emphasizes how to find the optimal threshold. A high-resolution digital radiography system is used to provide the original gray-scale images, and a mouse is used as the sample to demonstrate the feasibility of the algorithm.
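
    A standard way to find an optimal global threshold, in the spirit described here, is Otsu's method, which maximizes the between-class variance of the gray-level histogram. The sketch below is a generic illustration of that idea, not the authors' exact procedure.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the gray-level histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    sum_bg = 0.0
    w_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]            # background weight: pixels <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg        # foreground weight: pixels > t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```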

  11. Crowdsourcing the creation of image segmentation algorithms for connectomics

    PubMed Central

    Arganda-Carreras, Ignacio; Turaga, Srinivas C.; Berger, Daniel R.; Cireşan, Dan; Giusti, Alessandro; Gambardella, Luca M.; Schmidhuber, Jürgen; Laptev, Dmitry; Dwivedi, Sarvesh; Buhmann, Joachim M.; Liu, Ting; Seyedhosseini, Mojtaba; Tasdizen, Tolga; Kamentsky, Lee; Burget, Radim; Uher, Vaclav; Tan, Xiao; Sun, Changming; Pham, Tuan D.; Bas, Erhan; Uzunbas, Mustafa G.; Cardona, Albert; Schindelin, Johannes; Seung, H. Sebastian

    2015-01-01

    To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This “deep learning” approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge. PMID:26594156

  12. Crowdsourcing the creation of image segmentation algorithms for connectomics.

    PubMed

    Arganda-Carreras, Ignacio; Turaga, Srinivas C; Berger, Daniel R; Cireşan, Dan; Giusti, Alessandro; Gambardella, Luca M; Schmidhuber, Jürgen; Laptev, Dmitry; Dwivedi, Sarvesh; Buhmann, Joachim M; Liu, Ting; Seyedhosseini, Mojtaba; Tasdizen, Tolga; Kamentsky, Lee; Burget, Radim; Uher, Vaclav; Tan, Xiao; Sun, Changming; Pham, Tuan D; Bas, Erhan; Uzunbas, Mustafa G; Cardona, Albert; Schindelin, Johannes; Seung, H Sebastian

    2015-01-01

    To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This "deep learning" approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.

  13. Bladder segmentation in MR images with watershed segmentation and graph cut algorithm

    NASA Astrophysics Data System (ADS)

    Blaffert, Thomas; Renisch, Steffen; Schadewaldt, Nicole; Schulz, Heinrich; Wiemker, Rafael

    2014-03-01

    Prostate and cervix cancer diagnosis and treatment planning based on MR images benefit from superior soft-tissue contrast compared to CT images. For these images, an automatic delineation of the prostate or cervix and of organs at risk such as the bladder is highly desirable. This paper describes a method for bladder segmentation based on a watershed transform applied to high image-gradient values and gray-value valleys, together with the classification of watershed regions into bladder contents and tissue by a graph cut algorithm. The obtained results are superior to those of a simple region-after-region classification.

  14. The implement of Talmud property allocation algorithm based on graphic point-segment way

    NASA Astrophysics Data System (ADS)

    Cen, Haifeng

    2017-04-01

    Guided by the theory of the Talmud allocation scheme, this paper analyzes the algorithm's implementation from the perspective of a graphical point-segment representation and designs a point-segment Talmud property allocation algorithm. The core of the allocation algorithm is then implemented in Java, with an Android application providing a visual interface.
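
    The Talmud allocation scheme referenced here is the Aumann-Maschler rule for dividing an estate among claimants. Since the paper's Java/Android code is not reproduced, the following Python sketch illustrates the rule itself (the point-segment graphics are omitted): below half the total claim, run constrained equal awards (CEA) on the half-claims; above it, grant each half-claim and split the excess by constrained equal losses (CEL).

```python
def cea(claims, estate):
    """Constrained equal awards: everyone receives the same amount,
    capped at their own claim, until the estate is exhausted."""
    awards = [0.0] * len(claims)
    remaining = estate
    active = [i for i in range(len(claims)) if claims[i] > 0]
    while remaining > 1e-12 and active:
        share = remaining / len(active)
        for i in list(active):
            give = min(share, claims[i] - awards[i])
            awards[i] += give
            remaining -= give
        active = [i for i in active if claims[i] - awards[i] > 1e-12]
    return awards

def talmud(claims, estate):
    """Aumann-Maschler Talmud rule, using the duality
    CEL(c, E) = c - CEA(c, sum(c) - E)."""
    half = [c / 2 for c in claims]
    total = sum(claims)
    if estate <= total / 2:
        return cea(half, estate)
    losses = cea(half, total - estate)   # CEL via CEA duality
    return [c - l for c, l in zip(claims, losses)]
```

    With claims (100, 200, 300), the rule reproduces the famous Talmud awards: equal division (33⅓ each) for an estate of 100, (50, 75, 75) for 200, and the proportional (50, 100, 150) for 300.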

  15. Anatomy-based three-dimensional dose optimization in brachytherapy using multiobjective genetic algorithms.

    PubMed

    Lahanas, M; Baltas, D; Zamboglou, N

    1999-09-01

    In conventional dose optimization algorithms in brachytherapy, multiple objectives are expressed in terms of an aggregating function that combines individual objective values into a single utility value, making the problem single-objective prior to optimization. A multiobjective genetic algorithm (MOGA) was developed for dose optimization based on an a posteriori approach, leaving the decision-making process to the planner and offering a representative trade-off surface of the various objectives. The MOGA is a flexible search engine that supplies the maximum of information to a decision maker. Tests performed with various treatment plans in brachytherapy have shown that MOGA gives solutions superior to those of traditional dose optimization algorithms. Objectives were proposed in terms of the COIN distribution and differential volume histograms, taking patient anatomy into account in the optimization process.
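
    The trade-off surface that an a posteriori MOGA presents to the planner is the Pareto (non-dominated) front. Extracting that front from a set of candidate objective vectors can be sketched as below (illustrative only; all objectives are assumed to be minimized):

```python
def pareto_front(points):
    """Return the non-dominated subset of objective vectors: a point
    is kept unless some other point is no worse in every objective
    and strictly better in at least one."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```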

  16. Evaluation of synthetic aperture radar image segmentation algorithms in the context of automatic target recognition

    NASA Astrophysics Data System (ADS)

    Xue, Kefu; Power, Gregory J.; Gregga, Jason B.

    2002-11-01

    Image segmentation is a process that extracts and organizes the information in the image pixel space according to a prescribed feature set. It is often a key preprocessing step in automatic target recognition (ATR) algorithms, and in many cases the performance of image segmentation algorithms has a significant impact on the performance of ATR algorithms. Due to variations in feature set definitions and innovations in the segmentation processes, a large number of image segmentation algorithms exist in the ATR field. Recently, the authors investigated a number of measures for evaluating the performance of segmentation algorithms, such as Percentage Pixels Same (pps), Partial Directed Hausdorff (pdh), and Complex Inner Product (cip). In that research, we found that the combination of the three measures is effective for evaluating segmentation algorithms against truth data (human master segmentations). However, we still do not know what impact those measures have on the performance of ATR algorithms, which is commonly measured by probability of detection (PDet), probability of false alarm (PFA), probability of identification (PID), etc. In all practical situations, ATR boxes are deployed without a human observer in the loop, so the performance of synthetic aperture radar (SAR) image segmentation should be evaluated in the context of ATR rather than of human observers. This research establishes a segmentation algorithm evaluation suite involving both segmentation algorithm performance measures and ATR algorithm performance measures. It provides a practical quantitative method for judging which SAR image segmentation algorithm is best for a particular ATR application. The results are tabulated for some baseline ATR algorithms and a typical image segmentation algorithm used in ATR applications.
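
    Two of the cited measures can be sketched directly. The definitions below are common formulations and may differ in detail from the authors' versions; in particular, the partial directed Hausdorff distance replaces the maximum nearest-neighbor distance with a ranked quantile of it, which makes the measure robust to outlier pixels.

```python
def percentage_pixels_same(seg_a, seg_b):
    """Fraction of pixels labeled identically by two segmentations."""
    same = sum(1 for a, b in zip(seg_a, seg_b) if a == b)
    return same / len(seg_a)

def partial_directed_hausdorff(pts_a, pts_b, frac=0.9):
    """Ranked (rather than maximum) distance from each point of A to
    its nearest point of B; frac=1.0 recovers the ordinary directed
    Hausdorff distance."""
    dists = sorted(min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                       for bx, by in pts_b)
                   for ax, ay in pts_a)
    k = max(0, min(len(dists) - 1, int(frac * len(dists)) - 1))
    return dists[k]
```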

  17. A Wavelet Relational Fuzzy C-Means Algorithm for 2D Gel Image Segmentation

    PubMed Central

    Rashwan, Shaheera; Faheem, Mohamed Talaat; Sarhan, Amany; Youssef, Bayumy A. B.

    2013-01-01

    One of the best-known algorithms in the area of image segmentation is the Fuzzy C-Means (FCM) algorithm. It has been used in many applications such as data analysis, pattern recognition, and image segmentation, and it has the advantage of producing high-quality segmentations compared to other available algorithms. Many modifications have been made to improve its segmentation quality. The segmentation algorithm proposed in this paper is based on the Fuzzy C-Means algorithm, to which the relational fuzzy notion and the wavelet transform are added to enhance its performance, particularly for 2D gel images. Both proposed modifications aim to minimize the oversegmentation error incurred by previous algorithms. Experimental results comparing both the Fuzzy C-Means (FCM) and the Wavelet Fuzzy C-Means (WFCM) algorithms to the proposed algorithm on real 2D gel images acquired from human leukemias, HL-60 cell lines, and fetal alcohol syndrome (FAS) demonstrate the improvement achieved by the proposed algorithm in overcoming the segmentation error. In addition, we investigate the effect of denoising on the three algorithms; this investigation shows that denoising the 2D gel image before segmentation can, in most cases, improve the quality of the segmentation. PMID:24174990
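
    The baseline FCM iteration (without the paper's relational and wavelet extensions) can be sketched on scalar data; the deterministic spread initialization of the centers is an illustrative choice, not part of the standard algorithm:

```python
def fcm(data, c=2, m=2.0, iters=100):
    """Baseline Fuzzy C-Means on scalar data: alternate updating the
    fuzzy memberships u[i][j] and the cluster centers; m > 1 controls
    the fuzziness of the partition."""
    lo, hi = min(data), max(data)
    centers = [lo + j * (hi - lo) / (c - 1) for j in range(c)]  # spread init
    u = [[0.0] * c for _ in data]
    for _ in range(iters):
        # Membership update: inverse-distance weighting, exponent 2/(m-1).
        for i, x in enumerate(data):
            d = [abs(x - ck) + 1e-12 for ck in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0))
                                    for dk in d)
        # Center update: membership-weighted mean of the data.
        for j in range(c):
            w = [u[i][j] ** m for i in range(len(data))]
            centers[j] = sum(wi * x for wi, x in zip(w, data)) / sum(w)
    return sorted(centers)
```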

  18. A wavelet relational fuzzy C-means algorithm for 2D gel image segmentation.

    PubMed

    Rashwan, Shaheera; Faheem, Mohamed Talaat; Sarhan, Amany; Youssef, Bayumy A B

    2013-01-01

    One of the best-known algorithms in the area of image segmentation is the Fuzzy C-Means (FCM) algorithm. It has been used in many applications such as data analysis, pattern recognition, and image segmentation, and it has the advantage of producing high-quality segmentations compared to other available algorithms. Many modifications have been made to improve its segmentation quality. The segmentation algorithm proposed in this paper is based on the Fuzzy C-Means algorithm, to which the relational fuzzy notion and the wavelet transform are added to enhance its performance, particularly for 2D gel images. Both proposed modifications aim to minimize the oversegmentation error incurred by previous algorithms. Experimental results comparing both the Fuzzy C-Means (FCM) and the Wavelet Fuzzy C-Means (WFCM) algorithms to the proposed algorithm on real 2D gel images acquired from human leukemias, HL-60 cell lines, and fetal alcohol syndrome (FAS) demonstrate the improvement achieved by the proposed algorithm in overcoming the segmentation error. In addition, we investigate the effect of denoising on the three algorithms; this investigation shows that denoising the 2D gel image before segmentation can, in most cases, improve the quality of the segmentation.

  19. Conditional random pattern algorithm for LOH inference and segmentation.

    PubMed

    Wu, Ling-Yun; Zhou, Xiaobo; Li, Fuhai; Yang, Xiaorong; Chang, Chung-Che; Wong, Stephen T C

    2009-01-01

    Loss of heterozygosity (LOH) is one of the most important mechanisms in tumor evolution. LOH can be detected from the genotypes of tumor samples with or without paired normal samples. In the paired-sample case, LOH detection for informative single nucleotide polymorphisms (SNPs) is straightforward if there are no genotyping errors. But genotyping errors are unavoidable, and about 70% of SNPs are non-informative, so their LOH status can only be inferred from neighboring informative SNPs. This article presents a novel LOH inference and segmentation algorithm based on the conditional random pattern (CRP) model. The new model explicitly considers the distance between neighboring SNPs, as well as the genotyping error rate and the heterozygosity rate. The new method is tested on simulated and real data from Affymetrix Human Mapping 500K SNP arrays. The experimental results show that the CRP method outperforms conventional methods based on the hidden Markov model (HMM). Software is available upon request.

  20. CFD- and Bernoulli-based pressure drop estimates: A comparison using patient anatomies from heart and aortic valve segmentation of CT images.

    PubMed

    Weese, Jürgen; Lungu, Angela; Peters, Jochen; Weber, Frank M; Waechter-Stehle, Irina; Hose, D Rodney

    2017-06-01

    An aortic valve stenosis is an abnormal narrowing of the aortic valve (AV). It impedes blood flow and is often quantified by the geometric orifice area of the AV (AVA) and the pressure drop (PD). Using the Bernoulli equation, a relation between the PD and the effective orifice area (EOA) represented by the area of the vena contracta (VC) downstream of the AV can be derived. We investigate the relation between the AVA and the EOA using patient anatomies derived from cardiac computed tomography (CT) angiography images and computational fluid dynamic (CFD) simulations. We developed a shape-constrained deformable model for segmenting the AV, the ascending aorta (AA), and the left ventricle (LV) in cardiac CT images. In particular, we designed a structured AV mesh model, trained the model on CT scans, and integrated it with an available model for heart segmentation. The planimetric AVA was determined from the cross-sectional slice with minimum AV opening area. In addition, the AVA was determined as the nonobstructed area along the AV axis by projecting the AV leaflet rims on a plane perpendicular to the AV axis. The flow rate was derived from the LV volume change. Steady-state CFD simulations were performed on the patient anatomies resulting from segmentation. Heart and valve segmentation was used to retrospectively analyze 22 cardiac CT angiography image sequences of patients with noncalcified and (partially) severely calcified tricuspid AVs. Resulting AVAs were in the range of 1-4.5 cm² and ejection fractions (EFs) between 20 and 75%. AVA values computed by projection were smaller than those computed by planimetry, and both were strongly correlated (R² = 0.995). EOA values computed via the Bernoulli equation from CFD-based PD results were strongly correlated with both AVA values (R² = 0.97). EOA values were ∼10% smaller than planimetric AVA values. For EOA values < 2.0 cm², the EOA was up to ∼15% larger than the projected AVA. The presented segmentation

  1. Linear segmentation algorithm for detecting layer boundary with lidar.

    PubMed

    Mao, Feiyue; Gong, Wei; Logan, Timothy

    2013-11-04

    The automatic detection of aerosol- and cloud-layer boundaries (base and top) is important in atmospheric lidar data processing, because the boundary information is not only useful for environment and climate studies, but can also be used as input for further data processing. Previous methods have shown limitations in defining the base and top and in setting the window size, and have neglected in-layer attenuation. To overcome these limitations, we present a new layer detection scheme for up-looking lidars based on linear segmentation with a reasonable threshold setting, boundary-selection, and false-positive-removal strategies. Preliminary results from both real and simulated data show that this algorithm can not only detect the layer base as accurately as the simple multi-scale method, but can also detect the layer top more accurately than the simple multi-scale method. Our algorithm can be applied directly to uncalibrated data without requiring any additional measurements or window-size selections.
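
    A highly simplified sketch of slope-based boundary detection on a lidar profile is shown below. It fits short local line segments and flags the base where the slope first rises above a threshold and the top where it next falls below the negative threshold; the window size and threshold are illustrative, and the paper's boundary-selection and false-positive-removal strategies are not reproduced.

```python
def local_slope(y, i, w=2):
    """Least-squares slope of y over the window [i-w, i+w]."""
    xs = range(max(0, i - w), min(len(y), i + w + 1))
    n = len(xs)
    mx = sum(xs) / n
    my = sum(y[j] for j in xs) / n
    num = sum((j - mx) * (y[j] - my) for j in xs)
    den = sum((j - mx) ** 2 for j in xs)
    return num / den

def detect_layer(profile, thresh=0.5):
    """Return (base, top) indices: base where the local slope first
    exceeds +thresh, top where it next drops below -thresh."""
    base = top = None
    for i in range(len(profile)):
        s = local_slope(profile, i)
        if base is None and s > thresh:
            base = i
        elif base is not None and top is None and s < -thresh:
            top = i
            break
    return base, top
```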

  2. Algorithm based on marker-controlled watershed transform for overlapping plant fruit segmentation

    NASA Astrophysics Data System (ADS)

    Zeng, Qingbing; Miao, Yubin; Liu, Chengliang; Wang, Shiping

    2009-02-01

    Overlapping is a major problem for machine vision applications in agriculture. We present a robust marker-controlled watershed transform algorithm to automatically perform the accurate segmentation of overlapping plant fruits. The marker-controlled watershed algorithm mainly involves image preprocessing, marker extraction, and watershed transform. Marker extraction is the most important and difficult step of the whole process. Using K-means clustering, cut point decision making, spline interpolating, and morphological processing, markers can be detected automatically. Due to the good localization performance of detected markers, the accurate contour of separated fruits can be extracted by the watershed transform based on detected markers. The face validity of the segmentation algorithm is tested with a set of grape images, and segmentation results are overlaid onto original images for visual inspection. The algorithm is further evaluated by comparing segmentation results with a "gold standard" established by professional agronomists. Quantitative comparison shows that the segmentation algorithm can obtain very good spatial segmentation results.

  3. Computer algorithms for three-dimensional measurement of humeral anatomy: analysis of 140 paired humeri.

    PubMed

    Vlachopoulos, Lazaros; Dünner, Celestine; Gass, Tobias; Graf, Matthias; Goksel, Orcun; Gerber, Christian; Székely, Gábor; Fürnstahl, Philipp

    2016-02-01

    In the presence of severe osteoarthritis, osteonecrosis, or proximal humeral fracture, the contralateral humerus may serve as a template for the 3-dimensional (3D) preoperative planning of reconstructive surgery. The purpose of this study was to develop algorithms for performing 3D measurements of the humeral anatomy and further to assess side-to-side (bilateral) differences in humeral head retrotorsion, humeral head inclination, humeral length, and humeral head radius and height. The 3D models of 140 paired humeri (70 cadavers) were extracted from computed tomographic data. Geometric characteristics quantifying the humeral anatomy in 3D were determined in a semiautomatic fashion using the developed computer algorithms. The results between the sides were compared for evaluating bilateral differences. The mean bilateral difference of the humeral retrotorsion angle was 6.7° (standard deviation [SD], 5.7°; range, -15.1° to 24.0°; P = .063); the mean side difference of the humeral head inclination angle was 2.3° (SD, 1.8°; range, -5.1° to 8.4°; P = .12). The side difference in humeral length (mean, 2.9 mm; SD, 2.5 mm; range, -8.7 mm to 10.1 mm; P = .04) was significant. The mean side difference in the head sphere radius was 0.5 mm (SD, 0.6 mm; range, -3.2 mm to 2.2 mm; P = .76), and the mean side difference in humeral head height was 0.8 mm (SD, 0.6 mm; range, -2.4 mm to 2.4 mm; P = .44). The contralateral anatomy may serve as a reliable reconstruction template for humeral length, humeral head radius, and humeral head height if it is analyzed with 3D algorithms. In contrast, determining humeral head retrotorsion and humeral head inclination from the contralateral anatomy may be more prone to error. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  4. Kidney segmentation in CT sequences using SKFCM and improved GrowCut algorithm

    PubMed Central

    2015-01-01

    Background Organ segmentation is an important step in computer-aided diagnosis and pathology detection. Accurate kidney segmentation in abdominal computed tomography (CT) sequences is an essential and crucial task for surgical planning and navigation in kidney tumor ablation. However, kidney segmentation in CT is a substantially challenging task because the intensity values of kidney parenchyma are similar to those of adjacent structures. Results In this paper, a coarse-to-fine method was applied to segment the kidney from CT images; it consists of two stages, rough segmentation and refined segmentation. The rough segmentation is based on a kernel fuzzy C-means algorithm with spatial information (SKFCM) and the refined segmentation is implemented with an improved GrowCut (IGC) algorithm. The SKFCM algorithm introduces a kernel function and a spatial constraint into the fuzzy C-means clustering (FCM) algorithm. The IGC algorithm makes good use of the spatial continuity of CT sequences, which allows it to generate the seed labels automatically and improves the efficiency of segmentation. Experiments performed on a whole dataset of abdominal CT images have shown that the proposed method is accurate and efficient. The method provides a sensitivity of 95.46% with a specificity of 99.82% and performs better than other related methods. Conclusions Our method achieves high accuracy in kidney segmentation and considerably reduces the time and labor required for contour delineation. In addition, the method can be extended to 3D segmentation directly without modification. PMID:26356850

  5. Anatomy-Based Algorithms for Detecting Oral Cancer Using Reflectance and Fluorescence Spectroscopy

    PubMed Central

    McGee, Sasha; Mardirossian, Vartan; Elackattu, Alphi; Mirkovic, Jelena; Pistey, Robert; Gallagher, George; Kabani, Sadru; Yu, Chung-Chieh; Wang, Zimmern; Badizadegan, Kamran; Grillone, Gregory; Feld, Michael S.

    2010-01-01

    Objectives We used reflectance and fluorescence spectroscopy to noninvasively and quantitatively distinguish benign from dysplastic/malignant oral lesions. We designed diagnostic algorithms to account for differences in the spectral properties among anatomic sites (gingiva, buccal mucosa, etc). Methods In vivo reflectance and fluorescence spectra were collected from 71 patients with oral lesions. The tissue was then biopsied and the specimen evaluated by histopathology. Quantitative parameters related to tissue morphology and biochemistry were extracted from the spectra. Diagnostic algorithms specific for combinations of sites with similar spectral properties were developed. Results Discrimination of benign from dysplastic/malignant lesions was most successful when algorithms were designed for individual sites (area under the receiver operator characteristic curve [ROC-AUC], 0.75 for the lateral surface of the tongue) and was least accurate when all sites were combined (ROC-AUC, 0.60). The combination of sites with similar spectral properties (floor of mouth and lateral surface of the tongue) yielded an ROC-AUC of 0.71. Conclusions Accurate spectroscopic detection of oral disease must account for spectral variations among anatomic sites. Anatomy-based algorithms for single sites or combinations of sites demonstrated good diagnostic performance in distinguishing benign lesions from dysplastic/malignant lesions and consistently performed better than algorithms developed for all sites combined. PMID:19999369

  6. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    PubMed

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony memory and the probabilities of candidate values until the iterations converge to an optimal solution. Accordingly, this study proposed a modified algorithm to improve its efficiency. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This optimal value was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation of the improved algorithm was superior to that of the original fuzzy clustering method.
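
    The basic Harmony Search loop (without the paper's rough-set modification) can be sketched as follows. The memory size, hmcr, par, and pitch bandwidth are conventional illustrative settings, not values taken from the paper.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    """Minimize f over box bounds with basic Harmony Search: keep a
    memory of hms solutions; improvise a new harmony by picking each
    variable from memory (rate hmcr, with pitch-adjustment rate par)
    or sampling it at random; replace the worst memory entry whenever
    the new harmony is better."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(x) for x in memory]
    bw = [(hi - lo) * 0.05 for lo, hi in bounds]   # pitch bandwidth
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:              # pitch adjustment
                    v += rng.uniform(-bw[d], bw[d])
            else:
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]
```

    For MRI segmentation, f would score a candidate set of cluster centers or thresholds; here a simple quadratic illustrates the search.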

  7. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm

    PubMed Central

    Yang, Zhang; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony memory and the probabilities of candidate values until the iterations converge to an optimal solution. Accordingly, this study proposed a modified algorithm to improve its efficiency. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This optimal value was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428

  8. Understanding Spatially Complex Segmental and Branch Anatomy Using 3D Printing: Liver, Lung, Prostate, Coronary Arteries, and Circle of Willis.

    PubMed

    Javan, Ramin; Herrin, Douglas; Tangestanipoor, Ardalan

    2016-09-01

    Three-dimensional (3D) manufacturing is shaping personalized medicine, in which radiologists can play a significant role, be it as consultants to surgeons for surgical planning or by creating powerful visual aids for communicating with patients, physicians, and trainees. This report illustrates the steps in development of custom 3D models that enhance the understanding of complex anatomy. We graphically designed 3D meshes or modified imported data from cross-sectional imaging to develop physical models targeted specifically for teaching complex segmental and branch anatomy. The 3D printing itself is easily accessible through online commercial services, and the models are made of polyamide or gypsum. Anatomic models of the liver, lungs, prostate, coronary arteries, and the Circle of Willis were created. These models have advantages that include customizable detail, relative low cost, full control of design focusing on subsegments, color-coding potential, and the utilization of cross-sectional imaging combined with graphic design. Radiologists have an opportunity to serve as leaders in medical education and clinical care with 3D printed models that provide beneficial interaction with patients, clinicians, and trainees across all specialties by proactively taking on the educator's role. Complex models can be developed to show normal anatomy or common pathology for medical educational purposes. There is a need for randomized trials, which radiologists can design, to demonstrate the utility and effectiveness of 3D printed models for teaching simple and complex anatomy, simulating interventions, measuring patient satisfaction, and improving clinical care. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  9. A logarithmic opinion pool based STAPLE algorithm for the fusion of segmentations with associated reliability weights.

    PubMed

    Akhondi-Asl, Alireza; Hoyte, Lennox; Lockhart, Mark E; Warfield, Simon K

    2014-10-01

    Pelvic floor dysfunction is common in women after childbirth, and precise segmentation of magnetic resonance images (MRI) of the pelvic floor may facilitate diagnosis and treatment of patients. However, because of the complexity of its structures, manual segmentation of the pelvic floor is challenging and suffers from high inter- and intra-rater variability among expert raters. Multiple-template fusion algorithms are promising segmentation techniques for these types of applications, but they have been limited by imperfect alignment of the templates to the target and by template segmentation errors. A number of algorithms sought to improve segmentation performance by combining image intensities and template labels as two independent sources of information, carrying out fusion through local intensity-weighted voting schemes. This class of approach is a form of linear opinion pooling, and achieves unsatisfactory performance for this application. We hypothesized that better decision fusion could be achieved by assessing the contribution of each template in comparison to a reference standard segmentation of the target image, and developed a novel segmentation algorithm to enable automatic segmentation of MRI of the female pelvic floor. The algorithm achieves high performance by estimating and compensating for both imperfect registration of the templates to the target image and template segmentation inaccuracies. A local image similarity measure is used to infer a local reliability weight, which contributes to the fusion through a novel logarithmic opinion pooling. We evaluated our new algorithm in comparison to nine state-of-the-art segmentation methods and demonstrated that our algorithm achieves the highest performance.
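
    The contrast between linear and logarithmic pooling can be sketched for a single pixel with two labels. This is an illustration of the two pooling rules only, not the paper's full STAPLE-based formulation; the weights stand in for the local reliability estimates.

```python
import math

def linear_pool(probs, weights):
    """Linear opinion pool: weighted arithmetic mean of the raters'
    foreground probabilities."""
    return sum(w * p for w, p in zip(weights, probs)) / sum(weights)

def log_pool(probs, weights):
    """Logarithmic opinion pool: a weighted geometric mean of the
    raters' probabilities, renormalized over the two labels."""
    wsum = sum(weights)
    fg = math.exp(sum(w * math.log(max(p, 1e-12))
                      for w, p in zip(weights, probs)) / wsum)
    bg = math.exp(sum(w * math.log(max(1 - p, 1e-12))
                      for w, p in zip(weights, probs)) / wsum)
    return fg / (fg + bg)
```

    With one confident rater (0.9) and one uncertain rater (0.5), the linear pool gives 0.7 while the log pool gives 0.75: the multiplicative fusion is swayed less by the non-committal opinion.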

  10. Wound size measurement of lower extremity ulcers using segmentation algorithms

    NASA Astrophysics Data System (ADS)

    Dadkhah, Arash; Pang, Xing; Solis, Elizabeth; Fang, Ruogu; Godavarty, Anuradha

    2016-03-01

    Lower extremity ulcers are one of the most common complications that not only affect many people around the world but also have a huge economic impact, since substantial resources are spent on the treatment and prevention of these diseases. Clinical studies have shown that a reduction in wound size of 40% within 4 weeks is an acceptable rate of progress in the healing process. Quantification of the wound size plays a crucial role in assessing the extent of healing and determining the treatment process. To date, wound healing is visually inspected and the wound size is measured from surface images, yet the extent of internal wound healing may differ from the surface. A near-infrared (NIR) optical imaging approach has been developed for non-contact imaging of wounds internally and for differentiating healing from non-healing wounds. Herein, quantitative wound size measurements from NIR and white-light images are estimated using graph-cut and region-growing image segmentation algorithms. The extent of wound healing from NIR imaging of lower extremity ulcers in diabetic subjects is quantified and compared across NIR and white-light images. NIR imaging and wound size measurements can play a significant role in predicting the extent of internal healing, thus allowing better treatment plans when implemented for periodic imaging in the future.

  11. Application of an enhanced fuzzy algorithm for MR brain tumor image segmentation

    NASA Astrophysics Data System (ADS)

    Hemanth, D. Jude; Vijila, C. Kezi Selva; Anitha, J.

    2010-02-01

    Image segmentation is one of the most significant digital image processing techniques used in the medical field. One specific application is tumor detection in abnormal Magnetic Resonance (MR) brain images. Fuzzy approaches are widely preferred for tumor segmentation, as they generally yield superior results in terms of accuracy. But most fuzzy algorithms suffer from a slow convergence rate, which makes them practically infeasible. In this work, the application of a modified Fuzzy C-means (FCM) algorithm to tackle the convergence problem is explored in the context of brain image segmentation. This modified FCM algorithm employs the concept of quantization to improve the convergence rate besides yielding excellent segmentation efficiency. The algorithm is evaluated on real abnormal MR brain images collected from radiologists. A comprehensive feature vector is extracted from these images and used for the segmentation technique. An extensive feature selection process is performed, which reduces the convergence time and improves the segmentation efficiency. After segmentation, the tumor portion is extracted from the segmented image. A comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior performance for the modified FCM algorithm on these measures. Thus, this work highlights the application of the modified algorithm for brain tumor detection in abnormal MR brain images.
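    For reference, a plain (unmodified) fuzzy C-means pass over 1-D intensities looks like the following. This is the conventional baseline the abstract compares against, not the authors' quantization-accelerated variant, and the intensity values are made up.

```python
def fcm_1d(data, c=2, m=2.0, iters=50):
    """Conventional fuzzy C-means on 1-D intensities.
    Returns cluster centers and per-point memberships."""
    lo, hi = min(data), max(data)
    centers = [lo + (k + 0.5) * (hi - lo) / c for k in range(c)]
    u = []
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = []
        for x in data:
            row = []
            for ck in centers:
                dk = abs(x - ck) or 1e-12
                row.append(1.0 / sum((dk / (abs(x - cj) or 1e-12)) ** (2 / (m - 1))
                                     for cj in centers))
            u.append(row)
        # center update: membership-weighted means
        centers = [sum((u[i][k] ** m) * data[i] for i in range(len(data))) /
                   sum(u[i][k] ** m for i in range(len(data)))
                   for k in range(c)]
    return centers, u

intens = [20, 22, 25, 24, 200, 205, 198, 210]  # background vs. bright "tumor" pixels
centers, memb = fcm_1d(intens)
```

The quantization idea in the abstract speeds this up by iterating over histogram bins rather than individual pixels, so each update touches far fewer terms.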

  12. Improved dynamic-programming-based algorithms for segmentation of masses in mammograms

    SciTech Connect

    Dominguez, Alfonso Rojas; Nandi, Asoke K.

    2007-11-15

    In this paper, two new boundary tracing algorithms for segmentation of breast masses are presented. These new algorithms are based on the dynamic-programming-based boundary tracing (DPBT) algorithm proposed by Timp and Karssemeijer [S. Timp and N. Karssemeijer, Med. Phys. 31, 958-971 (2004)]. The DPBT algorithm contains two main steps: (1) construction of a local cost function, and (2) application of dynamic programming to the selection of the optimal boundary based on the local cost function. The validity of some assumptions used in the design of the DPBT algorithm is tested in this paper using a set of 349 mammographic images. Based on the results of the tests, modifications to the computation of the local cost function have been designed and have resulted in the Improved-DPBT (IDPBT) algorithm. A procedure for the dynamic selection of the strength of the components of the local cost function is presented that makes these parameters independent of the image dataset. Incorporation of this dynamic selection procedure has produced another new algorithm, which we have called ID²PBT. Methods for the determination of some other parameters of the DPBT algorithm that were not covered in the original paper are presented as well. The merits of the new IDPBT and ID²PBT algorithms are demonstrated experimentally by comparison against the DPBT algorithm. The segmentation results are evaluated based on the area overlap measure and other segmentation metrics. Both of the new algorithms outperform the original DPBT; the improvements in performance are most noticeable at the values of the segmentation metrics corresponding to the highest segmentation accuracy, i.e., the new algorithms produce more optimally segmented regions rather than a pronounced increase in the average quality of all segmented regions.
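    The dynamic-programming step that boundary-tracing algorithms of this family share can be sketched on a toy local cost matrix: accumulate the cheapest cost column by column, then backtrack. This illustrates only the optimal-path selection of step (2), not the DPBT local cost function itself; the cost values are invented.

```python
def dp_min_path(cost):
    """Select the minimal-cost left-to-right path through a local cost
    matrix, moving at most one row per column."""
    rows, cols = len(cost), len(cost[0])
    acc = [row[:] for row in cost]           # accumulated cost
    back = [[0] * cols for _ in range(rows)]  # backtracking pointers
    for c in range(1, cols):
        for r in range(rows):
            cands = [(acc[pr][c - 1], pr) for pr in (r - 1, r, r + 1) if 0 <= pr < rows]
            best, back[r][c] = min(cands)
            acc[r][c] += best
    # backtrack from the cheapest end point
    r = min(range(rows), key=lambda i: acc[i][-1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]

# Low costs (strong edge evidence) lie along row 1 except one noisy column,
# where the optimal path detours through row 2.
cost = [
    [9, 9, 9, 9],
    [1, 1, 8, 1],
    [9, 9, 1, 9],
]
path = dp_min_path(cost)
```

Global optimization is what lets the traced boundary survive locally weak or noisy edge evidence.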

  13. A Logarithmic Opinion Pool Based STAPLE Algorithm For The Fusion of Segmentations With Associated Reliability Weights

    PubMed Central

    Akhondi-Asl, Alireza; Hoyte, Lennox; Lockhart, Mark E.; Warfield, Simon K.

    2014-01-01

    Pelvic floor dysfunction is very common in women after childbirth, and precise segmentation of magnetic resonance images (MRI) of the pelvic floor may facilitate diagnosis and treatment of patients. However, because of the complexity of the structures of the pelvic floor, manual segmentation of the pelvic floor is challenging and suffers from high inter- and intra-rater variability among expert raters. Multiple-template fusion algorithms are promising techniques for segmentation of MRI in these types of applications, but these algorithms have been limited by imperfections in the alignment of each template to the target and by template segmentation errors. In this class of segmentation techniques, a collection of templates is aligned to a target, and a new segmentation of the target is inferred. A number of algorithms sought to improve segmentation performance by combining image intensities and template labels as two independent sources of information, carrying out decision fusion through local intensity-weighted voting schemes. This class of approach is a form of linear opinion pooling and achieves unsatisfactory performance for this application. We hypothesized that better decision fusion could be achieved by assessing the contribution of each template in comparison to a reference standard segmentation of the target image and developed a novel segmentation algorithm to enable automatic segmentation of MRI of the female pelvic floor. The algorithm achieves high performance by estimating and compensating for both imperfect registration of the templates to the target image and template segmentation inaccuracies. The algorithm is a generalization of the STAPLE algorithm in which a reference segmentation is estimated and used to infer an optimal weighting for fusion of templates. A local image similarity measure is used to infer a local reliability weight, which contributes to the fusion through a novel logarithmic opinion pooling.
We evaluated our new algorithm in comparison

  14. Interactive algorithms for the segmentation and quantitation of 3-D MRI brain scans.

    PubMed

    Freeborough, P A; Fox, N C; Kitney, R I

    1997-05-01

    Interactive algorithms are an attractive approach to the accurate segmentation of 3D brain scans, as they potentially improve on the reliability of fully automated segmentation while avoiding the labour intensiveness and inaccuracies of manual segmentation. We present a 3D image analysis package (MIDAS) with a novel architecture enabling highly interactive segmentation algorithms to be implemented as add-on modules. Interactive methods based on intensity thresholding, region growing and the constrained application of morphological operators are also presented. The methods involve the application of constraints and freedoms on the algorithms, coupled with real-time visualisation of the effect. This methodology has been applied to the segmentation, visualisation and measurement of the whole brain and of a small irregular neuroanatomical structure, the hippocampus. We demonstrate reproducible and anatomically accurate segmentations of these structures. The efficacy of one method in measuring volume loss (atrophy) of the hippocampus in Alzheimer's disease is shown and compared to conventional methods.
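    The morphological operators mentioned build on binary erosion and dilation. The toy sketch below (an illustration only, not the MIDAS implementation) shows how composing them, here an opening, cleans a segmentation mask by removing an isolated speck while preserving the main structure.

```python
def dilate(img, r=1):
    """Binary dilation with a (2r+1)x(2r+1) square structuring element."""
    h, w = len(img), len(img[0])
    return [[int(any(img[y][x]
                     for y in range(max(0, i - r), min(h, i + r + 1))
                     for x in range(max(0, j - r), min(w, j + r + 1))))
             for j in range(w)] for i in range(h)]

def erode(img, r=1):
    """Binary erosion: the dual of dilation (all pixels in the window set)."""
    h, w = len(img), len(img[0])
    return [[int(all(img[y][x]
                     for y in range(max(0, i - r), min(h, i + r + 1))
                     for x in range(max(0, j - r), min(w, j + r + 1))))
             for j in range(w)] for i in range(h)]

mask = [
    [0, 0, 0, 0, 1],   # isolated single-pixel speck at top right
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
opened = dilate(erode(mask))  # morphological opening removes the speck
```

Constraining where such operators may act, as MIDAS does, keeps them from eroding genuine anatomy.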

  15. Learning Likelihoods for Labeling (L3): A General Multi-Classifier Segmentation Algorithm

    PubMed Central

    Weisenfeld, Neil I.; Warfield, Simon K.

    2013-01-01

    PURPOSE To develop an MRI segmentation method for brain tissues, regions, and substructures that yields improved classification accuracy. Current brain segmentation approaches include two complementary strategies, multi-spectral classification and multi-template label fusion, with individual strengths and weaknesses. METHODS We propose here a novel multi-classifier fusion algorithm with the advantages of both types of segmentation strategy. We illustrate and validate this algorithm using a group of 14 expertly hand-labeled images. RESULTS Our method generated segmentations of cortical and subcortical structures that were more similar to hand-drawn segmentations than majority-vote label fusion or a recently published intensity/label fusion method. CONCLUSIONS We have presented a novel, general segmentation algorithm with the advantages of both statistical classifiers and label fusion techniques. PMID:22003715

  16. Skin cells segmentation algorithm based on spectral angle and distance score

    NASA Astrophysics Data System (ADS)

    Li, Qingli; Chang, Li; Liu, Hongying; Zhou, Mei; Wang, Yiting; Guo, Fangmin

    2015-11-01

    In the diagnosis of skin diseases by analyzing histopathological images of skin sections, automated segmentation of cells in the epidermis area is an important step. Traditional methods based on light microscopy usually cannot generate satisfying segmentation results due to complicated skin structures and the limited information in this kind of image. In this study, we use a molecular hyperspectral imaging system to observe skin sections and propose a spectrum-based algorithm to segment epithelial cells. Unlike pixel-wise segmentation methods, the proposed algorithm considers both the spectral angle and the distance score between the test and the reference spectrum for segmentation. The experimental results indicate that the proposed algorithm performs better than the K-means, fuzzy C-means, and spectral angle mapper algorithms because it can identify pixels with a similar spectral angle but a different spectral distance.
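    The motivation for combining both measures shows up in a small example: two spectra with identical shape but different magnitude have a near-zero spectral angle yet a large Euclidean distance, so an angle-only classifier (like SAM) cannot separate them. The reference spectrum below is invented for illustration.

```python
import math

def spectral_angle(a, b):
    """Angle between two spectra in radians; insensitive to overall brightness."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def euclid(a, b):
    """Euclidean distance between spectra; sensitive to magnitude."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

ref = [0.2, 0.4, 0.6]          # hypothetical reference cell spectrum
same_shape = [0.4, 0.8, 1.2]   # same shape (scaled by 2), different magnitude
angle = spectral_angle(ref, same_shape)
dist = euclid(ref, same_shape)
```

A rule that thresholds on both the angle and the distance can therefore distinguish pixel classes that SAM alone merges.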

  17. Refinement-Cut: User-Guided Segmentation Algorithm for Translational Science

    PubMed Central

    Egger, Jan

    2014-01-01

    In this contribution, a semi-automatic segmentation algorithm for (medical) image analysis is presented. More precisely, the approach belongs to the category of interactive contouring algorithms, which provide real-time feedback of the segmentation result. However, even with interactive real-time contouring approaches there are always cases where the user cannot find a satisfying segmentation, e.g. due to homogeneous appearance of the object and the background, or noise inside the object. For these difficult cases the algorithm still needs additional user support. However, this additional user support should be intuitive and rapidly integrated into the segmentation process, without breaking the interactive real-time segmentation feedback. I propose a solution in which the user can support the algorithm by an easy and fast placement of one or more seed points to guide the algorithm to a satisfying segmentation result in difficult cases as well. These additional seeds restrict the calculation of the segmentation for the algorithm but, at the same time, still allow the user to continue with the interactive real-time feedback segmentation. For a practical and genuine application in translational science, the approach has been tested on medical data from the clinical routine in 2D and 3D. PMID:24893650

  18. Refinement-Cut: User-Guided Segmentation Algorithm for Translational Science

    NASA Astrophysics Data System (ADS)

    Egger, Jan

    2014-06-01

    In this contribution, a semi-automatic segmentation algorithm for (medical) image analysis is presented. More precisely, the approach belongs to the category of interactive contouring algorithms, which provide real-time feedback of the segmentation result. However, even with interactive real-time contouring approaches there are always cases where the user cannot find a satisfying segmentation, e.g. due to homogeneous appearance of the object and the background, or noise inside the object. For these difficult cases the algorithm still needs additional user support. However, this additional user support should be intuitive and rapidly integrated into the segmentation process, without breaking the interactive real-time segmentation feedback. I propose a solution in which the user can support the algorithm by an easy and fast placement of one or more seed points to guide the algorithm to a satisfying segmentation result in difficult cases as well. These additional seeds restrict the calculation of the segmentation for the algorithm but, at the same time, still allow the user to continue with the interactive real-time feedback segmentation. For a practical and genuine application in translational science, the approach has been tested on medical data from the clinical routine in 2D and 3D.

  19. Refinement-cut: user-guided segmentation algorithm for translational science.

    PubMed

    Egger, Jan

    2014-06-04

    In this contribution, a semi-automatic segmentation algorithm for (medical) image analysis is presented. More precisely, the approach belongs to the category of interactive contouring algorithms, which provide real-time feedback of the segmentation result. However, even with interactive real-time contouring approaches there are always cases where the user cannot find a satisfying segmentation, e.g. due to homogeneous appearance of the object and the background, or noise inside the object. For these difficult cases the algorithm still needs additional user support. However, this additional user support should be intuitive and rapidly integrated into the segmentation process, without breaking the interactive real-time segmentation feedback. I propose a solution in which the user can support the algorithm by an easy and fast placement of one or more seed points to guide the algorithm to a satisfying segmentation result in difficult cases as well. These additional seeds restrict the calculation of the segmentation for the algorithm but, at the same time, still allow the user to continue with the interactive real-time feedback segmentation. For a practical and genuine application in translational science, the approach has been tested on medical data from the clinical routine in 2D and 3D.

  20. An efficient algorithm for retinal blood vessel segmentation using h-maxima transform and multilevel thresholding.

    PubMed

    Saleh, Marwan D; Eswaran, C

    2012-01-01

    Retinal blood vessel detection and analysis play vital roles in early diagnosis and prevention of several diseases, such as hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. This paper presents an automated algorithm for retinal blood vessel segmentation. The proposed algorithm takes advantage of powerful image processing techniques such as contrast enhancement, filtering and thresholding for more efficient segmentation. To evaluate the performance of the proposed algorithm, experiments were conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm yields an accuracy rate of 96.5%, which is higher than the results achieved by other known algorithms.
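    The threshold-selection step in pipelines like this is often an exhaustive search maximizing between-class variance. The single-threshold Otsu sketch below illustrates that idea only; it is not the paper's h-maxima/multilevel scheme, and the intensity sample is invented (vessels dark, background bright).

```python
def otsu_threshold(values):
    """Exhaustive Otsu search: pick the cut that maximizes the
    between-class variance w0*w1*(m0-m1)^2."""
    best_t, best_var = None, -1.0
    n = len(values)
    for t in sorted(set(values))[:-1]:   # every candidate cut except the max
        lo = [v for v in values if v <= t]
        hi = [v for v in values if v > t]
        w0, w1 = len(lo) / n, len(hi) / n
        m0 = sum(lo) / len(lo)
        m1 = sum(hi) / len(hi)
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

pixels = [30, 35, 32, 28, 180, 175, 190, 185, 178]
t = otsu_threshold(pixels)
```

Multilevel variants repeat the same variance criterion with two or more cut points to separate vessels, background, and lesions.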

  1. Temporal-based needle segmentation algorithm for transrectal ultrasound prostate biopsy procedures.

    PubMed

    Cool, Derek W; Gardi, Lori; Romagnoli, Cesare; Saikaly, Manale; Izawa, Jonathan I; Fenster, Aaron

    2010-04-01

    Automatic identification of the biopsy-core tissue location during a prostate biopsy procedure would provide verification that targets were adequately sampled and would allow for appropriate intraprocedure biopsy target modification. Localization of the biopsy core requires accurate segmentation of the biopsy needle and needle tip from transrectal ultrasound (TRUS) biopsy images. A temporal-based TRUS needle segmentation algorithm was developed specifically for the prostate biopsy procedure to automatically identify the TRUS image containing the biopsy needle from a collection of 2D TRUS images and to segment the biopsy-core location from the 2D TRUS image. The temporal-based segmentation algorithm performs a temporal analysis on a series of biopsy TRUS images collected throughout needle insertion and withdrawal. Following the identification of points of needle insertion and retraction, the needle axis is segmented using a Hough transform-based algorithm, which is followed by a temporospectral TRUS analysis to identify the biopsy-needle tip. Validation of the temporal-based algorithm is performed on 108 TRUS biopsy sequences collected from the procedures of ten patients. The success of the temporal search to identify the proper images was manually assessed, while the accuracies of the needle-axis and needle-tip segmentations were quantitatively compared to implementations of two other needle segmentation algorithms within the literature. The needle segmentation algorithm demonstrated a >99% accuracy in identifying the TRUS image at the moment of needle insertion from the collection of real-time TRUS images throughout the insertion and withdrawal of the biopsy needle. The segmented biopsy-needle axes were accurate to within 2.3 +/- 2.0 degrees and 0.48 +/- 0.42 mm of the gold standard. Identification of the needle tip to within half of the biopsy-core length (<10 mm) was 95% successful with a mean error of 2.4 +/- 4.0 mm. Needle-tip detection using the temporal
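    The Hough-transform step used for the needle axis accumulates votes in (theta, rho) space and keeps the dominant bin. A minimal sketch on synthetic points follows; the point set is invented and this is not the paper's temporospectral TRUS pipeline.

```python
import math

def hough_line(points, n_theta=180, r_step=1.0):
    """Minimal Hough transform for lines: vote over (theta, rho) bins and
    return the parameters of the most-voted line."""
    votes = {}
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / r_step)
            votes[(i, rho)] = votes.get((i, rho), 0) + 1
    (i, rho), _ = max(votes.items(), key=lambda kv: kv[1])
    return math.pi * i / n_theta, rho * r_step

# Ten collinear points on the horizontal line y = 5 (the "needle"),
# plus one outlier that the voting scheme ignores.
pts = [(x, 5) for x in range(10)] + [(3, 9)]
theta, rho = hough_line(pts)
```

Because every point votes independently, the dominant bin is robust to isolated outliers such as speckle responses off the needle axis.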

  2. Anatomy of the ostia venae hepaticae and the retrohepatic segment of the inferior vena cava.

    PubMed Central

    Camargo, A M; Teixeira, G G; Ortale, J R

    1996-01-01

    In 30 normal adult livers the retrohepatic segment of inferior vena cava had a length of 6.7 cm and was totally encircled by liver substance in 30% of cases. Altogether 442 ostia venae hepaticae were found, averaging 14.7 per liver and classified as large, medium, small and minimum. The localisation of the openings was studied according to the division of the wall of the retrohepatic segment of the inferior vena cava into 16 areas. PMID:8655416

  3. A Clustering Algorithm for Liver Lesion Segmentation of Diffusion-Weighted MR Images

    PubMed Central

    Jha, Abhinav K.; Rodríguez, Jeffrey J.; Stephen, Renu M.; Stopeck, Alison T.

    2010-01-01

    In diffusion-weighted magnetic resonance imaging, accurate segmentation of liver lesions in the diffusion-weighted images is required for computation of the apparent diffusion coefficient (ADC) of the lesion, the parameter that serves as an indicator of lesion response to therapy. However, the segmentation problem is challenging due to low SNR, fuzzy boundaries and speckle and motion artifacts. We propose a clustering algorithm that incorporates spatial information and a geometric constraint to solve this issue. We show that our algorithm provides improved accuracy compared to existing segmentation algorithms. PMID:21151837

  4. An improved FSL-FIRST pipeline for subcortical gray matter segmentation to study abnormal brain anatomy using quantitative susceptibility mapping (QSM).

    PubMed

    Feng, Xiang; Deistung, Andreas; Dwyer, Michael G; Hagemeier, Jesper; Polak, Paul; Lebenberg, Jessica; Frouin, Frédérique; Zivadinov, Robert; Reichenbach, Jürgen R; Schweser, Ferdinand

    2017-02-07

    Accurate and robust segmentation of subcortical gray matter (SGM) nuclei is required in many neuroimaging applications. FMRIB's Integrated Registration and Segmentation Tool (FIRST) is one of the most popular software tools for automated subcortical segmentation based on T1-weighted (T1w) images. In this work, we demonstrate that FIRST tends to produce inaccurate SGM segmentation results in the case of abnormal brain anatomy, such as present in atrophied brains, due to a poor spatial match of the subcortical structures with the training data in the MNI space as well as due to insufficient contrast of SGM structures on T1w images. Consequently, such deviations from the average brain anatomy may introduce analysis bias in clinical studies, which may not always be obvious and potentially remain unidentified. To improve the segmentation of subcortical nuclei, we propose to use FIRST in combination with a special Hybrid image Contrast (HC) and Non-Linear (nl) registration module (HC-nlFIRST), where the hybrid image contrast is derived from T1w images and magnetic susceptibility maps to create subcortical contrast that is similar to that in the Montreal Neurological Institute (MNI) template. In our approach, a nonlinear registration replaces FIRST's default linear registration, yielding a more accurate alignment of the input data to the MNI template. We evaluated our method on 82 subjects with particularly abnormal brain anatomy, selected from a database of >2000 clinical cases. Qualitative and quantitative analyses revealed that HC-nlFIRST provides improved segmentation compared to the default FIRST method.

  5. Development and Validation of an Automatic Segmentation Algorithm for Quantification of Intracerebral Hemorrhage.

    PubMed

    Scherer, Moritz; Cordes, Jonas; Younsi, Alexander; Sahin, Yasemin-Aylin; Götz, Michael; Möhlenbruch, Markus; Stock, Christian; Bösel, Julian; Unterberg, Andreas; Maier-Hein, Klaus; Orakcioglu, Berk

    2016-11-01

    ABC/2 is still widely accepted for volume estimations in spontaneous intracerebral hemorrhage (ICH) despite known limitations, which potentially accounts for controversial outcome-study results. The aim of this study was to establish and validate an automatic segmentation algorithm, allowing for quick and accurate quantification of ICH. A segmentation algorithm implementing first- and second-order statistics, texture, and threshold features was trained on manual segmentations with a random-forest methodology. Quantitative data of the algorithm, manual segmentations, and ABC/2 were evaluated for agreement in a study sample (n=28) and validated in an independent sample not used for algorithm training (n=30). ABC/2 volumes were significantly larger compared with either manual or algorithm values, whereas no significant differences were found between the latter (P<0.0001; Friedman+Dunn's multiple comparison). Algorithm agreement with the manual reference was strong (concordance correlation coefficient 0.95 [lower 95% confidence interval 0.91]) and superior to ABC/2 (concordance correlation coefficient 0.77 [95% confidence interval 0.64]). Validation confirmed agreement in an independent sample (algorithm concordance correlation coefficient 0.99 [95% confidence interval 0.98], ABC/2 concordance correlation coefficient 0.82 [95% confidence interval 0.72]). The algorithm was closer to respective manual segmentations than ABC/2 in 52/58 cases (89.7%). An automatic segmentation algorithm for volumetric analysis of spontaneous ICH was developed and validated in this study. Algorithm measurements showed strong agreement with manual segmentations, whereas ABC/2 exhibited its limitations, yielding inaccurate overestimations of ICH volume. The refined, yet time-efficient, quantification of ICH by the algorithm may facilitate evaluation of clot volume as an outcome predictor and trigger for surgical interventions in the clinical setting. © 2016 American Heart Association, Inc.
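    For context, the ABC/2 estimate that the algorithm is compared against is a simple ellipsoid approximation of clot volume, which is why it tends to overestimate irregularly shaped hemorrhages:

```python
def abc_over_2(a_cm, b_cm, c_cm):
    """ABC/2 ellipsoid approximation of hematoma volume in cm^3:
    A and B are the largest perpendicular diameters on the axial slice
    showing the greatest hemorrhage extent; C is the vertical extent."""
    return a_cm * b_cm * c_cm / 2.0

vol = abc_over_2(4.0, 3.0, 2.5)
```

The full-ellipsoid volume is (pi/6)ABC, which is approximately ABC/2; the approximation degrades as the clot departs from an ellipsoid, which is the limitation the voxel-wise segmentation algorithm avoids.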

  6. Fast and fully automatic phalanx segmentation using a grayscale-histogram morphology algorithm

    NASA Astrophysics Data System (ADS)

    Hsieh, Chi-Wen; Liu, Tzu-Chiang; Jong, Tai-Lang; Chen, Chih-Yen; Tiu, Chui-Mei; Chan, Din-Yuen

    2011-08-01

    Bone age assessment is a common radiological examination used in pediatrics to diagnose discrepancies between the skeletal and chronological age of a child; therefore, it is beneficial to develop a computer-based bone age assessment to help junior pediatricians estimate bone age easily. Unfortunately, the phalanx on radiograms is not easily separated from the background and soft tissue. Therefore, we propose a new method, called the grayscale-histogram morphology algorithm, to segment the phalanges quickly and precisely. The algorithm includes three parts: a tri-stage sieve algorithm used to eliminate the background of hand radiograms, a centroid-edge dual scanning algorithm to frame the phalanx region, and finally a segmentation algorithm based on a disk traverse-subtraction filter to segment the phalanx. Moreover, two further segmentation methods, adaptive two-mean and adaptive two-mean clustering, were performed, and their results were compared with the disk traverse-subtraction segmentation algorithm using five indices comprising misclassification error, relative foreground area error, modified Hausdorff distance, edge mismatch, and region nonuniformity. In addition, the CPU time of the three segmentation methods is discussed. The results show that our method performed better than the other two methods. Furthermore, satisfactory segmentation results were obtained with a low standard error.

  7. Magnetic resonance imaging segmentation techniques using batch-type learning vector quantization algorithms.

    PubMed

    Yang, Miin-Shen; Lin, Karen Chia-Ren; Liu, Hsiu-Chih; Lirng, Jiing-Feng

    2007-02-01

    In this article, we propose batch-type learning vector quantization (LVQ) segmentation techniques for the magnetic resonance (MR) images. Magnetic resonance imaging (MRI) segmentation is an important technique to differentiate abnormal and normal tissues in MR image data. The proposed LVQ segmentation techniques are compared with the generalized Kohonen's competitive learning (GKCL) methods, which were proposed by Lin et al. [Magn Reson Imaging 21 (2003) 863-870]. Three MRI data sets of real cases are used in this article. The first case is from a 2-year-old girl who was diagnosed with retinoblastoma in her left eye. The second case is from a 55-year-old woman who developed complete left side oculomotor palsy immediately after a motor vehicle accident. The third case is from an 84-year-old man who was diagnosed with Alzheimer disease (AD). Our comparisons are based on sensitivity of algorithm parameters, the quality of MRI segmentation with the contrast-to-noise ratio and the accuracy of the region of interest tissue. Overall, the segmentation results from batch-type LVQ algorithms present good accuracy and quality of the segmentation images, and also flexibility of algorithm parameters in all the comparison consequences. The results support that the proposed batch-type LVQ algorithms are better than the previous GKCL algorithms. Specifically, the proposed fuzzy-soft LVQ algorithm works well in segmenting AD MRI data set to accurately measure the hippocampus volume in AD MR images.

  8. Effect of segmentation algorithms on the performance of computerized detection of lung nodules in CT.

    PubMed

    Guo, Wei; Li, Qiang

    2014-09-01

    The purpose of this study is to reveal how the performance of a lung nodule segmentation algorithm impacts the performance of lung nodule detection, and to provide guidelines for choosing an appropriate segmentation algorithm with appropriate parameters in a computer-aided detection (CAD) scheme. The database consisted of 85 CT scans with 111 nodules of 3 mm or larger in diameter from the standard CT lung nodule database created by the Lung Image Database Consortium. The initial nodule candidates were identified as those with strong response to a selective nodule enhancement filter. A uniform viewpoint reformation technique was applied to each three-dimensional nodule candidate to generate 24 two-dimensional (2D) reformatted images, which were used to effectively distinguish between true nodules and false positives. Six different algorithms were employed to segment the initial nodule candidates in the 2D reformatted images. Finally, 2D features from the segmented areas in the 24 reformatted images were determined, selected, and classified for removal of false positives. Therefore, there were six similar CAD schemes, in which only the segmentation algorithms were different. The six segmentation algorithms were fixed thresholding (FT), Otsu thresholding (OTSU), fuzzy C-means (FCM), Gaussian mixture model (GMM), the Chan and Vese model (CV), and local binary fitting (LBF). The mean Jaccard index and the mean absolute distance (Dmean) were employed to evaluate the performance of the segmentation algorithms, and the number of false positives at a fixed sensitivity was employed to evaluate the performance of the CAD schemes. For the segmentation algorithms FT, OTSU, FCM, GMM, CV, and LBF, the highest mean Jaccard indices between the segmented nodule and the ground truth were 0.601, 0.586, 0.588, 0.563, 0.543, and 0.553, respectively, and the corresponding Dmean values were 1.74, 1.80, 2.32, 2.80, 3.48, and 3.18 pixels, respectively. With these segmentation results of the six
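    The Jaccard index used to score the segmentation algorithms is straightforward to compute over segmented pixel sets; the toy masks below are illustrative:

```python
def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two pixel sets;
    1.0 means identical segmentations, 0.0 means disjoint ones."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

seg = {(0, 0), (0, 1), (1, 0)}      # algorithm output
truth = {(0, 1), (1, 0), (1, 1)}    # ground-truth contour
j = jaccard(seg, truth)             # 2 shared pixels / 4 total
```

Dmean complements this overlap score by measuring how far the segmented boundary strays from the ground-truth boundary, in pixels.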

  9. Feedback algorithm for simulation of multi-segmented cracks

    SciTech Connect

    Chady, T.; Napierala, L.

    2011-06-23

    In this paper, a method for obtaining a three-dimensional crack model from a radiographic image is discussed. A genetic algorithm aiming at close simulation of the crack's shape is presented. Results obtained with the genetic algorithm are compared to those achieved in the authors' previous work. The described algorithm has been tested on both simulated and real-life cracks.

  10. Automated segmentation of tumors on bone scans using anatomy-specific thresholding

    NASA Astrophysics Data System (ADS)

    Chu, Gregory H.; Lo, Pechin; Kim, Hyun J.; Lu, Peiyun; Ramakrishna, Bharath; Gjertson, David; Poon, Cheryce; Auerbach, Martin; Goldin, Jonathan; Brown, Matthew S.

    2012-03-01

    Quantification of overall tumor area on bone scans may be a potential biomarker for treatment response assessment and has, to date, not been investigated. Segmentation of bone metastases on bone scans is a fundamental step for this response marker. In this paper, we propose a fully automated computerized method for the segmentation of bone metastases on bone scans, taking into account the characteristics of different anatomic regions. A scan is first segmented into anatomic regions via an atlas-based segmentation procedure, which involves non-rigidly registering a labeled atlas scan to the patient scan. Next, an intensity normalization method is applied to account for varying radiotracer dose levels and scan timing. Lastly, lesions are segmented via anatomic region-specific intensity thresholding. Thresholds are chosen by receiver operating characteristic (ROC) curve analysis against manual contouring by board-certified nuclear medicine physicians. A leave-one-out cross validation of our method on a set of 39 bone scans with metastases marked by 2 board-certified nuclear medicine physicians yielded a median sensitivity of 95.5% and specificity of 93.9%. Our method was compared with a global intensity thresholding method. The results show comparable sensitivity and significantly improved overall specificity, with a p-value of 0.0069.
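    Choosing a per-region threshold from an ROC analysis can be sketched with Youden's J statistic (sensitivity + specificity - 1), one common operating-point criterion. The paper selects operating points against physician contours; the scores, labels, and candidate thresholds below are invented, and the criterion shown is only one reasonable choice.

```python
def best_threshold(scores, labels, thresholds):
    """Pick the intensity threshold maximizing Youden's J = sens + spec - 1,
    where label 1 marks lesion pixels in the physician reference."""
    best_t, best_j = None, -2.0
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and not y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t

# Normalized intensities with reference lesion labels (1 = metastasis).
scores = [0.2, 0.3, 0.4, 0.7, 0.8, 0.9]
labels = [0, 0, 0, 1, 1, 1]
t = best_threshold(scores, labels, [0.1, 0.3, 0.5, 0.7, 0.9])
```

Running this search separately per anatomic region is what lets the method adapt to the very different normal-uptake levels of, e.g., spine versus long bones.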

  11. A multiple-kernel fuzzy C-means algorithm for image segmentation.

    PubMed

    Chen, Long; Chen, C L Philip; Lu, Mingzhu

    2011-10-01

    In this paper, a generalized multiple-kernel fuzzy C-means (MKFCM) methodology is introduced as a framework for image-segmentation problems. In this framework, in addition to using composite kernels in the kernel FCM (KFCM), a linear combination of multiple kernels is proposed, and the updating rules for the linear coefficients of the composite kernel are derived as well. The proposed MKFCM algorithm provides a new, flexible vehicle for fusing different pixel information in image-segmentation problems. That is, different pixel information represented by different kernels is combined in the kernel space to produce a new kernel. It is shown that two successful enhanced KFCM-based image-segmentation algorithms are special cases of MKFCM. Several new segmentation algorithms are also derived from the proposed MKFCM framework. Simulations on the segmentation of synthetic and medical images demonstrate the flexibility and advantages of MKFCM-based approaches.

  12. Methodology for the Evaluation of the Algorithms for Text Line Segmentation Based on Extended Binary Classification

    NASA Astrophysics Data System (ADS)

    Brodic, D.

    2011-01-01

    Text line segmentation is a key element of the optical character recognition process; hence, the testing of text line segmentation algorithms has substantial relevance. All previously proposed testing methods deal mainly with a text database used as a template, which serves both for testing and for evaluating the text segmentation algorithm. In this manuscript, a methodology for evaluating text segmentation algorithms based on extended binary classification is proposed. It is built on various multiline text samples linked with text segmentation, whose results are distributed according to a binary classification; the final result is obtained by comparative analysis of the cross-linked data. Its suitability for different types of scripts represents its main advantage.

  13. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the processing and storage requirements of massive remote sensing images. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building a cheap and efficient computer cluster that parallelizes the mean shift segmentation algorithm under the MapReduce model. This not only preserves the quality of remote sensing image segmentation but also improves segmentation speed, better meeting real-time requirements. The MapReduce-based parallel mean shift segmentation algorithm thus shows clear significance and practical value.
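
    The MapReduce decomposition described above can be sketched as a map step that runs mean shift independently on each image tile and a reduce step that stitches the results. This is a toy 1-D intensity mean shift executed sequentially to emulate the pipeline; a real Hadoop job would distribute the `map_tile` calls across cluster nodes, and the bandwidth value is an assumption.

```python
import numpy as np

def mean_shift_modes(values, bandwidth=0.1, iters=20):
    """Shift each sample toward its local density mode (flat kernel)."""
    x = values.astype(float).copy()
    for _ in range(iters):
        # For each point, average all samples within the bandwidth window
        dist = np.abs(x[:, None] - values[None, :])
        weights = dist <= bandwidth
        x = (weights * values[None, :]).sum(axis=1) / weights.sum(axis=1)
    return x

def map_tile(tile):
    """'Map' step: run mean shift on one image tile independently."""
    return mean_shift_modes(tile.ravel()).reshape(tile.shape)

def reduce_tiles(shifted_tiles):
    """'Reduce' step: stitch the per-tile results (assumed vertical tiling)."""
    return np.vstack(shifted_tiles)
```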

  14. Fast algorithm for optimal graph-Laplacian based 3D image segmentation

    NASA Astrophysics Data System (ADS)

    Harizanov, S.; Georgiev, I.

    2016-10-01

    In this paper we propose an iterative steepest-descent-type algorithm that is observed to converge towards the exact solution of the ℓ0 discrete optimization problem, related to graph-Laplacian based image segmentation. Such an algorithm allows for significant additional improvements on the segmentation quality once the minimizer of the associated relaxed ℓ1 continuous optimization problem is computed, unlike the standard strategy of simply hard-thresholding the latter. Convergence analysis of the algorithm is not a subject of this work. Instead, various numerical experiments, confirming the practical value of the algorithm, are documented.

  15. The new image segmentation algorithm using adaptive evolutionary programming and fuzzy c-means clustering

    NASA Astrophysics Data System (ADS)

    Liu, Fang

    2011-06-01

    Image segmentation remains one of the major challenges in image analysis and computer vision. Fuzzy clustering, as a soft segmentation method, has been widely studied and successfully applied in image clustering and segmentation, and the fuzzy c-means (FCM) algorithm is the most popular method used in image segmentation. However, most clustering algorithms, such as k-means and FCM, search for the final clusters based on predetermined initial centers, and the FCM algorithm does not consider the spatial information of pixels and is sensitive to noise. This paper presents a new FCM algorithm with adaptive evolutionary programming for image clustering. The features of this algorithm are as follows. First, it does not require predetermined initial centers; evolutionary programming helps FCM search for better centers and escape bad centers at local minima. Second, both the spatial distance and the Euclidean distance are considered in the FCM clustering, making the algorithm more robust to noise. Third, adaptive evolutionary programming is proposed, in which the mutation rule is adaptively changed by learning useful knowledge during the evolving process. Experimental results show that the new image segmentation algorithm is effective and robust to noisy images.
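
    The FCM iteration at the heart of the method alternates between a membership update and a center update. The sketch below shows plain FCM on 1-D data; the paper's evolutionary-programming center search and spatial term are omitted, and the quantile initialization is an assumption standing in for them.

```python
import numpy as np

def fcm(data, n_clusters, m=2.0, iters=50):
    """Plain fuzzy c-means on 1-D data (fuzzifier m > 1)."""
    # Spread initial centers over the data range (stand-in for EP search)
    centers = np.quantile(data, np.linspace(0, 1, n_clusters))
    for _ in range(iters):
        d2 = np.maximum((data[None, :] - centers[:, None]) ** 2, 1e-12)
        u = d2 ** (-1.0 / (m - 1.0))
        u /= u.sum(axis=0)                          # memberships, shape (C, N)
        centers = (u ** m) @ data / (u ** m).sum(axis=1)  # weighted means
    return centers, u
```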

  16. An extended segment pattern dictionary for a pattern matching tracking algorithm at BESIII

    NASA Astrophysics Data System (ADS)

    Ma, Chang-Li; Zhang, Yao; Yuan, Ye; Lu, Xiao-Rui; Zheng, Yang-Heng; He, Kang-Li; Li, Wei-Dong; Liu, Huai-Min; Ma, Qiu-Mei; Wu, Ling-Hui

    2013-06-01

    A pattern matching based tracking algorithm, named MdcPatRec, is used for the reconstruction of charged tracks in the drift chamber of the BESIII detector. This paper addresses a shortcoming of segment finding in the MdcPatRec algorithm. An extended segment construction scheme and the corresponding pattern dictionary are presented. Evaluation with Monte-Carlo and experimental data shows that the new method can achieve higher efficiency for low transverse momentum tracks.
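
    The segment pattern dictionary idea can be illustrated with a toy lookup table. The real MdcPatRec dictionary encodes drift-chamber wire-hit patterns across superlayers; everything below, including the key encoding and the "extended" low-pT entry, is a hypothetical miniature.

```python
# Hypothetical miniature of a segment pattern dictionary: each key records
# which wire (column offset) fired in each of four chamber layers, and the
# value is a coarse bend label used to seed track fitting. "Extending" the
# dictionary means adding entries for more strongly curved (low transverse
# momentum) hit patterns that the original dictionary missed.
PATTERN_DICT = {
    (0, 0, 0, 0): "straight",
    (0, 0, 1, 1): "gentle bend",
    (0, 1, 2, 3): "strong bend",   # extended entry for low-pT tracks
}

def match_segment(hits):
    """Look up the hit pattern, normalized so the first layer is wire 0."""
    key = tuple(w - hits[0] for w in hits)
    return PATTERN_DICT.get(key)  # None if no pattern matches
```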

  17. Terminal Segment Surgical Anatomy of the Rat Facial Nerve: Implications for Facial Reanimation Study

    PubMed Central

    Henstrom, Doug; Hadlock, Tessa; Lindsay, Robin; Knox, Christopher J.; Malo, Juan; Vakharia, Kalpesh T.; Heaton, James T.

    2015-01-01

    Introduction Rodent whisking behavior is supported by the buccal and mandibular branches of the facial nerve; however, a description of how these branches converge and contribute to whisker movement is lacking. Methods Eight rats underwent isolated transection of either the buccal or the mandibular branch, with subsequent transection of the opposite branch. Whisking function was analyzed following both transections. Anatomical measurements and video recordings of stimulation of individual branches were taken from both facial nerves in 10 rats. Results Normal to near-normal whisking was demonstrated after isolated branch transection. Following transection of both branches, whisking was eliminated. The buccal and mandibular branches form a convergence just proximal to the whisker pad, named the “distal pes.” Distal to this convergence, we identified consistent anatomy that demonstrated cross-innervation. Conclusion The overlap of efferent supply to the whisker pad must be considered when studying facial nerve regeneration in the rat facial nerve model. PMID:22499096

  18. Detection and Segmentation of Erythrocytes in Blood Smear Images Using a Line Operator and Watershed Algorithm

    PubMed Central

    Khajehpour, Hassan; Dehnavi, Alireza Mehri; Taghizad, Hossein; Khajehpour, Esmat; Naeemabadi, Mohammadreza

    2013-01-01

    Most erythrocyte-related diseases are detectable by analysis of hematology images, and segmentation and detection of blood cells is an inevitable first step of this analysis. In this study, a novel method using a line operator and the watershed algorithm is presented for erythrocyte detection and segmentation in blood smear images; it reduces the over-segmentation of the watershed algorithm and is useful for segmenting different types of partially overlapping blood cells. The method uses the gray-scale structure of blood cells, obtained by applying the Euclidean distance transform to binary images. With this transform, the gray intensity of cell images gradually decreases from the center of each cell toward its margin. To detect this intensity-variation structure, a line operator measuring gray-level variations along several directional line segments is applied. The line segments with maximum and minimum gray-level variations exhibit a special pattern that is applicable for detecting the central regions of cells. The intersection of these regions with the markers obtained by calculating local maxima in the watershed algorithm was used for detecting cells’ centers, as well as for reducing the over-segmentation of the watershed algorithm. The method created 1300 markers in the segmentation of the 1274 erythrocytes present in 25 blood smear images. The accuracy and sensitivity of the proposed method are 95.9% and 97.99%, respectively. The results show the proposed method’s capability in detecting erythrocytes in blood smear images. PMID:24672764

  19. Detection and segmentation of erythrocytes in blood smear images using a line operator and watershed algorithm.

    PubMed

    Khajehpour, Hassan; Dehnavi, Alireza Mehri; Taghizad, Hossein; Khajehpour, Esmat; Naeemabadi, Mohammadreza

    2013-07-01

    Most erythrocyte-related diseases are detectable by analysis of hematology images, and segmentation and detection of blood cells is an inevitable first step of this analysis. In this study, a novel method using a line operator and the watershed algorithm is presented for erythrocyte detection and segmentation in blood smear images; it reduces the over-segmentation of the watershed algorithm and is useful for segmenting different types of partially overlapping blood cells. The method uses the gray-scale structure of blood cells, obtained by applying the Euclidean distance transform to binary images. With this transform, the gray intensity of cell images gradually decreases from the center of each cell toward its margin. To detect this intensity-variation structure, a line operator measuring gray-level variations along several directional line segments is applied. The line segments with maximum and minimum gray-level variations exhibit a special pattern that is applicable for detecting the central regions of cells. The intersection of these regions with the markers obtained by calculating local maxima in the watershed algorithm was used for detecting cells' centers, as well as for reducing the over-segmentation of the watershed algorithm. The method created 1300 markers in the segmentation of the 1274 erythrocytes present in 25 blood smear images. The accuracy and sensitivity of the proposed method are 95.9% and 97.99%, respectively. The results show the proposed method's capability in detecting erythrocytes in blood smear images.
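
    The marker-controlled watershed pipeline described above (distance transform, local maxima as cell markers, watershed to split touching cells) can be sketched with `scipy.ndimage`. This is a simplified stand-in, not the paper's method: the line operator is omitted and markers come directly from distance-transform maxima, with the neighborhood size an assumption.

```python
import numpy as np
from scipy import ndimage

def cell_markers(binary, min_distance=2):
    """Cell centers as local maxima of the Euclidean distance transform."""
    dist = ndimage.distance_transform_edt(binary)
    # A pixel is a marker if it equals the maximum over its neighborhood
    local_max = dist == ndimage.maximum_filter(dist, size=2 * min_distance + 1)
    markers, n = ndimage.label(local_max & binary)
    return dist, markers, n

def split_cells(binary):
    """Watershed on the inverted distance transform separates touching cells."""
    dist, markers, _ = cell_markers(binary)
    relief = (255 * (dist.max() - dist) / max(float(dist.max()), 1e-9))
    labels = ndimage.watershed_ift(relief.astype(np.uint8), markers)
    return labels
```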

  20. PRESEE: An MDL/MML Algorithm to Time-Series Stream Segmenting

    PubMed Central

    Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series streams are one of the most common data types in the data mining field, prevalent in areas such as the stock market, ecology, and medical care. Segmentation is a key step for accelerating the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision rather than efficiency; moreover, their performance depends heavily on parameters that are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both the MDL (minimum description length) and MML (minimum message length) principles, which allow it to segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with a state-of-the-art algorithm. The empirical results show that PRESEE is very efficient on real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by applying PRESEE to segment real-time stream datasets from the ChinaFLUX sensor network data stream. PMID:23956693

  1. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    PubMed

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series streams are one of the most common data types in the data mining field, prevalent in areas such as the stock market, ecology, and medical care. Segmentation is a key step for accelerating the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision rather than efficiency; moreover, their performance depends heavily on parameters that are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both the MDL (minimum description length) and MML (minimum message length) principles, which allow it to segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with a state-of-the-art algorithm. The empirical results show that PRESEE is very efficient on real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by applying PRESEE to segment real-time stream datasets from the ChinaFLUX sensor network data stream.
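
    The MDL idea behind such segmenters, that a cut is worth making only when the bits saved on the data exceed the bits spent describing an extra segment, can be sketched as a batch dynamic program. This is not the streaming PRESEE procedure: the constant-mean model, the 8-bit parameter cost, and the Gaussian data-cost form are all assumptions for illustration.

```python
import numpy as np

def mdl_cost(segment):
    """Code length of one segment under a constant-mean model: Gaussian
    residual cost plus a fixed (hypothetical) cost for segment parameters."""
    n = len(segment)
    sse = np.sum((segment - segment.mean()) ** 2)
    data_bits = 0.5 * n * np.log2(sse / n + 1e-12)
    param_bits = np.log2(n) + 8.0
    return data_bits + param_bits

def mdl_segment(series):
    """Globally optimal segmentation by dynamic programming over cut points."""
    n = len(series)
    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    cut = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + mdl_cost(series[i:j])
            if c < best[j]:
                best[j], cut[j] = c, i
    bounds, j = [], n
    while j > 0:
        bounds.append((int(cut[j]), j))
        j = int(cut[j])
    return bounds[::-1]
```

    Because the parameter cost is paid once per segment, a pure constant run is never split further, while a genuine level change is.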

  2. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    PubMed

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-21

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians' manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful for managing inter-observer variability. In this paper, we evaluated how accurately this algorithm can estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone for estimating the ground truth in PET imaging. Therefore, it might be preferred for assessing the accuracy of tumor segmentation methods in PET imaging.

  3. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    NASA Astrophysics Data System (ADS)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful for managing inter-observer variability. In this paper, we evaluated how accurately this algorithm can estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone for estimating the ground truth in PET imaging. Therefore, it might be preferred for assessing the accuracy of tumor segmentation methods in PET imaging.
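
    STAPLE's core is an EM loop that alternates between a soft ground-truth estimate and per-rater sensitivity/specificity estimates. The sketch below is a simplified binary version with a fixed foreground prior, not the Computational Radiology Laboratory implementation or its configuration; initial values and iteration count are assumptions.

```python
import numpy as np

def staple(segs, iters=25):
    """Simplified binary STAPLE via EM over raters' segmentations."""
    segs = np.asarray(segs, dtype=bool)        # (raters, voxels)
    r, n = segs.shape
    p = np.full(r, 0.9)                        # initial sensitivities
    q = np.full(r, 0.9)                        # initial specificities
    prior = segs.mean()                        # fixed foreground prior
    for _ in range(iters):
        # E-step: probability each voxel is truly foreground
        a = prior * np.prod(np.where(segs, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(segs, 1 - q[:, None], q[:, None]), axis=0)
        w = a / (a + b + 1e-12)
        # M-step: re-estimate each rater's performance against w
        p = (segs * w).sum(axis=1) / (w.sum() + 1e-12)
        q = ((~segs) * (1 - w)).sum(axis=1) / ((1 - w).sum() + 1e-12)
    return w, p, q
```

    Raters who agree with the emerging consensus receive higher performance estimates, so the consensus down-weights an outlier rater automatically.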

  4. Improved fuzzy clustering algorithms in segmentation of DC-enhanced breast MRI.

    PubMed

    Kannan, S R; Ramathilagam, S; Devi, Pandiyarajan; Sathya, A

    2012-02-01

    Segmentation of medical images is a difficult and challenging problem due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. Many researchers have applied various techniques; however, fuzzy c-means (FCM)-based algorithms are more effective than other methods. The objective of this work is to develop robust fuzzy clustering segmentation systems for effective segmentation of DCE breast MRI. This paper obtains robust fuzzy clustering algorithms by incorporating kernel methods, penalty terms, tolerance of the neighborhood attraction, an additional entropy term, and fuzzy parameters. The initial centers are obtained using an initialization algorithm to reduce the computational complexity and running time of the proposed algorithms. Experimental work on breast images shows that the proposed algorithms are effective in improving the similarity measurement, handling large amounts of noise, and dealing with data corrupted by noise and other artifacts. The clustering results of the proposed methods are validated using the silhouette method.

  5. A review of algorithms for medical image segmentation and their applications to the female pelvic cavity.

    PubMed

    Ma, Zhen; Tavares, João Manuel R S; Jorge, Renato Natal; Mascarenhas, T

    2010-01-01

    This paper aims to make a review on the current segmentation algorithms used for medical images. Algorithms are classified according to their principal methodologies, namely the ones based on thresholds, the ones based on clustering techniques and the ones based on deformable models. The last type is focused on due to the intensive investigations into the deformable models that have been done in the last few decades. Typical algorithms of each type are discussed and the main ideas, application fields, advantages and disadvantages of each type are summarised. Experiments that apply these algorithms to segment the organs and tissues of the female pelvic cavity are presented to further illustrate their distinct characteristics. In the end, the main guidelines that should be considered for designing the segmentation algorithms of the pelvic cavity are proposed.

  6. Segmentation of pomegranate MR images using spatial fuzzy c-means (SFCM) algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.

    2011-10-01

    Segmentation is one of the fundamental issues of image processing and machine vision and plays a prominent role in a variety of image processing applications. In this paper, an important application of image processing, MRI segmentation of pomegranate, is explored. Pomegranate is a fruit with pharmacological properties such as being anti-viral and anti-cancer, and having a high-quality product in hand is a critical factor in its marketing. The internal quality of the product is critically important in the sorting process, and the determination of qualitative features cannot be made manually. Therefore, the segmentation of the internal structures of the fruit needs to be performed as accurately as possible in the presence of noise. The fuzzy c-means (FCM) algorithm is noise-sensitive, and noisy pixels are classified incorrectly. As a solution, this paper proposes the spatial FCM (SFCM) algorithm for the segmentation of pomegranate MR images. The algorithm incorporates spatial neighborhood information into FCM and modifies the fuzzy membership function for each class. Segmentation results on original pomegranate MR images and on images corrupted by Gaussian, salt-and-pepper, and speckle noise show that the SFCM algorithm performs significantly better than the FCM algorithm. Moreover, after several steps of qualitative and quantitative analysis, we conclude that the SFCM algorithm with a 5×5 window is better than the other window sizes.

  7. Tissue segmentation of computed tomography images using a Random Forest algorithm: a feasibility study

    NASA Astrophysics Data System (ADS)

    Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.

    2016-09-01

    There is a need for robust, fully automated whole-body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a Trainable Weka Segmentation (TWS) implementation using Random Forest machine learning as a means to develop a fully automated tissue segmentation tool developed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast-enhanced fluid, and bone tissue, using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge-preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features, including features derived from the maximum, mean, variance, Gaussian, and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21

  8. Open-source algorithm for automatic choroid segmentation of OCT volume reconstructions

    NASA Astrophysics Data System (ADS)

    Mazzaferri, Javier; Beaton, Luke; Hounye, Gisèle; Sayah, Diane N.; Costantino, Santiago

    2017-02-01

    The use of optical coherence tomography (OCT) to study ocular diseases associated with choroidal physiology is sharply limited by the lack of available automated segmentation tools. Current research largely relies on hand-traced, single B-Scan segmentations because commercially available programs require high quality images, and the existing implementations are closed, scarce and not freely available. We developed and implemented a robust algorithm for segmenting and quantifying the choroidal layer from 3-dimensional OCT reconstructions. Here, we describe the algorithm, validate and benchmark the results, and provide an open-source implementation under the General Public License for any researcher to use (https://www.mathworks.com/matlabcentral/fileexchange/61275-choroidsegmentation).

  9. New CSC segment builder algorithm with Monte-Carlo TeV muons in CMS experiment

    NASA Astrophysics Data System (ADS)

    Palichik, V.; Voytishin, N.

    2017-09-01

    The performance of the new Cathode Strip Chamber segment builder algorithm with simulated TeV muons is considered. The comparison of some of the main reconstruction characteristics is made. Some case study events are visualized in order to illustrate the improvement that the new algorithm gives to the reconstruction process.

  10. Nasal Anatomy

    MedlinePlus


  11. Implementation of a new segmentation algorithm using the Eye-RIS CMOS vision system

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Arena, Paolo; De Fiore, Sebastiano; Vagliasindi, Guido; Fortuna, Luigi; Arik, Sabri

    2009-05-01

    Segmentation is the process of partitioning a digital image into multiple meaningful regions. Since such applications require substantial computational power in real time, we have implemented a new segmentation algorithm using the capabilities of the Eye-RIS vision system to execute the algorithm in a very short time. The segmentation algorithm is implemented in three main steps. In the first, pre-processing step, the images are acquired and noise filtering through a Gaussian function is performed. In the second step, a Sobel-operator-based edge detection approach is implemented on the system. In the last step, morphologic and logic operations are used to segment the images as post-processing. Experimental results for different images show the accuracy of the proposed segmentation algorithm. Visual inspection and timing analysis (7.83 ms, 127 frames/sec) prove that the proposed segmentation algorithm can be executed for real-time video processing applications. These results also demonstrate the capability of the Eye-RIS vision system for real-time image processing applications.
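
    The three-step flow described above (Gaussian denoising, Sobel edge detection, morphological post-processing) can be sketched in software with `scipy.ndimage`. This is an off-chip approximation of the pipeline, not the Eye-RIS implementation; the sigma and edge threshold are assumptions.

```python
import numpy as np
from scipy import ndimage

def sobel_segment(image, edge_thresh=0.5):
    """Gaussian denoising, Sobel edge magnitude, then morphological cleanup."""
    smooth = ndimage.gaussian_filter(image.astype(float), sigma=1.0)
    gx = ndimage.sobel(smooth, axis=1)
    gy = ndimage.sobel(smooth, axis=0)
    edges = np.hypot(gx, gy) > edge_thresh
    # Morphological post-processing: close small gaps, fill enclosed regions
    closed = ndimage.binary_closing(edges, iterations=1)
    return ndimage.binary_fill_holes(closed)
```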

  12. Colony image acquisition and genetic segmentation algorithm and colony analyses

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2012-01-01

    Colony analysis is used in a large number of fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, and sterility testing. In order to reduce labor and increase analysis accuracy, many researchers and developers have worked on image analysis systems. The main problems in these systems are image acquisition, image segmentation, and image analysis. In this paper, to acquire colony images of good quality, an illumination box was constructed in which the distances between the lights and the dish, the camera lens and the lights, and the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows one to treat the segmentation problem as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All of the above visual colony parameters can be selected and combined to form new engineering parameters, and the colony analysis can be applied to different applications.
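
    Treating segmentation as a global optimization with a genetic approach can be illustrated by evolving a single binarization threshold. The fitness function below is the Otsu between-class variance, which is an assumption standing in for the paper's unstated objective, as are the selection and mutation choices.

```python
import random

def otsu_fitness(hist, t):
    """Between-class variance of a histogram split at threshold t."""
    total = sum(hist)
    w0 = sum(hist[:t])
    w1 = total - w0
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = sum(i * h for i, h in enumerate(hist[:t])) / w0
    mu1 = sum(i * h for i, h in enumerate(hist[t:], start=t)) / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def ga_threshold(hist, pop_size=12, generations=30, seed=1):
    """Genetic search over thresholds: tournament selection plus mutation."""
    rng = random.Random(seed)
    levels = len(hist)
    pop = [rng.randrange(1, levels) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            a, b = rng.sample(pop, 2)                 # binary tournament
            parent = a if otsu_fitness(hist, a) >= otsu_fitness(hist, b) else b
            child = parent + rng.choice((-1, 0, 1))   # small integer mutation
            nxt.append(min(max(child, 1), levels - 1))
        pop = nxt
    return max(pop, key=lambda t: otsu_fitness(hist, t))
```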

  13. LoAd: A locally adaptive cortical segmentation algorithm

    PubMed Central

    Cardoso, M. Jorge; Clarkson, Matthew J.; Ridgway, Gerard R.; Modat, Marc; Fox, Nick C.; Ourselin, Sebastien

    2012-01-01

    Thickness measurements of the cerebral cortex can aid diagnosis and provide valuable information about the temporal evolution of diseases such as Alzheimer's, Huntington's, and schizophrenia. Methods that measure the thickness of the cerebral cortex from in-vivo magnetic resonance (MR) images rely on an accurate segmentation of the MR data. However, segmenting the cortex in a robust and accurate way still poses a challenge due to the presence of noise, intensity non-uniformity, partial volume effects, the limited resolution of MRI and the highly convoluted shape of the cortical folds. Beginning with a well-established probabilistic segmentation model with anatomical tissue priors, we propose three post-processing refinements: a novel modification of the prior information to reduce segmentation bias; introduction of explicit partial volume classes; and a locally varying MRF-based model for enhancement of sulci and gyri. Experiments performed on a new digital phantom, on BrainWeb data and on data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) show statistically significant improvements in Dice scores and PV estimation (p < 10^-3) and also increased thickness estimation accuracy when compared to three well established techniques. PMID:21316470

  14. Coupling Regular Tessellation with Rjmcmc Algorithm to Segment SAR Image with Unknown Number of Classes

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Li, Y.; Zhao, Q. H.

    2016-06-01

    This paper presents a Synthetic Aperture Radar (SAR) image segmentation approach for an unknown number of classes, based on regular tessellation and the Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm. First, the image domain is partitioned into a set of blocks by regular tessellation. The image is modeled on the assumption that the intensities of the pixels in each homogeneous region satisfy an identical and independent Gamma distribution. By the Bayesian paradigm, the posterior distribution is obtained to build the region-based image segmentation model. Then, an RJMCMC algorithm is designed to simulate from the segmentation model, determining the number of homogeneous regions and segmenting the image. In order to further improve the segmentation accuracy, a refinement operation is performed. To illustrate the feasibility and effectiveness of the proposed approach, two real SAR images are tested.

  15. Parallel Implementation of the Recursive Approximation of an Unsupervised Hierarchical Segmentation Algorithm. Chapter 5

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Plaza, Antonio J. (Editor); Chang, Chein-I. (Editor)

    2008-01-01

    The hierarchical image segmentation algorithm (referred to as HSEG) is a hybrid of hierarchical step-wise optimization (HSWO) and constrained spectral clustering that produces a hierarchical set of image segmentations. HSWO is an iterative approach to region growing segmentation in which the optimal image segmentation is found at N(sub R) regions, given a segmentation at N(sub R+1) regions. HSEG's addition of constrained spectral clustering makes it a computationally intensive algorithm for all but the smallest of images. To counteract this, a computationally efficient recursive approximation of HSEG (called RHSEG) has been devised. Further improvements in processing speed are obtained through a parallel implementation of RHSEG. This chapter describes this parallel implementation and demonstrates its computational efficiency on a Landsat Thematic Mapper test scene.
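
    The HSWO step, obtaining the segmentation at N regions from the one at N+1 regions by merging the best pair, can be sketched on 1-D data. This toy merges only spatially adjacent regions and uses a squared-error merge cost; the spectral-clustering component of HSEG and the true dissimilarity criterion are omitted.

```python
def sse(pixels):
    """Sum of squared errors of a region around its mean."""
    mu = sum(pixels) / len(pixels)
    return sum((v - mu) ** 2 for v in pixels)

def hswo_merge(values, n_regions):
    """Toy 1-D HSWO: start from single-pixel regions and repeatedly merge
    the adjacent pair whose merge increases the total squared error least."""
    regions = [[float(v)] for v in values]
    while len(regions) > n_regions:
        i = min(range(len(regions) - 1),
                key=lambda k: sse(regions[k] + regions[k + 1])
                              - sse(regions[k]) - sse(regions[k + 1]))
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return regions
```

    Stopping the loop at successive region counts and recording each intermediate state would yield the hierarchical set of segmentations the chapter describes.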


  17. Tissue segmentation of Computed Tomography images using a Random Forest algorithm: a feasibility study

    PubMed Central

    Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.

    2016-01-01

    Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast enhanced fluid, and bone tissue using Matlab and the Trainable Weka Segmentation (TWS) plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features including features derived from maximum, mean, variance, Gaussian, and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21 patient image sections were analyzed. The automated algorithm produced segmentation of seven material classes with a median DSC of 0.86 ± 0.03 for pediatric patient protocols, and 0.85 ± 0.04 for adult patient protocols. Additionally, 100 randomly selected patient examinations were segmented and analyzed, and a mean sensitivity of 0.91 (range: 0.82–0.98), specificity of 0.89 (range: 0.70–0.98), and accuracy of 0.90 (range: 0.76–0.98) were demonstrated. In this study, we demonstrate that this fully automated segmentation tool was able to produce fast and accurate segmentation of the neck and trunk of the body over a wide range of patient habitus
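The per-voxel filter-bank features and the Dice overlap metric described above can be sketched with NumPy/SciPy alone. This is an illustrative stand-in for the TWS filters (the Random Forest itself is omitted, and `local_features` and its radii are assumptions, not the authors' code):

```python
import numpy as np
from scipy import ndimage

def local_features(img, radii=(1, 2, 4, 8, 16)):
    """Per-voxel min/max/mean/variance over windows of radius 2**n,
    n = 0..4 -- mirroring the classifier feature filters above."""
    img = img.astype(float)
    feats = [img]
    for r in radii:
        size = 2 * r + 1
        feats.append(ndimage.minimum_filter(img, size))
        feats.append(ndimage.maximum_filter(img, size))
        mean = ndimage.uniform_filter(img, size)
        feats.append(mean)
        feats.append(ndimage.uniform_filter(img ** 2, size) - mean ** 2)
    return np.stack(feats, axis=-1)  # shape: img.shape + (21,)

def dice(seg, ref):
    """Dice similarity coefficient (DSC) between two binary masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
```

In a full pipeline, the feature stack would be fed per-voxel to a 200-tree Random Forest; DSC then compares the predicted mask to a manual segmentation.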

  18. Improvement of phase unwrapping algorithm based on image segmentation and merging

    NASA Astrophysics Data System (ADS)

    Wang, Huaying; Liu, Feifei; Zhu, Qiaofen

    2013-11-01

    A modified algorithm based on image segmentation and merging is proposed and demonstrated to improve the accuracy of phase unwrapping. The improvements are threefold. First, unequal region segmentation is adopted, so that regional information is reproduced completely and accurately. Second, different phase unwrapping algorithms are applied to different regions according to their noise and undersampling conditions. Finally, to improve the accuracy of the unwrapped result, a weighted stack is applied to the overlapping regions produced by block merging. The proposed algorithm has been verified by simulations and experiments. The results not only validate the accuracy and speed of the improved algorithm in recovering the phase information of the measured object, but also illustrate its value for cell identification in Traditional Chinese Medicine Decoction Pieces.
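The block-plus-weighted-stack idea can be illustrated in 1-D with NumPy: unwrap each block independently, align it to the running estimate, and merge overlaps with a linear weight ramp. The block and overlap sizes are illustrative, and this is a sketch of the merging principle, not the paper's 2-D method:

```python
import numpy as np

def unwrap_blocks(phase, block=128, overlap=32):
    """Unwrap a 1-D wrapped phase signal block by block, then merge
    overlapping blocks with a linearly weighted stack."""
    n = len(phase)
    out = np.zeros(n)
    weight = np.zeros(n)
    start = 0
    while start < n:
        stop = min(start + block, n)
        seg = np.unwrap(phase[start:stop])
        if start > 0:
            # align this block to the already-merged estimate in the overlap
            ov = slice(start, min(start + overlap, stop))
            prev = out[ov] / np.maximum(weight[ov], 1e-12)
            seg += np.mean(prev - seg[: ov.stop - ov.start])
        w = np.ones(stop - start)
        if start > 0:
            ramp = np.linspace(0.0, 1.0, min(overlap, stop - start))
            w[: len(ramp)] = ramp  # fade this block in over the overlap
        out[start:stop] += w * seg
        weight[start:stop] += w
        start += block - overlap
    return out / np.maximum(weight, 1e-12)
```

Each overlap sample ends up as a weighted average of the two aligned block estimates, which is the "weighted stack" step described above.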

  19. Contour detection and completion for inpainting and segmentation based on topological gradient and fast marching algorithms.

    PubMed

    Auroux, Didier; Cohen, Laurent D; Masmoudi, Mohamed

    2011-01-01

    We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We then present two numerical applications of this hybrid algorithm: image inpainting and segmentation.
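The minimal-path step can be approximated on a discrete grid with Dijkstra's algorithm: low cost where the (topological-gradient) edge response is strong, so the cheapest path follows the contour. The paper uses true fast marching; this Dijkstra version with 4-connectivity is only a rough stand-in, and the cost map is illustrative:

```python
import heapq
import numpy as np

def minimal_path(cost, start, goal):
    """Dijkstra shortest path on a 2-D cost image, a discrete stand-in
    for fast-marching minimal paths; start/goal are (row, col) tuples."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

With a cost image that is cheap along an edge and expensive elsewhere, the returned path traces the connected contour between the two endpoints.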

  20. Moving object segmentation algorithm based on cellular neural networks in the H.264 compressed domain

    NASA Astrophysics Data System (ADS)

    Feng, Jie; Chen, Yaowu; Tian, Xiang

    2009-07-01

    A cellular neural network (CNN)-based moving object segmentation algorithm in the H.264 compressed domain is proposed. This algorithm mainly utilizes motion vectors directly extracted from H.264 bitstreams. To improve the robustness of the motion vector information, the intramodes in I-frames are used for smooth and nonsmooth region classification, and the residual coefficient energy of P-frames is used to update the classification results first. Then, an adaptive motion vector filter is used according to interpartition modes. Finally, many CNN models are applied to implement moving object segmentation based on motion vector fields. Experiment results are presented to verify the efficiency and the robustness of this algorithm.

  1. Genetic algorithm based deliverable segments optimization for static intensity-modulated radiotherapy.

    PubMed

    Li, Yongjie; Yao, Jonathan; Yao, Dezhong

    2003-10-21

    The static delivery technique (also called the step-and-shoot technique) has been widely used in intensity-modulated radiotherapy (IMRT) because of its simple delivery and easy quality assurance. Conventional static IMRT consists of two steps: first, calculating the intensity-modulated beam profiles using an inverse planning algorithm, and then translating these profiles into a series of uniform segments using a leaf-sequencing tool. In order to simplify the procedure and shorten the treatment time of the static mode, an efficient technique, called genetic algorithm based deliverable segments optimization (GADSO), is developed in our work, which combines these two steps into one. Taking the pre-defined beams and the total number of segments per treatment as input, the number of segments for each beam, the segment shapes and the segment weights are determined automatically. A group of interim modulated beam profiles quickly calculated using a conjugate gradient (CG) method is used to determine the segment number for each beam and to initialize segment shapes. A modified genetic algorithm based on a two-dimensional binary coding scheme is used to optimize the segment shapes, and a CG method is used to optimize the segment weights. The physical characteristics of a multileaf collimator, such as the leaf interdigitation limits and the maximum leaf over-travel distance, are incorporated into the optimization. The algorithm is applied to some examples, and the results demonstrate that GADSO is able to produce highly conformal dose distributions using 20-30 deliverable segments per treatment within a clinically acceptable computation time.
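The shape/weight split above can be sketched as a toy 1-D analogue: a genetic algorithm evolves segment shapes (here, intervals standing in for leaf-pair apertures), while segment weights are fitted by least squares at each evaluation (standing in for the CG weight optimization). All names, sizes, and operators below are illustrative assumptions, not GADSO's actual 2-D binary coding or MLC constraints:

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(genome, n):
    """genome: (k, 2) ints; each row (l, r) is one 1-D aperture [l, r)."""
    masks = np.zeros((len(genome), n))
    for i, (l, r) in enumerate(genome):
        masks[i, l:r] = 1.0
    return masks

def fitness(genome, target):
    """Negative squared error of the best non-negative weighting."""
    masks = decode(genome, len(target))
    w, *_ = np.linalg.lstsq(masks.T, target, rcond=None)
    w = np.clip(w, 0.0, None)  # deliverable weights must be non-negative
    return -np.sum((masks.T @ w - target) ** 2)

def evolve(target, k=3, pop=30, gens=60):
    n = len(target)
    def rand_genome():
        g = np.sort(rng.integers(0, n, (k, 2)), axis=1)
        g[:, 1] += 1  # guarantee l < r initially
        return g
    popu = [rand_genome() for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda g: -fitness(g, target))
        elite = popu[: pop // 4]  # keep the best quarter
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.integers(0, len(elite), 2)
            child = np.where(rng.random((k, 1)) < 0.5, elite[a], elite[b])
            if rng.random() < 0.5:  # mutate one leaf position
                i, j = rng.integers(k), rng.integers(2)
                child[i, j] = rng.integers(0, n + 1)
                child[i] = np.sort(child[i])
            children.append(child)
        popu = elite + children
    popu.sort(key=lambda g: -fitness(g, target))
    return popu[0], fitness(popu[0], target)
```

Elitism keeps the best genomes across generations, so the best fitness never decreases; the real method replaces the interval genome with 2-D binary aperture maps subject to leaf constraints.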

  2. Color segmentation in the HSI color space using the K-means algorithm

    NASA Astrophysics Data System (ADS)

    Weeks, Arthur R.; Hague, G. Eric

    1997-04-01

    Segmentation of images is an important aspect of image recognition. While grayscale image segmentation has become quite a mature field, much less work has been done with regard to color image segmentation. Until recently, this was predominantly due to the lack of available computing power and color display hardware required to manipulate true-color (24-bit) images. Today, it is not uncommon to find a standard desktop computer system with a true-color 24-bit display, at least 8 million bytes of memory, and 2 gigabytes of hard disk storage. Segmentation of color images is not as simple as segmenting each of the three RGB color components separately. The difficulty of using the RGB color space is that it does not closely model the psychological understanding of color. A better color model, which closely follows human visual perception, is the hue, saturation, intensity (HSI) model. This color model separates the color components in terms of chromatic and achromatic information. Strickland et al. showed the importance of color in the extraction of edge features from an image. Their method enhances the edges that are detectable in the luminance image with information from the saturation image. Segmentation of both the saturation and intensity components is easily accomplished with any grayscale segmentation algorithm, since these spaces are linear. The modulus 2π nature of the hue color component makes its segmentation difficult. For example, hues of 0 and 2π yield the same color tint. Instead of applying separate image segmentation to each of the hue, saturation, and intensity components, a better method is to segment the chromatic component separately from the intensity component because of the importance that the chromatic information plays in the segmentation of color images. This paper presents a method of using the grayscale K-means algorithm to segment 24-bit color images. Additionally, this paper will show the importance the hue
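One common way around the modulus-2π problem discussed above (not necessarily the paper's) is to embed each hue on the unit circle as (cos h, sin h) before running K-means, so that hues of 0 and 2π map to the same point. A minimal NumPy sketch, with an illustrative farthest-point initialization:

```python
import numpy as np

def hue_kmeans(hue, k, iters=50, seed=0):
    """K-means on the hue channel via a unit-circle embedding,
    which makes the clustering respect hue's 2*pi periodicity."""
    rng = np.random.default_rng(seed)
    pts = np.column_stack([np.cos(hue), np.sin(hue)])
    centers = pts[[rng.integers(len(pts))]]
    while len(centers) < k:  # farthest-point init avoids duplicate seeds
        d = ((pts[:, None, :] - centers[None]) ** 2).sum(-1).min(1)
        centers = np.vstack([centers, pts[d.argmax()]])
    for _ in range(iters):
        d = ((pts[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                c = pts[labels == j].mean(0)
                centers[j] = c / max(np.linalg.norm(c), 1e-12)  # back onto circle
    return labels, np.arctan2(centers[:, 1], centers[:, 0]) % (2 * np.pi)
```

With this embedding, hues just below 2π and just above 0 cluster together, as the same tint should.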

  3. A Novel Histogram Region Merging Based Multithreshold Segmentation Algorithm for MR Brain Images

    PubMed Central

    Shen, Xuanjing; Feng, Yuncong

    2017-01-01

    Multithreshold segmentation algorithms are time-consuming, and their time complexity increases exponentially with the number of thresholds. In order to reduce the time complexity, a novel multithreshold segmentation algorithm is proposed in this paper. First, all gray levels are used as thresholds, so the histogram of the original image is divided into 256 small regions, and each region corresponds to one gray level. Then, two adjacent regions are merged in each iteration by a newly designed scheme, and a threshold is removed each time. To improve the accuracy of the merger operation, variance and probability are used as the energy. Regardless of the number of thresholds, the time complexity of the algorithm remains O(L). Finally, experiments were conducted on many MR brain images to verify the performance of the proposed algorithm. Experimental results show that our method can reduce the running time effectively and obtain segmentation results with high accuracy. PMID:28408922
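The merging idea can be sketched greedily: every gray level starts as its own region, and the adjacent pair whose fusion least increases the probability-weighted variance is merged until the desired number of classes remains. Note this naive sketch is O(L²); the paper's scheme achieves O(L), and its exact energy may differ:

```python
import numpy as np

def merge_thresholds(hist, n_classes):
    """Greedy adjacent-region merging on a normalized histogram;
    returns the remaining thresholds (interior region edges)."""
    levels = np.arange(len(hist), dtype=float)
    # each region tracks [probability, mean gray level, weighted variance sum]
    regions = [[p, g, 0.0] for g, p in zip(levels, hist)]
    bounds = list(range(len(hist)))  # left edge of each region
    def merged(r1, r2):
        (p1, m1, s1), (p2, m2, s2) = r1, r2
        p = p1 + p2
        if p == 0:
            return [0.0, (m1 + m2) / 2, 0.0]
        m = (p1 * m1 + p2 * m2) / p
        s = s1 + s2 + p1 * (m1 - m) ** 2 + p2 * (m2 - m) ** 2
        return [p, m, s]
    while len(regions) > n_classes:
        costs = [merged(regions[i], regions[i + 1])[2]
                 - regions[i][2] - regions[i + 1][2]
                 for i in range(len(regions) - 1)]
        i = int(np.argmin(costs))  # cheapest adjacent merge
        regions[i:i + 2] = [merged(regions[i], regions[i + 1])]
        del bounds[i + 1]
    return bounds[1:]
```

Empty histogram bins merge at zero cost, so on a clean bimodal histogram the last surviving boundary falls in the valley between the two modes.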

  4. Phasing the mirror segments of the Keck telescopes II: the narrow-band phasing algorithm.

    PubMed

    Chanan, G; Ohara, C; Troy, M

    2000-09-01

    In a previous paper, we described a successful technique, the broadband algorithm, for phasing the primary mirror segments of the Keck telescopes to an accuracy of 30 nm. Here we describe a complementary narrow-band algorithm. Although it has a limited dynamic range, it is much faster than the broadband algorithm and can achieve an unprecedented phasing accuracy of approximately 6 nm. Cross checks between these two independent techniques validate both methods to a high degree of confidence. Both algorithms converge to the edge-minimizing configuration of the segmented primary mirror, which is not the same as the overall wave-front-error-minimizing configuration, but we demonstrate that this distinction disappears as the segment aberrations are reduced to zero.

  5. An automated blood vessel segmentation algorithm using histogram equalization and automatic threshold selection.

    PubMed

    Saleh, Marwan D; Eswaran, C; Mueen, Ahmed

    2011-08-01

    This paper focuses on the detection of retinal blood vessels, which play a vital role in reducing proliferative diabetic retinopathy and preventing the loss of visual capability. The proposed algorithm, which takes advantage of powerful preprocessing techniques such as contrast enhancement and thresholding, offers an automated segmentation procedure for retinal blood vessels. To evaluate the performance of the new algorithm, experiments were conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm performs better than the other known algorithms in terms of accuracy. Furthermore, the proposed algorithm, being simple and easy to implement, is best suited for fast processing applications.
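The enhancement-then-threshold pipeline above can be sketched with global histogram equalization followed by an automatic threshold. Otsu's criterion is used here as a standard automatic threshold choice; the paper's exact thresholding scheme may differ:

```python
import numpy as np

def equalize(img):
    """Global histogram equalization for an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return (cdf[img] * 255).astype(np.uint8)

def otsu(img):
    """Otsu's automatic threshold: maximize between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    w = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))  # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w - mu) ** 2 / (w * (1 - w))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))
```

On a vessel image one would equalize first, then threshold the enhanced image; on a clean bimodal image the Otsu threshold lands between the two intensity modes.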

  6. Automated segmentation and reconstruction of patient-specific cardiac anatomy and pathology from in vivo MRI*

    NASA Astrophysics Data System (ADS)

    Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Filgueiras-Rama, David; Pizarro, Gonzalo; Ibañez, Borja; Berenfeld, Omer; Boyers, Pamela; Gold, Jeffrey

    2012-12-01

    This paper presents an automated method to segment left ventricle (LV) tissues from functional and delayed-enhancement (DE) cardiac magnetic resonance imaging (MRI) scans using a sequential multi-step approach. First, a region of interest (ROI) is computed to create a subvolume around the LV using morphological operations and image arithmetic. From the subvolume, the myocardial contours are automatically delineated using difference of Gaussians (DoG) filters and GSV snakes. These contours are used as a mask to identify pathological tissues, such as fibrosis or scar, within the DE-MRI. The presented automated technique is able to accurately delineate the myocardium and identify the pathological tissue in patient sets. The results were validated by two expert cardiologists, and in one set the automated results are quantitatively and qualitatively compared with expert manual delineation. Furthermore, the method is patient-specific, performed on an entire patient MRI series. Thus, in addition to providing a quick analysis of individual MRI scans, the fully automated segmentation method is used for effectively tagging regions in order to reconstruct computerized patient-specific 3D cardiac models. These models can then be used in electrophysiological studies and surgical strategy planning.
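The difference-of-Gaussians (DoG) filtering step above can be sketched in a few lines; the sigmas are illustrative, and the GSV snake step that the paper pairs with DoG is omitted:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(img, sigma_small=1.0, sigma_large=3.0):
    """Difference-of-Gaussians band-pass filter: responds strongly
    near boundaries and weakly in flat regions."""
    return gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)
```

Applied to a cardiac slice, the DoG response peaks along tissue boundaries, which is what makes it a useful initialization for contour delineation.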

  7. Algorithms for automatic segmentation of bovine embryos produced in vitro

    NASA Astrophysics Data System (ADS)

    Melo, D. H.; Nascimento, M. Z.; Oliveira, D. L.; Neves, L. A.; Annes, K.

    2014-03-01

    In vitro production has been employed in bovine embryos, and quantification of lipids is fundamental to understand the metabolism of these embryos. This paper presents an unsupervised segmentation method for histological images of bovine embryos. In this method, an anisotropic filter was applied to the different RGB components. After the pre-processing step, a thresholding technique based on maximum entropy was applied to separate lipid droplets in the histological slides at different stages: early cleavage, morula and blastocyst. In the post-processing step, false positives are removed using the connected components technique, which identifies regions with excess dye near the pellucid zone. The proposed segmentation method was applied to 30 histological images of bovine embryos. Experiments were performed with the images, and statistical measures of sensitivity, specificity and accuracy were calculated based on reference images (gold standard). The accuracy of the proposed method was 96% with a standard deviation of 3%.
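Maximum-entropy thresholding in the sense of Kapur (assumed here to be the criterion the paper means) picks the threshold that maximizes the summed entropies of the two resulting class distributions. A NumPy sketch:

```python
import numpy as np

def max_entropy_threshold(img):
    """Kapur's maximum-entropy threshold for an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    P = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = P[t], 1.0 - P[t]
        if p0 <= 0 or p1 <= 0:
            continue
        a = p[: t + 1] / p0  # class-0 distribution
        b = p[t + 1:] / p1   # class-1 distribution
        h = (-(a[a > 0] * np.log(a[a > 0])).sum()
             - (b[b > 0] * np.log(b[b > 0])).sum())
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

On a well-separated bimodal intensity distribution the selected threshold lies between the two modes, separating droplet pixels from background.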

  8. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.

    PubMed

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we proposed a new clustering algorithm named localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density based isolation criteria, our proposed centroid distance isolation criterion addresses the problem caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method to separate naturally isolated clusters but also can identify the clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it.

  9. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    PubMed Central

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we proposed a new clustering algorithm named localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density based isolation criteria, our proposed centroid distance isolation criterion addresses the problem caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method to separate naturally isolated clusters but also can identify the clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133

  10. A Review of Algorithms for Segmentation of Optical Coherence Tomography from Retina

    PubMed Central

    Kafieh, Raheleh; Rabbani, Hossein; Kermani, Saeed

    2013-01-01

    Optical coherence tomography (OCT) is a recently established imaging technique used to describe different information about the internal structures of an object and to image various aspects of biological tissues. OCT image segmentation is mostly applied to retinal OCT to localize the intra-retinal boundaries. Here, we review some of the important image segmentation methods for processing retinal OCT images. We may classify the OCT segmentation approaches into five distinct groups according to the image domain subjected to the segmentation algorithm. Current research in OCT segmentation mostly focuses on improving accuracy and precision, and on reducing the required processing time. There is no doubt that current 3-D imaging modalities are now moving the research projects toward volume segmentation along with 3-D rendering and visualization. It is also important to develop robust methods capable of dealing with pathologic cases in OCT imaging. PMID:24083137

  11. An approach to a comprehensive test framework for analysis and evaluation of text line segmentation algorithms.

    PubMed

    Brodic, Darko; Milivojevic, Dragan R; Milivojevic, Zoran N

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of this mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multiline text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error type classification, are proposed. The first is based on the segmentation line error description, while the second one incorporates well-known signal detection theory. Each of them has different capabilities and conveniences, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe measurement procedures.

  12. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    PubMed Central

    Brodic, Darko; Milivojevic, Dragan R.; Milivojevic, Zoran N.

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of this mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multiline text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error type classification, are proposed. The first is based on the segmentation line error description, while the second one incorporates well-known signal detection theory. Each of them has different capabilities and conveniences, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe measurement procedures. PMID:22164106

  13. Comparison of vessel enhancement algorithms applied to Time-of-Flight MRA images for cerebrovascular segmentation.

    PubMed

    Phellan, Renzo; Forkert, Nils D

    2017-09-07

    Vessel enhancement algorithms are often used as a preprocessing step for vessel segmentation in medical images to improve the overall segmentation accuracy. Each algorithm uses different characteristics to enhance vessels, such that the most suitable algorithm may vary for different applications. This paper presents a comparative analysis of the accuracy gains in vessel segmentation generated by the use of nine vessel enhancement algorithms: multiscale vesselness using the formulas described by Erdt (MSE), Frangi (MSF), and Sato (MSS), optimally oriented flux (OOF), ranking orientation responses of path operators (RORPO), the regularized Perona-Malik approach (RPM), vessel enhancing diffusion (VED), hybrid diffusion with continuous switch (HDCS), and the white top hat algorithm (WTH). The filters were evaluated and compared based on time-of-flight MRA datasets and corresponding manual segmentations from five healthy subjects and ten patients with an arteriovenous malformation. Additionally, five synthetic angiographic datasets with corresponding ground truth segmentation were generated with three different noise levels (low, medium, high) and also used for comparison. The parameters for each algorithm and subsequent segmentation were optimized using leave-one-out cross evaluation. The Dice coefficient, Matthews correlation coefficient, area under the ROC curve, number of connected components, and true positives were used for comparison. The results of this study suggest that vessel enhancement algorithms do not always lead to more accurate segmentation results compared to segmenting non-enhanced images directly. Multiscale vesselness algorithms, such as MSE, MSF, and MSS proved to be robust to noise, while diffusion-based filters, such as RPM, VED, and HDCS ranked in the top of the list in scenarios with medium or no noise. Filters that assume tubular shapes, such as MSE, MSF, MSS, OOF, RORPO, and VED show a decrease in accuracy when considering patients with an AVM.
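Two of the comparison metrics above, the Dice coefficient and the Matthews correlation coefficient (MCC), are straightforward to compute from a predicted and a reference binary mask:

```python
import numpy as np

def dice(seg, ref):
    """Dice coefficient: overlap of two binary masks (1 = identical)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return 2.0 * (seg & ref).sum() / (seg.sum() + ref.sum())

def mcc(seg, ref):
    """Matthews correlation coefficient from the confusion counts;
    unlike Dice it also rewards correct background (true negatives)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = float((seg & ref).sum())
    tn = float((~seg & ~ref).sum())
    fp = float((seg & ~ref).sum())
    fn = float((~seg & ref).sum())
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

MCC is often preferred for vessel masks because the background class dominates the image, which inflates accuracy-style measures.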

  14. Simulation of MR angiography imaging for validation of cerebral arteries segmentation algorithms.

    PubMed

    Klepaczko, Artur; Szczypiński, Piotr; Deistung, Andreas; Reichenbach, Jürgen R; Materka, Andrzej

    2016-12-01

    Accurate vessel segmentation of magnetic resonance angiography (MRA) images is essential for computer-aided diagnosis of cerebrovascular diseases such as stenosis or aneurysm. The ability of a segmentation algorithm to correctly reproduce the geometry of the arterial system should be expressed quantitatively and observer-independently to ensure objectivism of the evaluation. This paper introduces a methodology for validating vessel segmentation algorithms using a custom-designed MRA simulation framework. For this purpose, a realistic reference model of an intracranial arterial tree was developed based on a real Time-of-Flight (TOF) MRA data set. With this specific geometry blood flow was simulated and a series of TOF images was synthesized using various acquisition protocol parameters and signal-to-noise ratios. The synthesized arterial tree was then reconstructed using a level-set segmentation algorithm available in the Vascular Modeling Toolkit (VMTK). Moreover, to present versatile application of the proposed methodology, validation was also performed for two alternative techniques: a multi-scale vessel enhancement filter and the Chan-Vese variant of the level-set-based approach, as implemented in the Insight Segmentation and Registration Toolkit (ITK). The segmentation results were compared against the reference model. The accuracy in determining the vessels centerline courses was very high for each tested segmentation algorithm (mean error rate = 5.6% if using VMTK). However, the estimated radii exhibited deviations from ground truth values with mean error rates ranging from 7% up to 79%, depending on the vessel size, image acquisition and segmentation method. We demonstrated the practical application of the designed MRA simulator as a reliable tool for quantitative validation of MRA image processing algorithms that provides objective, reproducible results and is observer independent. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Advanced Dispersed Fringe Sensing Algorithm for Coarse Phasing Segmented Mirror Telescopes

    NASA Technical Reports Server (NTRS)

    Spechler, Joshua A.; Hoppe, Daniel J.; Sigrist, Norbert; Shi, Fang; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.

    2013-01-01

    Segment mirror phasing, a critical step of segment mirror alignment, requires the ability to sense and correct the relative pistons between segments from up to a few hundred microns to a fraction of a wavelength in order to bring the mirror system to its full diffraction capability. When sampling the aperture of a telescope, using auto-collimating flats (ACFs) is more economical. The performance of a telescope with a segmented primary mirror strongly depends on how well those primary mirror segments can be phased. One such process to phase primary mirror segments in the axial piston direction is dispersed fringe sensing (DFS). DFS technology can be used to co-phase the ACFs. DFS is essentially a signal fitting and processing operation. It is an elegant method of coarse phasing segmented mirrors. DFS performance accuracy is dependent upon careful calibration of the system as well as other factors such as internal optical alignment, system wavefront errors, and detector quality. Novel improvements to the algorithm have led to substantial enhancements in DFS performance. The Advanced Dispersed Fringe Sensing (ADFS) Algorithm is designed to reduce the sensitivity to calibration errors by determining the optimal fringe extraction line. Applying an angular extraction line dithering procedure, and combining this dithering process with an error function while minimizing the phase term of the fitted signal, defines, in essence, the ADFS algorithm.

  16. Automatic segmentation of lesion from breast DCE-MR image using artificial fish swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Janaki, Sathya D.; Geetha, K.

    2017-06-01

    Interpreting Dynamic Contrast-Enhanced (DCE) MR images for signs of breast cancer is time consuming and complex, since the amount of data that needs to be examined by a radiologist in breast DCE-MRI to locate suspicious lesions is huge. Misclassifications can arise either from overlooking a suspicious region or from incorrectly interpreting a suspicious region. Segmentation of breast DCE-MRI for the detection of suspicious lesions is thus attractive, because it drastically decreases the amount of data that needs to be examined. A new segmentation method for the detection of suspicious lesions in DCE-MRI of breast tissue, based on an artificial fish swarm clustering algorithm, is presented in this paper. The artificial fish swarm optimization algorithm is a swarm intelligence algorithm that performs a population-based search combining neighborhood search with random search. The major criteria for segmentation are based on the image voxel values and the parameters of an empirical parametric model of segmentation algorithms. The experimental results show a considerable impact on the performance of the segmentation algorithm, which can assist the physician with the task of locating suspicious regions in minimal time.

  17. Computer-assisted liver tumor surgery using a novel semiautomatic and a hybrid semiautomatic segmentation algorithm.

    PubMed

    Zygomalas, Apollon; Karavias, Dionissios; Koutsouris, Dimitrios; Maroulis, Ioannis; Karavias, Dimitrios D; Giokas, Konstantinos; Megalooikonomou, Vasileios

    2016-05-01

    We developed a medical image segmentation and preoperative planning application which implements a semiautomatic and a hybrid semiautomatic liver segmentation algorithm. The aim of this study was to evaluate the feasibility of computer-assisted liver tumor surgery using these algorithms which are based on thresholding by pixel intensity value from initial seed points. A random sample of 12 patients undergoing elective high-risk hepatectomies at our institution was prospectively selected to undergo computer-assisted surgery using our algorithms (June 2013-July 2014). Quantitative and qualitative evaluation was performed. The average computer analysis time (segmentation, resection planning, volumetry, visualization) was 45 min/dataset. The runtime for the semiautomatic algorithm was <0.2 s/slice. Liver volumetric segmentation using the hybrid method was achieved in 12.9 s/dataset (SD ± 6.14). Mean similarity index was 96.2 % (SD ± 1.6). The future liver remnant volume calculated by the application showed a correlation of 0.99 to that calculated using manual boundary tracing. The 3D liver models and the virtual liver resections had an acceptable coincidence with the real intraoperative findings. The patient-specific 3D models produced using our semiautomatic and hybrid semiautomatic segmentation algorithms proved to be accurate for the preoperative planning in liver tumor surgery and effectively enhanced the intraoperative medical image guidance.
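The core mechanism above, thresholding by pixel intensity from initial seed points, can be sketched as a seeded region growing on a 2-D slice. The tolerance and 4-connectivity below are illustrative assumptions, not the authors' parameters:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a connected region from a (row, col) seed, accepting pixels
    whose intensity lies within +/- tol of the seed intensity."""
    lo, hi = int(img[seed]) - tol, int(img[seed]) + tol
    mask = np.zeros(img.shape, bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not mask[nr, nc] and lo <= img[nr, nc] <= hi):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask
```

In a 3-D liver CT the same idea runs per slice or with 6-connected voxels, and the resulting mask feeds volumetry and 3-D model reconstruction.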

  18. Fuzzy C-Means Algorithm for Segmentation of Aerial Photography Data Obtained Using Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Akinin, M. V.; Akinina, N. V.; Klochkov, A. Y.; Nikiforov, M. B.; Sokolova, A. V.

    2015-05-01

    This report reviews the fuzzy c-means algorithm for image segmentation, estimates the quality of its output using the Xie-Beni criterion, and presents experimental studies of the algorithm in the context of producing detailed two-dimensional maps with unmanned aerial vehicles. Based on the experimental results, it is concluded that the algorithm is applicable to decoding images obtained by aerial photography. The algorithm considered can partition the original image into a set of segments (clusters) in a relatively short time, which is achieved by modifying the original (crisp) k-means algorithm to operate in a fuzzy setting.
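    A minimal sketch of fuzzy c-means with the Xie-Beni quality criterion on 1-D intensity data (assumes two clusters and a crude initialisation; not the report's implementation):

    ```python
    def fcm(data, k=2, m=2.0, iters=50):
        """Plain fuzzy c-means on 1-D data; returns (centers, memberships)."""
        centers = [data[0], data[-1]]  # crude initialisation; assumes k=2
        u = [[0.0] * k for _ in data]
        for _ in range(iters):
            # update memberships: u_ij = 1 / sum_c (d_j / d_c)^(2/(m-1))
            for i, x in enumerate(data):
                for j in range(k):
                    dj = abs(x - centers[j]) or 1e-12
                    u[i][j] = 1.0 / sum(
                        (dj / (abs(x - c) or 1e-12)) ** (2 / (m - 1))
                        for c in centers)
            # update centers as membership-weighted means
            for j in range(k):
                num = sum(u[i][j] ** m * x for i, x in enumerate(data))
                den = sum(u[i][j] ** m for i in range(len(data)))
                centers[j] = num / den
        return centers, u

    def xie_beni(data, centers, u, m=2.0):
        """Xie-Beni index: compactness / separation (lower is better)."""
        compact = sum(u[i][j] ** m * (x - centers[j]) ** 2
                      for i, x in enumerate(data) for j in range(len(centers)))
        sep = min((a - b) ** 2 for ai, a in enumerate(centers)
                  for b in centers[ai + 1:])
        return compact / (len(data) * sep)

    pixels = [10, 12, 11, 9, 200, 205, 198, 202]   # two intensity clusters
    centers, u = fcm(sorted(pixels))
    print(sorted(centers), xie_beni(sorted(pixels), centers, u))
    ```

    On well-separated data the centers converge near the cluster means and the Xie-Beni index is small, which is the sense in which the criterion scores a segmentation.
    
    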

  19. Lung nodule volumetry: segmentation algorithms within the same software package cannot be used interchangeably.

    PubMed

    Ashraf, H; de Hoop, B; Shaker, S B; Dirksen, A; Bach, K S; Hansen, H; Prokop, M; Pedersen, J H

    2010-08-01

    We examined the reproducibility of lung nodule volumetry software that offers three different volumetry algorithms. In a lung cancer screening trial, 188 baseline nodules >5 mm were identified. Including follow-ups, these nodules formed a study-set of 545 nodules. Nodules were independently double read by two readers using commercially available volumetry software. The software offers readers three different analysing algorithms. We compared the inter-observer variability of nodule volumetry when the readers used the same and different algorithms. Both readers were able to correctly segment and measure 72% of nodules. In 80% of these cases, the readers chose the same algorithm. When readers used the same algorithm, exactly the same volume was measured in 50% of readings and a difference of >25% was observed in 4%. When the readers used different algorithms, 83% of measurements showed a difference of >25%. Modern volumetric software failed to correctly segment a high number of screen detected nodules. While choosing a different algorithm can yield better segmentation of a lung nodule, reproducibility of volumetric measurements deteriorates substantially when different algorithms were used. It is crucial even in the same software package to choose identical parameters for follow-up.

  20. A hybrid algorithm for instant optimization of beam weights in anatomy-based intensity modulated radiotherapy: A performance evaluation study.

    PubMed

    Vaitheeswaran, Ranganathan; Sathiya, Narayanan V K; Bhangle, Janhavi R; Nirhali, Amit; Kumar, Namita; Basu, Sumit; Maiya, Vikram

    2011-04-01

    The study aims to introduce a hybrid optimization algorithm for anatomy-based intensity modulated radiotherapy (AB-IMRT). Our proposal is that by integrating an exact optimization algorithm with a heuristic optimization algorithm, the advantages of both the algorithms can be combined, which will lead to an efficient global optimizer solving the problem at a very fast rate. Our hybrid approach combines Gaussian elimination algorithm (exact optimizer) with fast simulated annealing algorithm (a heuristic global optimizer) for the optimization of beam weights in AB-IMRT. The algorithm has been implemented using MATLAB software. The optimization efficiency of the hybrid algorithm is clarified by (i) analysis of the numerical characteristics of the algorithm and (ii) analysis of the clinical capabilities of the algorithm. The numerical and clinical characteristics of the hybrid algorithm are compared with Gaussian elimination method (GEM) and fast simulated annealing (FSA). The numerical characteristics include convergence, consistency, number of iterations and overall optimization speed, which were analyzed for the respective cases of 8 patients. The clinical capabilities of the hybrid algorithm are demonstrated in cases of (a) prostate and (b) brain. 
The analyses reveal that (i) the convergence speed of the hybrid algorithm is approximately three times higher than that of FSA algorithm; (ii) the convergence (percentage reduction in the cost function) in hybrid algorithm is about 20% improved as compared to that in GEM algorithm; (iii) the hybrid algorithm is capable of producing relatively better treatment plans in terms of Conformity Index (CI) [~ 2% - 5% improvement] and Homogeneity Index (HI) [~ 4% - 10% improvement] as compared to GEM and FSA algorithms; (iv) the sparing of organs at risk in hybrid algorithm-based plans is better than that in GEM-based plans and comparable to that in FSA-based plans; and (v) the beam weights resulting from the hybrid algorithm are
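    The hybrid idea — an exact solve seeding a stochastic refinement — can be sketched on a toy least-squares beam-weight problem (the dose matrix, cooling schedule, and step size below are invented; the paper's clinical objective is far more elaborate):

    ```python
    import math
    import random

    def gauss_solve(A, b):
        """Solve A x = b by Gaussian elimination with partial pivoting."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][c] * x[c]
                                  for c in range(r + 1, n))) / M[r][r]
        return x

    def cost(D, w, d):
        """Squared deviation of delivered dose D w from prescription d."""
        return sum((sum(Di[j] * w[j] for j in range(len(w))) - di) ** 2
                   for Di, di in zip(D, d))

    def hybrid(D, d, steps=2000, seed=0):
        """Exact seed (normal equations via elimination), then simulated
        annealing that enforces non-negative beam weights."""
        n = len(D[0])
        DtD = [[sum(D[r][i] * D[r][j] for r in range(len(D)))
                for j in range(n)] for i in range(n)]
        Dtd = [sum(D[r][i] * d[r] for r in range(len(D))) for i in range(n)]
        w = [max(0.0, wi) for wi in gauss_solve(DtD, Dtd)]  # clamp to feasible
        rng = random.Random(seed)
        cur, c_cur = w[:], cost(D, w, d)
        best, c_best = cur[:], c_cur
        T = 1.0
        for _ in range(steps):
            cand = [max(0.0, wi + rng.gauss(0, 0.05)) for wi in cur]
            c = cost(D, cand, d)
            # Metropolis acceptance: always take improvements, sometimes worse
            if c < c_cur or rng.random() < math.exp(-(c - c_cur) / max(T, 1e-9)):
                cur, c_cur = cand, c
                if c_cur < c_best:
                    best, c_best = cur[:], c_cur
            T *= 0.995
        return best, c_best

    D = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy dose-influence matrix
    d = [1.0, 2.0, 3.0]                        # prescribed doses
    w, c = hybrid(D, d)
    print(w, c)
    ```

    The exact seed lands the search at (or near) the unconstrained optimum immediately, so the annealing stage only needs to repair constraint violations — the mechanism behind the reported speed-up over pure FSA.
    
    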

  1. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    PubMed

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-08

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (< 1 ms) with a satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of
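    The two evaluation metrics are straightforward to compute from pixel sets; a sketch with toy masks (Dice overlap plus centroid-distance TRE):

    ```python
    def dice(a, b):
        """Dice coefficient between two pixel sets: 2|A∩B| / (|A|+|B|)."""
        return 2 * len(a & b) / (len(a) + len(b))

    def centroid(mask):
        n = len(mask)
        return (sum(r for r, _ in mask) / n, sum(c for _, c in mask) / n)

    def tre(a, b):
        """Target registration error: distance between ROI centroids."""
        (r1, c1), (r2, c2) = centroid(a), centroid(b)
        return ((r1 - r2) ** 2 + (c1 - c2) ** 2) ** 0.5

    manual = {(r, c) for r in range(2, 6) for c in range(2, 6)}  # 4x4 ROI
    auto = {(r, c) for r in range(3, 7) for c in range(2, 6)}    # shifted 1 px
    print(dice(manual, auto), tre(manual, auto))  # 0.75 1.0
    ```
    
    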

  2. Side scan sonar image segmentation based on neutrosophic set and quantum-behaved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhu; Wang, Xiao; Zhang, Hongmei; Hu, Jun; Jian, Xiaomin

    2016-09-01

    To fulfill side scan sonar (SSS) image segmentation accurately and efficiently, a novel segmentation algorithm based on neutrosophic set (NS) and quantum-behaved particle swarm optimization (QPSO) is proposed in this paper. Firstly, the neutrosophic subset images are obtained by transforming the input image into the NS domain. Then, a co-occurrence matrix is accurately constructed based on these subset images, and the entropy of the gray-level image is defined to serve as the fitness function of the QPSO algorithm. Moreover, the optimal two-dimensional segmentation threshold vector is quickly obtained by QPSO. Finally, the contours of the target of interest are segmented with the threshold vector and extracted by mathematical morphology operations. To further improve the segmentation efficiency, single-threshold segmentation, an alternative algorithm, is recommended for the shadow segmentation by considering the gray-level characteristics of the shadow. The accuracy and efficiency of the proposed algorithm are assessed with experiments on SSS image segmentation.

  3. An improved Marching Cube algorithm for 3D data segmentation

    NASA Astrophysics Data System (ADS)

    Masala, G. L.; Golosio, B.; Oliva, P.

    2013-03-01

    The marching cube algorithm is one of the most popular algorithms for isosurface triangulation. It is based on a division of the data volume into elementary cubes, followed by a standard triangulation inside each cube. In the original formulation, the marching cube algorithm is based on 15 basic triangulations and a total of 256 elementary triangulations are obtained from the basic ones by rotation, reflection, conjugation, and combinations of these operations. The original formulation of the algorithm suffers from well-known problems of connectivity among triangles of adjacent cubes, which have been solved in various ways. We developed a variant of the marching cube algorithm that makes use of 21 basic triangulations. Triangles of adjacent cubes are always well connected in this approach. The output of the code is a triangulated model of the isosurface in raw format or in VRML (Virtual Reality Modelling Language) format. Catalogue identifier: AENS_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 147558 No. of bytes in distributed program, including test data, etc.: 26084066 Distribution format: tar.gz Programming language: C. Computer: Pentium 4, CPU 3.2 GHz and 3.24 GB of RAM (2.77 GHz). Operating system: Tested on several Linux distributions, but generally works on all Linux-like platforms. RAM: Approximately 2 MB Classification: 6.5. Nature of problem: Given a scalar field μ(x,y,z) sampled on a 3D regular grid, build a discrete model of the isosurface associated to the isovalue μIso, which is defined as the set of points that satisfy the equation μ(x,y,z)=μIso. Solution method: The proposed solution is an improvement of the Marching Cube algorithm, which approximates the isosurface using a set of
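    The cube-classification step that drives the triangulation lookup can be sketched as follows: each cube gets an 8-bit index from its corners' inside/outside status, and only "mixed" cubes (index neither 0 nor 255) intersect the isosurface. The corner bit ordering here is illustrative, not the program's table ordering:

    ```python
    def cube_index(field, x, y, z, iso):
        """8-bit corner configuration of the cube at (x, y, z): bit k is set
        when corner k lies inside the isosurface (field value < iso)."""
        corners = [(x + dx, y + dy, z + dz)
                   for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)]
        idx = 0
        for bit, p in enumerate(corners):
            if field[p] < iso:
                idx |= 1 << bit
        return idx

    # Sample mu(x,y,z) = (x-c)^2 + (y-c)^2 + (z-c)^2; isovalue = radius^2,
    # so the isosurface is a sphere.
    N, iso = 8, 4.0 ** 2
    field = {(x, y, z): (x - N / 2) ** 2 + (y - N / 2) ** 2 + (z - N / 2) ** 2
             for x in range(N + 1) for y in range(N + 1) for z in range(N + 1)}
    surface_cubes = [
        (x, y, z)
        for x in range(N) for y in range(N) for z in range(N)
        if 0 < cube_index(field, x, y, z, iso) < 255
    ]
    print(len(surface_cubes))
    ```

    In the full algorithm each mixed cube's index selects one of the 256 elementary triangulations; here we only detect which cubes would be triangulated.
    
    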

  4. An automatic road segmentation algorithm using one-class SVM

    NASA Astrophysics Data System (ADS)

    Zheng, Sheng; Liu, Jian; Shi, Wenzhong; Zhu, Guangxi

    2006-10-01

    Automatic extraction of road information plays a central role in terrain-related applications. In this paper, we propose a new road extraction method using the one-class support vector machine (SVM). In a manually segmented seed road region, only some of the pixels are truly road; pixels on sidewalks, in building shadows, on cars, etc., are not. The one-class SVM is used to estimate a decision function that takes the value +1 in a small feature region capturing most of the data points in the seed road area, and -1 elsewhere. Since road pixels in the satellite image have similar properties, such as the spectral feature in a multi-spectral image, novel pixels are discriminated by the estimated decision function for road segmentation. Computational experiments were conducted on IKONOS high-resolution images. The results demonstrate that the proposed method is effective and has much higher computational efficiency than the standard pixel-based SVM classification method.
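    A minimal stand-in for the one-class decision function (a centroid-plus-radius model rather than a real one-class SVM; the feature values are invented) illustrates the +1/-1 novelty test the abstract describes:

    ```python
    def fit_one_class(points, nu=0.1):
        """Fit the centroid of the seed samples and choose the radius so
        that roughly (1 - nu) of them fall inside the boundary -- a crude
        geometric stand-in for a one-class SVM decision function."""
        dim = len(points[0])
        center = [sum(p[i] for p in points) / len(points) for i in range(dim)]
        dists = sorted(sum((p[i] - center[i]) ** 2 for i in range(dim)) ** 0.5
                       for p in points)
        radius = dists[max(0, int((1 - nu) * len(dists)) - 1)]

        def decide(p):
            d = sum((p[i] - center[i]) ** 2 for i in range(dim)) ** 0.5
            return +1 if d <= radius else -1   # +1 = road-like, -1 = novelty
        return decide

    # Hypothetical spectral features of a seed road region (near-uniform grey).
    road_seed = [(0.50, 0.52), (0.48, 0.49), (0.51, 0.50), (0.49, 0.51),
                 (0.52, 0.48), (0.50, 0.50), (0.47, 0.52), (0.53, 0.49),
                 (0.49, 0.48), (0.51, 0.51)]
    decide = fit_one_class(road_seed)
    print(decide((0.50, 0.50)), decide((0.10, 0.90)))  # road pixel vs shadow
    ```

    A real one-class SVM learns a much more flexible boundary in kernel space, but the contract is the same: +1 inside a small region capturing most seed points, -1 elsewhere.
    
    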

  5. A novel breast ultrasound image segmentation algorithm based on neutrosophic similarity score and level set.

    PubMed

    Guo, Yanhui; Şengür, Abdulkadir; Tian, Jia-Wei

    2016-01-01

    Breast ultrasound (BUS) image segmentation is a challenging task due to the speckle noise, poor quality of the ultrasound images and size and location of the breast lesions. In this paper, we propose a new BUS image segmentation algorithm based on neutrosophic similarity score (NSS) and level set algorithm. At first, the input BUS image is transferred to the NS domain via three membership subsets T, I and F, and then, a similarity score NSS is defined and employed to measure the belonging degree to the true tumor region. Finally, the level set method is used to segment the tumor from the background tissue region in the NSS image. Experiments have been conducted on a variety of clinical BUS images. Several measurements are used to evaluate and compare the proposed method's performance. The experimental results demonstrate that the proposed method is able to segment the BUS images effectively and accurately. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions

    NASA Astrophysics Data System (ADS)

    Huang, Shiqi; Huang, Wenzhun; Zhang, Ting

    2016-12-01

    The most distinctive characteristic of synthetic aperture radar (SAR) is that it can acquire data under all weather conditions and at all times. However, its coherent imaging mechanism introduces a great deal of speckle noise into SAR images, which makes the segmentation of target and shadow regions in SAR images very difficult. This paper proposes a new SAR image segmentation method based on wavelet decomposition and a constant false alarm rate (WD-CFAR). The WD-CFAR algorithm not only is insensitive to the speckle noise in SAR images but also can segment target and shadow regions simultaneously, and it is also able to effectively segment SAR images with a low signal-to-clutter ratio (SCR). Experiments were performed to assess the performance of the new algorithm on various SAR images. The experimental results show that the proposed method is effective and feasible and possesses good characteristics for general application.
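    A constant false alarm rate detector in its simplest cell-averaging form (1-D, with guard and training cells; the parameters are invented, and the paper's WD-CFAR additionally operates on wavelet-decomposed images):

    ```python
    import random

    def ca_cfar(signal, guard=2, train=8, scale=3.0):
        """Cell-averaging CFAR: a cell is declared a target when it exceeds
        `scale` times the mean of the training cells around it. Guard cells
        are excluded so the target's own energy does not inflate the
        noise estimate."""
        hits = []
        half = guard + train
        for i in range(half, len(signal) - half):
            left = signal[i - half:i - guard]
            right = signal[i + guard + 1:i + half + 1]
            noise = (sum(left) + sum(right)) / (len(left) + len(right))
            if signal[i] > scale * noise:
                hits.append(i)
        return hits

    # Speckle-like clutter with one strong target at index 30.
    rng = random.Random(1)
    clutter = [rng.uniform(0.5, 1.5) for _ in range(60)]
    clutter[30] = 12.0
    print(ca_cfar(clutter))  # [30]
    ```

    Because the threshold floats with the locally estimated clutter level, the false alarm rate stays constant even when speckle intensity varies across the image.
    
    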

  7. A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions

    PubMed Central

    Huang, Shiqi; Huang, Wenzhun; Zhang, Ting

    2016-01-01

    The most distinctive characteristic of synthetic aperture radar (SAR) is that it can acquire data under all weather conditions and at all times. However, its coherent imaging mechanism introduces a great deal of speckle noise into SAR images, which makes the segmentation of target and shadow regions in SAR images very difficult. This paper proposes a new SAR image segmentation method based on wavelet decomposition and a constant false alarm rate (WD-CFAR). The WD-CFAR algorithm not only is insensitive to the speckle noise in SAR images but also can segment target and shadow regions simultaneously, and it is also able to effectively segment SAR images with a low signal-to-clutter ratio (SCR). Experiments were performed to assess the performance of the new algorithm on various SAR images. The experimental results show that the proposed method is effective and feasible and possesses good characteristics for general application. PMID:27924935

  8. A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions.

    PubMed

    Huang, Shiqi; Huang, Wenzhun; Zhang, Ting

    2016-12-07

    The most distinctive characteristic of synthetic aperture radar (SAR) is that it can acquire data under all weather conditions and at all times. However, its coherent imaging mechanism introduces a great deal of speckle noise into SAR images, which makes the segmentation of target and shadow regions in SAR images very difficult. This paper proposes a new SAR image segmentation method based on wavelet decomposition and a constant false alarm rate (WD-CFAR). The WD-CFAR algorithm not only is insensitive to the speckle noise in SAR images but also can segment target and shadow regions simultaneously, and it is also able to effectively segment SAR images with a low signal-to-clutter ratio (SCR). Experiments were performed to assess the performance of the new algorithm on various SAR images. The experimental results show that the proposed method is effective and feasible and possesses good characteristics for general application.

  9. On the Automated Segmentation of Epicardial and Mediastinal Cardiac Adipose Tissues Using Classification Algorithms.

    PubMed

    Rodrigues, Érick Oliveira; Cordeiro de Morais, Felipe Fernandes; Conci, Aura

    2015-01-01

    The quantification of fat depots on the surroundings of the heart is an accurate procedure for evaluating health risk factors correlated with several diseases. However, this type of evaluation is not widely employed in clinical practice due to the required human workload. This work proposes a novel technique for the automatic segmentation of cardiac fat pads. The technique is based on applying classification algorithms to the segmentation of cardiac CT images. Furthermore, we extensively evaluate the performance of several algorithms on this task and discuss which provided better predictive models. Experimental results have shown that the mean accuracy for the classification of epicardial and mediastinal fats was 98.4% with a mean true positive rate of 96.2%. On average, the Dice similarity index, regarding the segmented patients and the ground truth, was equal to 96.8%. Therefore, our technique has achieved the most accurate results for the automatic segmentation of cardiac fats, to date.
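    The classification-for-segmentation idea can be sketched with a nearest-centroid pixel classifier (the HU-intensity and distance features, class names, and all values are hypothetical; the paper evaluates several real classifiers):

    ```python
    def train_centroids(samples):
        """Per-class mean feature vectors from labelled training pixels."""
        sums, counts = {}, {}
        for features, label in samples:
            acc = sums.setdefault(label, [0.0] * len(features))
            for i, f in enumerate(features):
                acc[i] += f
            counts[label] = counts.get(label, 0) + 1
        return {lab: [s / counts[lab] for s in acc]
                for lab, acc in sums.items()}

    def classify(centroids, features):
        """Assign the pixel to the class with the nearest centroid."""
        return min(centroids, key=lambda lab: sum(
            (f - c) ** 2 for f, c in zip(features, centroids[lab])))

    # (HU intensity, distance-to-heart-surface) pairs -- hypothetical features.
    train = [((-80, 1.0), "epicardial"), ((-95, 1.5), "epicardial"),
             ((-85, 6.0), "mediastinal"), ((-100, 7.0), "mediastinal"),
             ((40, 3.0), "other"), ((55, 4.0), "other")]
    model = train_centroids(train)
    print(classify(model, (-90, 1.2)), classify(model, (-90, 6.5)))
    ```

    Both test pixels have fat-like intensity; the spatial feature is what separates epicardial from mediastinal fat, mirroring why per-pixel classification (rather than pure thresholding) is needed for this task.
    
    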

  10. Liver Segmentation Based on Snakes Model and Improved GrowCut Algorithm in Abdominal CT Image

    PubMed Central

    He, Baochun; Ma, Zhiyuan; Zong, Mao; Zhou, Xiangrong; Fujita, Hiroshi

    2013-01-01

    A novel method based on the Snakes model and the GrowCut algorithm is proposed to segment the liver region in abdominal CT images. First, following the traditional GrowCut method, a pretreatment process using the K-means algorithm is conducted to reduce the running time. Then, the segmentation result of our improved GrowCut approach is used as an initial contour for subsequent precise segmentation based on the Snakes model. Finally, several experiments are carried out to demonstrate the performance of our proposed approach, and comparisons are made with the traditional GrowCut algorithm. Experimental results show that the improved approach not only has better robustness and precision but also is more efficient than the traditional GrowCut method. PMID:24066017
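    A minimal GrowCut automaton (labels and strengths evolve until neighbouring seeds stop conquering cells; the toy image and seeds are invented, and the paper adds K-means pretreatment and a Snakes refinement on top):

    ```python
    def growcut(image, seeds, iters=20):
        """Minimal GrowCut: each cell holds (label, strength); a neighbour q
        conquers p when strength_q * g(|C_p - C_q|) exceeds strength_p, with
        g falling from 1 to 0 as the intensity difference grows."""
        rows, cols = len(image), len(image[0])
        max_diff = max(max(r) for r in image) - min(min(r) for r in image)
        label = [[0] * cols for _ in range(rows)]
        strength = [[0.0] * cols for _ in range(rows)]
        for (r, c), lab in seeds.items():
            label[r][c], strength[r][c] = lab, 1.0
        for _ in range(iters):
            for r in range(rows):
                for c in range(cols):
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if not (0 <= nr < rows and 0 <= nc < cols):
                            continue
                        g = 1.0 - abs(image[r][c] - image[nr][nc]) / max_diff
                        attack = strength[nr][nc] * g
                        if attack > strength[r][c]:
                            label[r][c] = label[nr][nc]
                            strength[r][c] = attack
        return label

    # Bright "liver" block (values ~100) beside dark background (~10).
    img = [[10, 10, 100, 100],
           [10, 10, 100, 100],
           [10, 10, 100, 100]]
    seg = growcut(img, seeds={(0, 0): 1, (0, 3): 2})
    print(seg)
    ```

    Each seed floods its own intensity region because the attack strength drops to zero across the sharp intensity boundary, which is where the two labels meet.
    
    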

  11. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    PubMed

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, accuracy, as well as the F1 measure were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals were observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for
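    The four-state cyclic structure (S1 → systole → S2 → diastole) that the HSMM decodes can be illustrated with a plain log-domain Viterbi over a binarised energy envelope (the transition, emission, and prior numbers are invented; the published method additionally models state durations and uses logistic-regression emissions):

    ```python
    import math

    STATES = ["S1", "systole", "S2", "diastole"]
    PRIOR = {"S1": 0.7, "systole": 0.1, "S2": 0.1, "diastole": 0.1}
    # Cyclic left-to-right structure: each state stays or advances to the next.
    TRANS = {s: {s: 0.7, STATES[(i + 1) % 4]: 0.3}
             for i, s in enumerate(STATES)}
    # Emissions over a binarised energy envelope: S1/S2 are high-energy sounds.
    EMIT = {"S1": {1: 0.9, 0: 0.1}, "S2": {1: 0.9, 0: 0.1},
            "systole": {1: 0.1, 0: 0.9}, "diastole": {1: 0.1, 0: 0.9}}

    def viterbi(obs):
        """Most likely state path, computed in the log domain."""
        v = {s: (math.log(PRIOR[s]) + math.log(EMIT[s][obs[0]]), [s])
             for s in STATES}
        for o in obs[1:]:
            nv = {}
            for s in STATES:
                p, path = max(
                    (pp + math.log(TRANS[prev].get(s, 1e-12))
                     + math.log(EMIT[s][o]), pth)
                    for prev, (pp, pth) in v.items())
                nv[s] = (p, path + [s])
            v = nv
        return max(v.values())[1]

    envelope = [1, 1, 0, 0, 0, 1, 0, 0, 0, 0]  # S1 burst, quiet, S2, quiet
    print(viterbi(envelope))
    ```

    The cyclic transition matrix forces the decoder through the physiological S1 → systole → S2 → diastole order, which is what makes HMM-style segmentation robust to isolated noisy frames.
    
    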

  12. Nonlinear physical segmentation algorithm for determining the layer boundary from lidar signal.

    PubMed

    Mao, Feiyue; Li, Jun; Li, Chen; Gong, Wei; Min, Qilong; Wang, Wei

    2015-11-30

    Layer boundary (base and top) detection is a basic problem in lidar data processing, the results of which are used as inputs of optical properties retrieval. However, traditional algorithms not only require manual intervention but also rely heavily on the signal-to-noise ratio. Therefore, we propose a robust and automatic algorithm for layer detection based on a novel algorithm for lidar signal segmentation and representation. Our algorithm is based on the lidar equation and avoids most of the limitations of the traditional algorithms. Testing of the simulated and real signals shows that the algorithm is able to position the base and top accurately even with a low signal to noise ratio. Furthermore, the results of the classification are accurate and satisfactory. The experimental results confirm that our algorithm can be used for automatic detection, retrieval, and analysis of lidar data sets.
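    A much-simplified gradient sketch of base/top detection on a synthetic profile (the paper's algorithm is physical, built on the lidar equation; this only illustrates what "base" and "top" mean):

    ```python
    def layer_bounds(profile, rise=0.15):
        """Simplified layer detection: the base is where the range-corrected
        signal first rises sharply above the local background, the top where
        it falls back to the base level."""
        base = top = None
        for i in range(1, len(profile)):
            d = profile[i] - profile[i - 1]
            if base is None and d > rise:
                base = i - 1
            elif base is not None and profile[i] <= profile[base]:
                top = i
                break
        return base, top

    # Synthetic attenuated-backscatter profile with a cloud layer at bins 4-8.
    signal = [0.30, 0.29, 0.28, 0.27, 0.80, 1.20, 1.00, 0.60,
              0.27, 0.26, 0.25]
    print(layer_bounds(signal))  # (3, 8)
    ```

    This gradient rule is exactly the kind of method that breaks at low signal-to-noise ratio, which motivates the paper's lidar-equation-based segmentation instead.
    
    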

  13. A graph-based segmentation algorithm for tree crown extraction using airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Strîmbu, Victor F.; Strîmbu, Bogdan M.

    2015-06-01

    This work proposes a segmentation method that isolates individual tree crowns using airborne LiDAR data. The proposed approach captures the topological structure of the forest in hierarchical data structures, quantifies topological relationships of tree crown components in a weighted graph, and finally partitions the graph to separate individual tree crowns. This novel bottom-up segmentation strategy is based on several quantifiable cohesion criteria that act as a measure of belief on whether two crown components belong to the same tree. An added flexibility is provided by a set of weights that balance the contribution of each criterion, thus effectively allowing the algorithm to adjust to different forest structures. The LiDAR data used for testing was acquired in Louisiana, inside the Clear Creek Wildlife management area with a RIEGL LMS-Q680i airborne laser scanner. Three 1 ha forest areas of different conditions and increasing complexity were segmented and assessed in terms of an accuracy index (AI) accounting for both omission and commission. The three areas were segmented under optimum parameterization with an AI of 98.98%, 92.25% and 74.75% respectively, revealing the excellent potential of the algorithm. When segmentation parameters are optimized locally using plot references the AI drops to 98.23%, 89.24%, and 68.04% on average with plot sizes of 1000 m2 and 97.68%, 87.78% and 61.1% on average with plot sizes of 500 m2. More than introducing a segmentation algorithm, this paper proposes a powerful framework featuring flexibility to support a series of segmentation methods including some of those recurring in the tree segmentation literature. The segmentation method may extend its applications to any data of topological nature or data that has a topological equivalent.
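    The partition step can be sketched as thresholding the cohesion-weighted graph and taking connected components (the weights below are invented; the paper balances several cohesion criteria rather than a single threshold):

    ```python
    def crown_components(edges, n, cohesion=0.5):
        """Partition a weighted graph of crown components: keep edges whose
        cohesion weight passes the threshold, then return the connected
        components (each surviving component = one tree crown)."""
        adj = {i: [] for i in range(n)}
        for u, v, w in edges:
            if w >= cohesion:
                adj[u].append(v)
                adj[v].append(u)
        seen, crowns = set(), []
        for start in range(n):
            if start in seen:
                continue
            stack, comp = [start], []
            seen.add(start)
            while stack:
                u = stack.pop()
                comp.append(u)
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        stack.append(v)
            crowns.append(sorted(comp))
        return crowns

    # Six LiDAR blobs; weights combine proximity/overlap cues (hypothetical).
    edges = [(0, 1, 0.9), (1, 2, 0.8), (2, 3, 0.2),   # weak link: crown gap
             (3, 4, 0.85), (4, 5, 0.7)]
    print(crown_components(edges, 6))  # [[0, 1, 2], [3, 4, 5]]
    ```
    
    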

  14. Fuzzy Control Hardware for Segmented Mirror Phasing Algorithm

    NASA Technical Reports Server (NTRS)

    Roth, Elizabeth

    1999-01-01

    This paper presents a possible implementation of a control model developed to phase a system of segmented mirrors, with a PAMELA configuration, using analog fuzzy hardware. Presently, the model is designed for piston control only, but with the foresight that the parameters of tip and tilt will be integrated eventually. The proposed controller uses analog circuits to exhibit a voltage-mode singleton fuzzifier, a mixed-mode inference engine, and a current-mode defuzzifier. The inference engine exhibits multiplication circuits that perform the algebraic product composition through the use of operational transconductance amplifiers rather than the typical min-max circuits. Additionally, the knowledge base, containing exemplar data gained a priori through simulation, interacts via a digital interface.

  15. Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard.

    PubMed

    Jha, Abhinav K; Kupinski, Matthew A; Rodríguez, Jeffrey J; Stephen, Renu M; Stopeck, Alison T

    2012-07-07

    In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both the ensemble mean square error and precision. We also propose consistency checks for this evaluation technique.

  16. Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard

    PubMed Central

    Jha, Abhinav K.; Kupinski, Matthew A.; Rodríguez, Jeffrey J.; Stephen, Renu M.; Stopeck, Alison T.

    2012-01-01

    In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both accuracy and precision. We also propose consistency checks for this evaluation technique. PMID:22713231

  17. An improved vein image segmentation algorithm based on SLIC and Niblack threshold method

    NASA Astrophysics Data System (ADS)

    Zhou, Muqing; Wu, Zhaoguo; Chen, Difan; Zhou, Ya

    2013-12-01

    Subcutaneous vein images are often obtained by exploiting the absorbency difference of near-infrared (NIR) light between a vein and its surrounding tissue under NIR illumination. Vein images of high quality are critical to biometric identification, which requires segmenting the vein skeleton from the original images accurately. To address this issue, we propose a vein image segmentation method based on the simple linear iterative clustering (SLIC) method and the Niblack threshold method. The SLIC method is used to pre-segment the original images into superpixels, and all the information in the superpixels is transferred into a matrix (Block Matrix). Subsequently, the Niblack thresholding method is adopted to binarize the Block Matrix. Finally, we obtain segmented vein images from the binarized Block Matrix. In several experiments, most of the vein skeleton is revealed, compared with the traditional Niblack segmentation algorithm.
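    Niblack's rule is local: T(r, c) = mean + k·std over a window around each pixel. A direct sketch on a toy NIR patch (the window size and k are typical choices, not the paper's):

    ```python
    def niblack(image, window=3, k=-0.2):
        """Niblack binarisation: threshold each pixel against the local
        window's mean + k * std; dark vein pixels fall below it (-> 0)."""
        rows, cols, half = len(image), len(image[0]), window // 2
        out = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                vals = [image[rr][cc]
                        for rr in range(max(0, r - half),
                                        min(rows, r + half + 1))
                        for cc in range(max(0, c - half),
                                        min(cols, c + half + 1))]
                mean = sum(vals) / len(vals)
                std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
                out[r][c] = 1 if image[r][c] > mean + k * std else 0
        return out

    # NIR patch: dark vein (low values) running through brighter tissue.
    patch = [[200, 200, 200, 200],
             [120, 110, 115, 118],
             [200, 200, 200, 200]]
    binary = niblack(patch)
    print(binary)
    ```

    Because the threshold adapts to each window, the vein stays separable even when illumination varies across the image — the property that makes Niblack a good partner for SLIC superpixels.
    
    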

  18. An Unsupervised Algorithm for Segmenting Categorical Timeseries into Episodes

    DTIC Science & Technology

    2002-01-01

    (Garbled extraction fragment.) The test corpora included a Chinese text encoded in the standard GB scheme, Orwell's 1984, and Franz Kafka's The Castle in the original German as the final text. For comparison the authors selected portions of each corpus (including 10% of the Kafka corpus), so it is not surprising that the algorithm performs worst on the Chinese corpus and best on the Kafka corpus. [Table 2, the results of running Voting-Experts on Franz Kafka's The Castle, Orwell's 1984, and the Chinese corpus, is garbled in this record.]

  19. A Pulse Coupled Neural Network Segmentation Algorithm for Reflectance Confocal Images of Epithelial Tissue

    PubMed Central

    Malik, Bilal H.; Jabbour, Joey M.; Maitland, Kristen C.

    2015-01-01

    Automatic segmentation of nuclei in reflectance confocal microscopy images is critical for visualization and rapid quantification of nuclear-to-cytoplasmic ratio, a useful indicator of epithelial precancer. Reflectance confocal microscopy can provide three-dimensional imaging of epithelial tissue in vivo with sub-cellular resolution. Changes in nuclear density or nuclear-to-cytoplasmic ratio as a function of depth obtained from confocal images can be used to determine the presence or stage of epithelial cancers. However, low nuclear to background contrast, low resolution at greater imaging depths, and significant variation in reflectance signal of nuclei complicate segmentation required for quantification of nuclear-to-cytoplasmic ratio. Here, we present an automated segmentation method to segment nuclei in reflectance confocal images using a pulse coupled neural network algorithm, specifically a spiking cortical model, and an artificial neural network classifier. The segmentation algorithm was applied to an image model of nuclei with varying nuclear to background contrast. Greater than 90% of simulated nuclei were detected for contrast of 2.0 or greater. Confocal images of porcine and human oral mucosa were used to evaluate application to epithelial tissue. Segmentation accuracy was assessed using manual segmentation of nuclei as the gold standard. PMID:25816131

  20. Cell segmentation in histopathological images with deep learning algorithms by utilizing spatial relationships.

    PubMed

    Hatipoglu, Nuh; Bilgin, Gokhan

    2017-02-28

    In many computerized methods for cell detection, segmentation, and classification in digital histopathology that have recently emerged, the task of cell segmentation remains a chief problem for image processing in designing computer-aided diagnosis (CAD) systems. In research and diagnostic studies on cancer, pathologists can use CAD systems as second readers to analyze high-resolution histopathological images. Since cell detection and segmentation are critical for cancer grade assessments, cellular and extracellular structures should primarily be extracted from histopathological images. In response, we sought to identify a useful cell segmentation approach with histopathological images that uses not only prominent deep learning algorithms (i.e., convolutional neural networks, stacked autoencoders, and deep belief networks), but also spatial relationships, information of which is critical for achieving better cell segmentation results. To that end, we collected cellular and extracellular samples from histopathological images by windowing in small patches with various sizes. In experiments, the segmentation accuracies of the methods used improved as the window sizes increased due to the addition of local spatial and contextual information. Once we compared the effects of training sample size and influence of window size, results revealed that the deep learning algorithms, especially convolutional neural networks and partly stacked autoencoders, performed better than conventional methods in cell segmentation.
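    The patch-windowing step is simple to sketch; the experimental point is that larger windows carry more local spatial and contextual information for the classifier (the sizes here are arbitrary):

    ```python
    def extract_patches(image, size, stride):
        """Slide a size x size window over the image and collect patches;
        each patch becomes one training sample for the deep network."""
        rows, cols = len(image), len(image[0])
        patches = []
        for r in range(0, rows - size + 1, stride):
            for c in range(0, cols - size + 1, stride):
                patches.append([row[c:c + size] for row in image[r:r + size]])
        return patches

    img = [[r * 8 + c for c in range(8)] for r in range(8)]
    small = extract_patches(img, size=3, stride=1)   # 6 x 6 = 36 patches
    large = extract_patches(img, size=5, stride=1)   # 4 x 4 = 16 patches
    print(len(small), len(large))  # 36 16
    ```
    
    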

  1. A pulse coupled neural network segmentation algorithm for reflectance confocal images of epithelial tissue.

    PubMed

    Harris, Meagan A; Van, Andrew N; Malik, Bilal H; Jabbour, Joey M; Maitland, Kristen C

    2015-01-01

    Automatic segmentation of nuclei in reflectance confocal microscopy images is critical for visualization and rapid quantification of nuclear-to-cytoplasmic ratio, a useful indicator of epithelial precancer. Reflectance confocal microscopy can provide three-dimensional imaging of epithelial tissue in vivo with sub-cellular resolution. Changes in nuclear density or nuclear-to-cytoplasmic ratio as a function of depth obtained from confocal images can be used to determine the presence or stage of epithelial cancers. However, low nuclear to background contrast, low resolution at greater imaging depths, and significant variation in reflectance signal of nuclei complicate segmentation required for quantification of nuclear-to-cytoplasmic ratio. Here, we present an automated segmentation method to segment nuclei in reflectance confocal images using a pulse coupled neural network algorithm, specifically a spiking cortical model, and an artificial neural network classifier. The segmentation algorithm was applied to an image model of nuclei with varying nuclear to background contrast. Greater than 90% of simulated nuclei were detected for contrast of 2.0 or greater. Confocal images of porcine and human oral mucosa were used to evaluate application to epithelial tissue. Segmentation accuracy was assessed using manual segmentation of nuclei as the gold standard.

  2. Adaptive segment protection algorithm of multicast on WDM networks against single link failure

    NASA Astrophysics Data System (ADS)

    Lu, Cai; Nie, Xiaoyan; Wang, Sheng; Li, Lemin

    2005-11-01

    This paper investigates the problem of protecting multicast sessions in mesh WDM (wavelength-division multiplexing) networks against single link failures, e.g., a fiber cut in optical networks. First, we study two characteristics of multicast sessions in mesh WDM networks with a sparse light-splitter configuration. Traditionally, a multicast tree does not contain any cycles; the first characteristic is that a multicast tree performs better if it contains some cycles. Second, a multicast tree has several branches; if we add a path between leaf nodes on different branches, the segment between them on the multicast tree is protected. Based on these two characteristics, the survivable multicast session routing problem is formulated as an Integer Linear Program (ILP). A heuristic algorithm, named adaptive shared segment protection (ASSP), is then proposed for multicast sessions. The ASSP algorithm does not identify segments on the multicast tree in advance; the segments are determined during the execution of the algorithm according to the multicast tree and the available network resources. Comparisons are made between ASSP and two previously reported schemes, link-disjoint trees (LDT) and shared disjoint paths (SDP), in terms of blocking probability and resource cost on the USNET topology. Simulations show that the ASSP algorithm performs better than the existing schemes.
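    The second characteristic rests on recovering the tree segment between two leaves: the unique path between them is exactly what a link-disjoint backup path would protect. A minimal sketch, assuming an adjacency-dict tree representation (all names illustrative):

```python
from collections import deque

def tree_segment(tree, leaf_a, leaf_b):
    """Return the unique path between two nodes of a tree given as an
    adjacency dict. Adding a link-disjoint backup path between leaf_a and
    leaf_b protects every link on this segment against a single failure."""
    parent = {leaf_a: None}
    queue = deque([leaf_a])
    while queue:
        node = queue.popleft()
        if node == leaf_b:
            break
        for nbr in tree[node]:
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    # walk back from leaf_b to leaf_a along BFS parents
    path, node = [], leaf_b
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]
```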

  3. A modified fuzzy C-means algorithm for bias field estimation and segmentation of MRI data.

    PubMed

    Ahmed, Mohamed N; Yamany, Sameh M; Mohamed, Nevin; Farag, Aly A; Moriarty, Thomas

    2002-03-01

    In this paper, we present a novel algorithm for fuzzy segmentation of magnetic resonance imaging (MRI) data and estimation of intensity inhomogeneities using fuzzy logic. MRI intensity inhomogeneities can be attributed to imperfections in the radio-frequency coils or to problems associated with the acquisition sequences. The result is a slowly varying shading artifact over the image that can produce errors with conventional intensity-based classification. Our algorithm is formulated by modifying the objective function of the standard fuzzy c-means (FCM) algorithm to compensate for such inhomogeneities and to allow the labeling of a pixel (voxel) to be influenced by the labels in its immediate neighborhood. The neighborhood effect acts as a regularizer and biases the solution toward piecewise-homogeneous labelings. Such a regularization is useful in segmenting scans corrupted by salt and pepper noise. Experimental results on both synthetic images and MR data are given to demonstrate the effectiveness and efficiency of the proposed algorithm.
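    For orientation, the standard FCM membership update that the authors' modified objective builds on can be sketched for scalar data; this is plain, unmodified FCM without the neighborhood or bias-field terms, and all names are illustrative:

```python
def fcm_memberships(data, centers, m=2.0):
    """Standard FCM membership update: u[i][k] is the membership of sample k
    in cluster i, from inverse relative distances with fuzzifier m."""
    u = []
    for ci in centers:
        row = []
        for x in data:
            d_i = abs(x - ci) or 1e-12   # guard against zero distance
            total = 0.0
            for cj in centers:
                d_j = abs(x - cj) or 1e-12
                total += (d_i / d_j) ** (2.0 / (m - 1.0))
            row.append(1.0 / total)
        u.append(row)
    return u
```

The paper's modification adds a neighborhood term to the objective so that each pixel's membership is also pulled toward the labels of its neighbors.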

  4. Novel algorithm by low complexity filter on retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Rostampour, Samad

    2011-10-01

    This article presents a new method for detecting blood vessels in digital retinal images. Retinal vessel segmentation is important for detecting side effects of diabetes, because diabetes can form new capillaries which are very brittle. The research was carried out in two phases: preprocessing and processing. The preprocessing phase applies a new filter that renders vessels in dark color on a white background, producing good contrast between vessels and background; the filter's complexity is very low, and extraneous image content is eliminated. The processing phase uses a Bayesian method, a supervised classification approach that uses the mean and variance of pixel intensities to calculate class probabilities. Finally, the pixels of the image are divided into two classes: vessels and background. The images used are from the DRIVE database. The method achieves an average efficiency of 95 percent. It was also applied to a sample with retinopathy from outside the DRIVE database, and a perfect result was obtained.
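    The Bayesian classification step described, using per-class means and variances of pixel intensities, can be sketched with a simple Gaussian likelihood model. The function names and the two-class setup are assumptions for illustration:

```python
import math

def train(classes):
    """Estimate per-class mean and variance of pixel intensities from
    labelled training pixels: {label: [intensities]}."""
    stats = {}
    for label, pixels in classes.items():
        mean = sum(pixels) / len(pixels)
        var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
        stats[label] = (mean, max(var, 1e-6))  # floor variance for stability
    return stats

def classify(pixel, stats):
    """Assign the class with the highest Gaussian log-likelihood."""
    def loglik(mean, var):
        return -0.5 * math.log(2 * math.pi * var) - (pixel - mean) ** 2 / (2 * var)
    return max(stats, key=lambda c: loglik(*stats[c]))
```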

  5. Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge.

    PubMed

    Litjens, Geert; Toth, Robert; van de Ven, Wendy; Hoeks, Caroline; Kerkstra, Sjoerd; van Ginneken, Bram; Vincent, Graham; Guillard, Gwenael; Birbeck, Neil; Zhang, Jindang; Strand, Robin; Malmberg, Filip; Ou, Yangming; Davatzikos, Christos; Kirschner, Matthias; Jung, Florian; Yuan, Jing; Qiu, Wu; Gao, Qinquan; Edwards, Philip Eddie; Maan, Bianca; van der Heijden, Ferdinand; Ghose, Soumya; Mitra, Jhimli; Dowling, Jason; Barratt, Dean; Huisman, Henkjan; Madabhushi, Anant

    2014-02-01

    Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary- and volume-based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with overall scores of 85.72 and 84.29. Both algorithms were significantly better than all other algorithms in the challenge (p<0.05) and had efficient implementations with run times of 8 min and 3 s per case, respectively. Overall, active appearance model based approaches seemed to outperform other approaches like multi

  6. Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge

    PubMed Central

    Litjens, Geert; Toth, Robert; van de Ven, Wendy; Hoeks, Caroline; Kerkstra, Sjoerd; van Ginneken, Bram; Vincent, Graham; Guillard, Gwenael; Birbeck, Neil; Zhang, Jindang; Strand, Robin; Malmberg, Filip; Ou, Yangming; Davatzikos, Christos; Kirschner, Matthias; Jung, Florian; Yuan, Jing; Qiu, Wu; Gao, Qinquan; Edwards, Philip “Eddie”; Maan, Bianca; van der Heijden, Ferdinand; Ghose, Soumya; Mitra, Jhimli; Dowling, Jason; Barratt, Dean; Huisman, Henkjan; Madabhushi, Anant

    2014-01-01

    Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary- and volume-based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with overall scores of 85.72 and 84.29. Both algorithms were significantly better than all other algorithms in the challenge (p < 0.05) and had efficient implementations with run times of 8 minutes and 3 seconds per case, respectively. Overall, active appearance model based approaches seemed to outperform other approaches like
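    One common volume-based metric of the kind combined into the challenge score is the Dice similarity coefficient. A minimal sketch over flat binary masks (this is a generic illustration, not the challenge's actual scoring code):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as flat
    sequences of 0/1: 2|A ∩ B| / (|A| + |B|). 1.0 means perfect overlap."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0
```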

  7. Comparison of two algorithms in the automatic segmentation of blood vessels in fundus images

    NASA Astrophysics Data System (ADS)

    LeAnder, Robert; Chowdary, Myneni Sushma; Mokkapati, Swapnasri; Umbaugh, Scott E.

    2008-03-01

    Effective timing and treatment are critical to saving the sight of patients with diabetes. Lack of screening, as well as a shortage of ophthalmologists, contributes to approximately 8,000 cases per year of people who lose their sight to diabetic retinopathy, the leading cause of new cases of blindness [1] [2]. Timely treatment for diabetic retinopathy prevents severe vision loss in over 50% of eyes tested [1]. Fundus images can provide information for detecting and monitoring eye-related diseases, like diabetic retinopathy, which, if detected early, may help prevent vision loss. Damaged blood vessels can indicate the presence of diabetic retinopathy [9]. So, early detection of damaged vessels in retinal images can provide valuable information about the presence of disease, thereby helping to prevent vision loss. Purpose: The purpose of this study was to compare the effectiveness of two blood vessel segmentation algorithms. Methods: Fifteen fundus images from the STARE database were used to develop two algorithms using the CVIPtools software environment. Another set of fifteen images was derived from the first fifteen and contained ophthalmologists' hand-drawn tracings over the retinal vessels. The ophthalmologists' tracings were used as the "gold standard" for perfect segmentation and compared with the segmented images that were output by the two algorithms. Comparisons between the segmented and the hand-drawn images were made using Pratt's Figure of Merit (FOM), Signal-to-Noise Ratio (SNR) and Root Mean Square (RMS) Error. Results: Algorithm 2 has an FOM that is 10% higher than Algorithm 1. Algorithm 2 has a 6%-higher SNR than Algorithm 1. Algorithm 2 has only 1.3% more RMS error than Algorithm 1. Conclusions: Algorithm 1 extracted most of the blood vessels with some missing intersections and bifurcations. Algorithm 2 extracted all the major blood vessels, but eradicated some vessels as well. Algorithm 2 outperformed Algorithm 1 in terms of visual clarity, FOM
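    Pratt's Figure of Merit scores each detected edge pixel by its distance to the nearest ideal (gold-standard) edge pixel. A brute-force sketch; the scaling constant alpha = 1/9 is the conventional choice, assumed here rather than taken from the study:

```python
def pratt_fom(ideal, detected, alpha=1.0 / 9.0):
    """Pratt's Figure of Merit between ideal and detected edge pixel sets
    (lists of (row, col) tuples). Returns a value in (0, 1]; 1.0 means the
    detected edges coincide with the ideal ones."""
    if not ideal or not detected:
        return 0.0
    total = 0.0
    for r, c in detected:
        # squared distance to the nearest ideal edge pixel
        d2 = min((r - ri) ** 2 + (c - ci) ** 2 for ri, ci in ideal)
        total += 1.0 / (1.0 + alpha * d2)
    return total / max(len(ideal), len(detected))
```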

  8. Generalized rough fuzzy c-means algorithm for brain MR image segmentation.

    PubMed

    Ji, Zexuan; Sun, Quansen; Xia, Yong; Chen, Qiang; Xia, Deshen; Feng, Dagan

    2012-11-01

    Fuzzy sets and rough sets have been widely used in many clustering algorithms for medical image segmentation, and have recently been combined together to better deal with the uncertainty implied in observed image data. Despite their widespread application, traditional hybrid approaches are sensitive to the empirical weighting parameters and random initialization, and hence may produce less accurate results. In this paper, a novel hybrid clustering approach, namely the generalized rough fuzzy c-means (GRFCM) algorithm, is proposed for brain MR image segmentation. In this algorithm, each cluster is characterized by three automatically determined rough-fuzzy regions, and accordingly the membership of each pixel is estimated with respect to the region in which it is located. The importance of each region is balanced by a weighting parameter, and the bias field in MR images is modeled by a linear combination of orthogonal polynomials. The weighting parameter estimation and bias field correction have been incorporated into the iterative clustering process. Our algorithm has been compared to the existing rough c-means and hybrid clustering algorithms on both synthetic and clinical brain MR images. Experimental results demonstrate that the proposed algorithm is more robust to initialization, noise, and bias field, and can produce more accurate and reliable segmentations.

  9. Shack-Hartmann mask/pupil registration algorithm for wavefront sensing in segmented mirror telescopes.

    PubMed

    Piatrou, Piotr; Chanan, Gary

    2013-11-10

    Shack-Hartmann wavefront sensing in general requires careful registration of the reimaged telescope primary mirror to the Shack-Hartmann mask or lenslet array. The registration requirements are particularly demanding for applications in which segmented mirrors are phased using a physical optics generalization of the Shack-Hartmann test. In such cases the registration tolerances are less than 0.1% of the diameter of the primary mirror. We present a pupil registration algorithm suitable for such high accuracy applications that is based on the one used successfully for phasing the segments of the Keck telescopes. The pupil is aligned in four degrees of freedom (translations, rotation, and magnification) by balancing the intensities of subimages formed by small subapertures that straddle the periphery of the mirror. We describe the algorithm in general terms and then in the specific context of two very different geometries: the 492 segment Thirty Meter Telescope, and the seven "segment" Giant Magellan Telescope. Through detailed simulations we explore the accuracy of the algorithm and its sensitivity to such effects as cross talk, noise/counting statistics, atmospheric scintillation, and segment reflectivity variations.

  10. An evolutionary algorithm for the segmentation of muscles and bones of the lower limb.

    NASA Astrophysics Data System (ADS)

    López, Marco A.; Braidot, A.; Sattler, Aníbal; Schira, Claudia; Uriburu, E.

    2016-04-01

    In the field of medical image segmentation, muscle segmentation is a problem that has not been fully resolved yet. This is because the basic assumption of image segmentation, which asserts that a visual distinction should exist between the different structures to be identified, is violated: as the tissue composition of two different muscles is the same, it becomes extremely difficult to distinguish one from another when they are close together. We have developed an evolutionary algorithm which selects the set and the sequence of morphological operators that best segments muscles and bones from an MRI image. The results achieved show that the developed algorithm presents average sensitivity values close to 75% in the segmentation of the different processed muscles and bones. It also presents average specificity values close to 93% for the same structures. Furthermore, the algorithm can identify muscles that are closely located along the path from their origin point to their insertions, with very low error values (below 7%).

  11. Individual tooth region segmentation using modified watershed algorithm with morphological characteristic.

    PubMed

    Na, Sung Dae; Lee, Gihyoun; Lee, Jyung Hyun; Kim, Myoung Nam

    2014-01-01

    In this paper, a new method for individual tooth segmentation is proposed. The proposed method consists of image enhancement and extraction of the boundary and seeds for the watershed algorithm, using trisection areas derived from the morphological characteristics of teeth. The watershed algorithm is a conventional method for tooth segmentation; however, it has some problems. First, the molar detection ratio is reduced because oral structure produces low intensities in the molar region. Second, inaccurate segmentation occurs in the incisor region owing to specular reflection. To solve these problems, a trisection method based on morphological characteristics is proposed, in which three tooth areas are defined using the ratio of each tooth to the entire tooth row, and an enhancement step improves the intensity of the molar region. The watershed boundary and seeds are then extracted from the trisection areas, with different parameters applied to each area. Finally, individual tooth segmentation is performed using the extracted boundary and seeds. The proposed method was compared with conventional methods to confirm its efficiency; it demonstrated a higher detection ratio and less over-segmentation and overlap segmentation than the conventional methods.
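    The seeded-watershed flooding at the core of such methods can be sketched with a priority queue. This is a generic marker-based flooding, not the authors' trisection-specific variant, and all names are illustrative:

```python
import heapq

def watershed_from_seeds(image, seeds):
    """Flood a grayscale image from labelled seed pixels: pixels are grown
    in order of increasing intensity, so each pixel joins the basin whose
    lowest-intensity frontier reaches it first. `seeds` maps (row, col) to
    a nonzero integer label."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    heap = []
    for (r, c), lab in seeds.items():
        labels[r][c] = lab
        heapq.heappush(heap, (image[r][c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0:
                labels[nr][nc] = labels[r][c]   # inherit the basin label
                heapq.heappush(heap, (image[nr][nc], nr, nc))
    return labels
```

Two basins separated by a bright ridge flood independently until they meet at the ridge, which is exactly where the tooth boundaries are expected.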

  12. An algorithm for automating the registration of USDA segment ground data to LANDSAT MSS data

    NASA Technical Reports Server (NTRS)

    Graham, M. H. (Principal Investigator)

    1981-01-01

    The algorithm is referred to as the Automatic Segment Matching Algorithm (ASMA). The ASMA uses control points or the annotation record of a P-format LANDSAT computer compatible tape as the initial registration to relate latitude and longitude to LANDSAT rows and columns. It searches a given area of LANDSAT data with a 2x2 sliding window and computes gradient values for bands 5 and 7 to match the segment boundaries. The gradient values are held in memory during the shifting (or matching) process. The reconstructed segment array, containing ones (1's) for boundaries and zeros elsewhere, is compared by computer to the LANDSAT array and the best match computed. Initial testing of the ASMA indicates that it has good potential for replacing the manual technique.
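    The 2x2 sliding-window gradient used for boundary matching can be sketched as Roberts-style cross differences; the exact operator is an assumption, since the abstract does not specify it:

```python
def gradient_2x2(band):
    """Gradient magnitude over a 2x2 sliding window (Roberts-style cross
    differences), producing an array one smaller in each dimension than
    the input band."""
    rows, cols = len(band), len(band[0])
    grad = [[0.0] * (cols - 1) for _ in range(rows - 1)]
    for r in range(rows - 1):
        for c in range(cols - 1):
            gx = band[r][c] - band[r + 1][c + 1]   # one diagonal difference
            gy = band[r][c + 1] - band[r + 1][c]   # the other diagonal
            grad[r][c] = (gx * gx + gy * gy) ** 0.5
    return grad
```

Segment boundaries then correspond to ridges of high gradient magnitude, against which the reconstructed boundary array is shifted and scored.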

  13. Surgical wound segmentation based on adaptive threshold edge detection and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shih, Hsueh-Fu; Ho, Te-Wei; Hsu, Jui-Tse; Chang, Chun-Che; Lai, Feipei; Wu, Jin-Ming

    2017-02-01

    Postsurgical wound care has a great impact on patients' prognosis. It often takes a few days, even a few weeks, for the wound to stabilize, which incurs a great cost in health care and nursing resources. To assess the wound condition and support diagnosis, it is important to segment out the wound region for further analysis. However, wound images often contain complicated backgrounds and noise. In this study, we propose a wound segmentation algorithm based on the Canny edge detector and a genetic algorithm with an unsupervised evaluation function. The results were evaluated on 112 clinical images, of which 94.3% were correctly segmented, as judged by experienced medical doctors. This capability to extract complete wound regions makes it possible to conduct further image analysis such as intelligent recovery evaluation and automatic infection assessment.
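    The genetic-algorithm search over detector parameters can be illustrated with a toy single-gene GA. Everything here is a stand-in: the fitness function below is a placeholder, not the paper's unsupervised evaluation function, and the operator choices (truncation selection, blend crossover, Gaussian mutation) are assumptions:

```python
import random

def evolve_threshold(fitness, generations=30, pop_size=12, seed=42):
    """Toy genetic algorithm over a single real-valued gene in [0, 255]:
    truncation selection keeps the best half, children are parent midpoints
    plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(0, 255) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist: parents survive
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2 + rng.gauss(0, 5)  # crossover + mutation
            children.append(min(255.0, max(0.0, child)))
        pop = parents + children
    return max(pop, key=fitness)

# a stand-in fitness with its optimum at threshold 128
best = evolve_threshold(lambda t: -(t - 128.0) ** 2)
```

In the paper's setting the gene would encode Canny parameters and the fitness would be the unsupervised segmentation-quality measure.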

  14. [Study of color blood image segmentation based on two-stage-improved FCM algorithm].

    PubMed

    Wang, Bin; Chen, Huaiqing; Huang, Hua; Rao, Jie

    2006-04-01

    This paper introduces a new method for color blood cell image segmentation based on the FCM algorithm. By transforming the original blood microscopic image to an indexed image and operating on its colormap, a fuzzy approach that obviates direct clustering of the image pixel values, the quantity of data to be processed and analyzed is enormously compressed. In accordance with the inherent features of color blood cell images, the segmentation process is divided into two stages: (1) confirming the number of clusters and the initial cluster centers; (2) altering the distance measure via a distance weighting matrix in order to improve clustering accuracy. In this way, the problem of difficult convergence of the FCM algorithm is solved, the number of iterations to convergence is reduced, the execution time of the algorithm is decreased, and correct segmentation of the components of the color blood cell image is achieved.

  15. Open-source algorithm for automatic choroid segmentation of OCT volume reconstructions

    PubMed Central

    Mazzaferri, Javier; Beaton, Luke; Hounye, Gisèle; Sayah, Diane N.; Costantino, Santiago

    2017-01-01

    The use of optical coherence tomography (OCT) to study ocular diseases associated with choroidal physiology is sharply limited by the lack of available automated segmentation tools. Current research largely relies on hand-traced, single B-Scan segmentations because commercially available programs require high quality images, and the existing implementations are closed, scarce and not freely available. We developed and implemented a robust algorithm for segmenting and quantifying the choroidal layer from 3-dimensional OCT reconstructions. Here, we describe the algorithm, validate and benchmark the results, and provide an open-source implementation under the General Public License for any researcher to use (https://www.mathworks.com/matlabcentral/fileexchange/61275-choroidsegmentation). PMID:28181546

  16. A martian case study of segmenting images automatically for granulometry and sedimentology, Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Karunatillake, Suniti; McLennan, Scott M.; Herkenhoff, Kenneth E.; Husch, Jonathan M.; Hardgrove, Craig; Skok, J. R.

    2014-02-01

    In planetary exploration, delineating individual grains in images via segmentation is a key path to sedimentological comparisons with the extensive terrestrial literature. Samples that contain a substantial fine-grained component, common at Meridiani and Gusev on Mars, would involve prohibitive effort if attempted manually. Unavailability of physical samples also precludes standard terrestrial methods such as sieving. Furthermore, planetary scientists have been thwarted by the dearth of segmentation algorithms customized for planetary applications, including Mars, and often rely on sub-optimal solutions adapted from medical software. We address this with an original algorithm optimized to segment whole images from the Microscopic Imager of the Mars Exploration Rovers. While our code operates with minimal human guidance, its default parameters can be modified easily for different geologic settings and imagers on Earth and other planets, such as the Curiosity Rover's Mars Hand Lens Instrument. We assess the algorithm's robustness in a companion work.

  17. Analysis of the Command and Control Segment (CCS) attitude estimation algorithm

    NASA Technical Reports Server (NTRS)

    Stockwell, Catherine

    1993-01-01

    This paper categorizes the qualitative behavior of the Command and Control Segment (CCS) differential correction algorithm as applied to attitude estimation using simultaneous spin axis sun angle and Earth chord length measurements. The categories of interest are the domains of convergence, divergence, and their boundaries. Three series of plots are discussed that show the dependence of the estimation algorithm on the vehicle radius, the sun/Earth angle, and the spacecraft attitude. Qualitative dynamics common to all three series are tabulated and discussed. Out-of-limits conditions for the estimation algorithm are identified and discussed.

  18. On the importance of FIB-SEM specific segmentation algorithms for porous media

    SciTech Connect

    Salzer, Martin; Thiele, Simon; Zengerle, Roland; Schmidt, Volker

    2014-09-15

    A new algorithmic approach to segmentation of highly porous three dimensional image data gained by focused ion beam tomography is described which extends the key principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has been shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis on the corresponding microstructure, a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them, and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that respects the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. - Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly fewer artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.

  19. A joint shape evolution approach to medical image segmentation using expectation-maximization algorithm.

    PubMed

    Farzinfar, Mahshid; Teoh, Eam Khwang; Xue, Zhong

    2011-11-01

    This study proposes an expectation-maximization (EM)-based curve evolution algorithm for segmentation of magnetic resonance brain images. In the proposed algorithm, the evolution curve is constrained not only by a shape-based statistical model but also by a hidden variable model from image observation. The hidden variable model herein is defined by the local voxel labeling, which is unknown and estimated by the expected likelihood function derived from the image data and prior anatomical knowledge. In the M-step, the shapes of the structures are estimated jointly by encoding the hidden variable model and the statistical prior model obtained from the training stage. In the E-step, the expected observation likelihood and the prior distribution of the hidden variables are estimated. In experiments, the proposed automatic segmentation algorithm is applied to multiple gray nuclei structures, such as the caudate, putamen, and thalamus, in three-dimensional magnetic resonance imaging of volunteers and patients. In terms of robustness and accuracy, the proposed EM-joint shape-based algorithm outperformed the statistical shape model-based techniques in the same framework and a current state-of-the-art region competition level set method.

  20. A unifying graph-cut image segmentation framework: algorithms it encompasses and equivalences among them

    NASA Astrophysics Data System (ADS)

    Ciesielski, Krzysztof Chris; Udupa, Jayaram K.; Falcão, A. X.; Miranda, P. A. V.

    2012-02-01

    We present a general graph-cut segmentation framework GGC, in which the delineated objects returned by the algorithms optimize the energy functions associated with the lp norm, 1 <= p <= ∞. Two classes of well known algorithms belong to GGC: the standard graph cut GC (such as the min-cut/max-flow algorithm) and the relative fuzzy connectedness algorithms RFC (including iterative RFC, IRFC). The norm-based description of GGC provides a more elegant and mathematically better recognized framework for our earlier results from [18, 19]. Moreover, it allows precise theoretical comparison of GGC representable algorithms with the algorithms discussed in a recent paper [22] (min-cut/max-flow graph cut, random walker, shortest path/geodesic, Voronoi diagram, power watershed/shortest path forest), which optimize, via lp norms, the intermediate segmentation step, the labeling of scene voxels, but for which the final object need not optimize the used lp energy function. Actually, the comparison of the GGC representable algorithms with those encompassed in the framework described in [22] constitutes the main contribution of this work.
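    The min-cut/max-flow computation underlying the standard graph cut GC can be sketched with the Edmonds-Karp algorithm; by the max-flow/min-cut theorem, the returned flow value equals the weight of the minimum cut separating source from sink. The graph representation and names below are illustrative, not from the paper:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max-flow on a capacity graph given as nested dicts
    {u: {v: cap}}. The returned value equals the min-cut weight."""
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}   # residual caps
    for u, nbrs in capacity.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)          # reverse edges
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in res[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # bottleneck along the path, then update residual capacities
        bottleneck, v = float('inf'), sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, res[parent[v]][v])
            v = parent[v]
        v = sink
        while parent[v] is not None:
            u = parent[v]
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
            v = u
        flow += bottleneck
```

In binary segmentation, source and sink stand for object and background seeds, and edge capacities encode the energy terms being minimized.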

  1. GPU-based acceleration of an automatic white matter segmentation algorithm using CUDA.

    PubMed

    Labra, Nicole; Figueroa, Miguel; Guevara, Pamela; Duclap, Delphine; Hoeunou, Josselin; Poupon, Cyril; Mangin, Jean-Francois

    2013-01-01

    This paper presents a parallel implementation of an algorithm for automatic segmentation of white matter fibers from tractography data. We execute the algorithm in parallel using a high-end video card with a Graphics Processing Unit (GPU) as a computation accelerator, using the CUDA language. By exploiting the parallelism and the properties of the memory hierarchy available on the GPU, we obtain a speedup in execution time of 33.6 with respect to an optimized sequential version of the algorithm written in C, and of 240 with respect to the original Python/C++ implementation. The execution time is reduced from more than two hours to only 35 seconds for a subject dataset of 800,000 fibers, thus enabling applications that use interactive segmentation and visualization of small to medium-sized tractography datasets.

  2. Algorithm for the identification of malfunctioning sensors in the control systems of segmented mirror telescopes.

    PubMed

    Chanan, Gary; Nelson, Jerry

    2009-11-10

    The active control systems of segmented mirror telescopes are vulnerable to a malfunction of a few (or even one) of their segment edge sensors, the effects of which can propagate through the entire system and seriously compromise the overall telescope image quality. Since there are thousands of such sensors in the extremely large telescopes now under development, it is essential to develop fast and efficient algorithms that can identify bad sensors so that they can be removed from the control loop. Such algorithms are nontrivial; for example, a simple residual-to-the-fit test will often fail to identify a bad sensor. We propose an algorithm that can reliably identify a single bad sensor and we extend it to the more difficult case of multiple bad sensors. Somewhat surprisingly, the identification of a fixed number of bad sensors does not necessarily become more difficult as the telescope becomes larger and the number of sensors in the control system increases.

  3. Malleable Fuzzy Local Median C Means Algorithm for Effective Biomedical Image Segmentation

    NASA Astrophysics Data System (ADS)

    Rajendran, Arunkumar; Balakrishnan, Nagaraj; Varatharaj, Mithya

    2016-12-01

    The traditional way of clustering plays an effective role in segmentation and has been developed to be more effective; recent developments also allow contextual information to be extracted with ease. This paper presents a modified Fuzzy C-Means (FCM) algorithm that provides better segmentation in the contour grayscale regions of biomedical images where effective clustering is needed. The proposed algorithm, Malleable Fuzzy Local Median C-Means (M-FLMCM), overcomes the disadvantages of the traditional FCM method: its long convergence time, its inability to remove noise, and its inability to cluster contour regions of images. M-FLMCM shows promising results in experiments with real-world biomedical images, achieving 96% accuracy compared with the other algorithms.
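
    As background to the modification, the baseline the paper builds on, plain fuzzy c-means, can be sketched on 1-D intensity data (this is standard FCM, not M-FLMCM itself):

```python
import random

def fcm(data, c=2, m=2.0, iters=50):
    # deterministic spread initialisation: quantiles of the sorted data
    srt = sorted(data)
    centers = [srt[int(i * (len(srt) - 1) / (c - 1))] for i in range(c)]
    n = len(data)
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = [[1.0 / sum(((abs(x - centers[i]) + 1e-12) /
                         (abs(x - centers[j]) + 1e-12)) ** (2 / (m - 1))
                        for j in range(c))
              for i in range(c)]
             for x in data]
        # center update: v_i = sum_k u_ik^m x_k / sum_k u_ik^m
        centers = [sum(u[k][i] ** m * data[k] for k in range(n)) /
                   sum(u[k][i] ** m for k in range(n))
                   for i in range(c)]
    return sorted(centers)

random.seed(0)
data = ([random.gauss(20, 2) for _ in range(50)] +
        [random.gauss(80, 2) for _ in range(50)])
print([round(v, 1) for v in fcm(data)])   # centers near 20 and 80
```

M-FLMCM modifies this loop with local median information; the slow convergence and noise sensitivity it addresses are properties of exactly this baseline update.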

  4. Contour Detection and Completion for Inpainting and Segmentation Based on Topological Gradient and Fast Marching Algorithms

    PubMed Central

    Auroux, Didier; Cohen, Laurent D.; Masmoudi, Mohamed

    2011-01-01

    We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We then present two numerical applications of this hybrid algorithm: image inpainting and segmentation. PMID:22194734

  5. Automated segmentation algorithm for detection of changes in vaginal epithelial morphology using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Chitchian, Shahab; Vincent, Kathleen L.; Vargas, Gracie; Motamedi, Massoud

    2012-11-01

    We have explored the use of optical coherence tomography (OCT) as a noninvasive tool for assessing the toxicity of topical microbicides, products used to prevent HIV, by monitoring the integrity of the vaginal epithelium. A novel feature-based segmentation algorithm using a nearest-neighbor classifier was developed to monitor changes in the morphology of vaginal epithelium. The two-step automated algorithm yielded OCT images with a clearly defined epithelial layer, enabling differentiation of normal and damaged tissue. The algorithm was robust in that it was able to discriminate the epithelial layer from underlying stroma as well as residual microbicide product on the surface. This segmentation technique for OCT images has the potential to be readily adaptable to the clinical setting for noninvasively defining the boundaries of the epithelium, enabling quantifiable assessment of microbicide-induced damage in vaginal tissue.

  6. Segmentation of dermatoscopic images by frequency domain filtering and k-means clustering algorithms.

    PubMed

    Rajab, Maher I

    2011-11-01

    Since the introduction of epiluminescence microscopy (ELM), image analysis tools have been extended to the field of dermatology, in an attempt to algorithmically reproduce clinical evaluation. Accurate image segmentation of skin lesions is one of the key steps for useful, early and non-invasive diagnosis of cutaneous melanomas. This paper proposes two image segmentation algorithms based on frequency domain processing and k-means clustering/fuzzy k-means clustering. The two methods are capable of segmenting and extracting the true border that reveals the global structure irregularity (indentations and protrusions), which may suggest excessive cell growth or regression of a melanoma. As a pre-processing step, Fourier low-pass filtering is applied to reduce the surrounding noise in a skin lesion image. A quantitative comparison of the techniques is enabled by the use of synthetic skin lesion images that model lesions covered with hair to which Gaussian noise is added. The proposed techniques are also compared with an established optimal-based thresholding skin-segmentation method. It is demonstrated that for lesions with a range of different border irregularity properties, the k-means clustering and fuzzy k-means clustering segmentation methods provide the best performance over a range of signal to noise ratios. The proposed segmentation techniques are also demonstrated to have similar performance when tested on real skin lesions representing high-resolution ELM images. This study suggests that the segmentation results obtained using a combination of low-pass frequency filtering and k-means or fuzzy k-means clustering are superior to the result that would be obtained by using k-means or fuzzy k-means clustering segmentation methods alone. © 2011 John Wiley & Sons A/S.
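
    The clustering stage alone can be illustrated with a minimal 1-D k-means on grey levels; the Fourier low-pass pre-filtering and the fuzzy variant described in the abstract are omitted, and the pixel values are invented for illustration:

```python
def kmeans_1d(pixels, k=2, iters=20):
    # initialise at the extremes, then alternate assignment / mean updates
    centers = [min(pixels), max(pixels)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# dark lesion pixels (~40) against brighter surrounding skin (~200)
pixels = [40, 42, 38, 41, 39] * 10 + [200, 198, 202, 199, 201] * 10
lo, hi = kmeans_1d(pixels)
print(round(lo), round(hi))   # 40 200
```

Thresholding each pixel by its nearest final centre then yields the lesion/background labelling from which the border is extracted.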

  7. BpMatch: an efficient algorithm for a segmental analysis of genomic sequences.

    PubMed

    Felicioli, Claudio; Marangoni, Roberto

    2012-01-01

    Here, we propose BpMatch: an algorithm that, working on a suitably modified suffix-tree data structure, is able to compute, in a fast and efficient way, the coverage of a source sequence S on a target sequence T, taking into account direct and reverse segments, possibly overlapping. Using BpMatch, the operator should define a priori the minimum length l of a segment and the minimum number of occurrences minRep, so that only segments longer than l and having a number of occurrences greater than minRep are considered significant. BpMatch outputs the significant segments found and the computed segment-based distance. In the worst case, assuming the alphabet dimension d is a constant, the time required by BpMatch to calculate the coverage is O(l²n). On average, by setting l ≥ 2 log_d(n), the time required to calculate the coverage is only O(n). BpMatch, thanks to the minRep parameter, can also be used to perform a self-covering: to cover a sequence using segments coming from itself, avoiding the trivial solution of a single segment coincident with the whole sequence. The result of the self-covering approach is a spectral representation of the repeats contained in the sequence. BpMatch is freely available at www.sourceforge.net/projects/bpmatch.
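
    The coverage definition can be illustrated with a brute-force sketch; BpMatch itself works on a modified suffix tree for efficiency, and the occurrence test here is a simplification of the paper's criteria:

```python
def revcomp(s):
    return s[::-1].translate(str.maketrans("ACGT", "TGCA"))

def coverage(S, T, l=3, min_rep=1):
    # Mark every position of T lying inside a segment of length >= l that
    # occurs at least min_rep times in S, directly or reverse-complemented.
    # This naive version only illustrates the definition.
    covered = [False] * len(T)
    for i in range(len(T) - l + 1):
        for j in range(len(T), i + l - 1, -1):    # longest segment first
            seg = T[i:j]
            if S.count(seg) + S.count(revcomp(seg)) >= min_rep:
                covered[i:j] = [True] * (j - i)
                break
    return sum(covered) / len(T)

print(coverage("ACGTACGTAA", "ACGTTTT", l=4))   # "ACGT" covers 4 of 7 positions
```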

  8. The EM/MPM algorithm for segmentation of textured images: analysis and further experimental results.

    PubMed

    Comer, M L; Delp, E J

    2000-01-01

    In this paper we present new results relative to the "expectation-maximization/maximization of the posterior marginals" (EM/MPM) algorithm for simultaneous parameter estimation and segmentation of textured images. The EM/MPM algorithm uses a Markov random field model for the pixel class labels and alternately approximates the MPM estimate of the pixel class labels and estimates parameters of the observed image model. The goal of the EM/MPM algorithm is to minimize the expected value of the number of misclassified pixels. We present new theoretical results in this paper which show that the algorithm can be expected to achieve this goal, to the extent that the EM estimates of the model parameters are close to the true values of the model parameters. We also present new experimental results demonstrating the performance of the EM/MPM algorithm.

  9. Analyzing the medical image by using clustering algorithms through segmentation process

    NASA Astrophysics Data System (ADS)

    Kumar, Papendra; Kumar, Suresh

    2011-12-01

    The basic aim of our study is to analyze the medical image. In computer vision, segmentation refers to the process of partitioning a digital image into multiple regions. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. There is a lot of scope for the analysis that we have done in our project; our analysis could be used for the purpose of monitoring the medical image. Medical imaging refers to the techniques and processes used to create images of the human body (or parts thereof) for clinical purposes (medical procedures seeking to reveal, diagnose or examine disease) or medical science (including the study of normal anatomy and function). As a discipline and in its widest sense, it is part of biological imaging and incorporates radiology (in the wider sense), radiological sciences, endoscopy, (medical) thermography, medical photography and microscopy (e.g. for human pathological investigations), as well as measurement and recording techniques which are not primarily designed to produce images.

  10. An improved segmentation algorithm to detect moving object in video sequences

    NASA Astrophysics Data System (ADS)

    Li, Jinkui; Sang, Xinzhu; Wang, Yongqiang; Yan, Binbin; Yu, Chongxiu

    2010-11-01

    The segmentation of moving objects in video sequences is attracting more and more attention because of its important role in various camera video applications, such as video surveillance, traffic monitoring, people tracking, and so on. Conventional segmentation algorithms can be divided into two classes. One class is based on spatial homogeneity, which yields promising output; however, the computation is too complex and heavy for real-time applications. The other class uses change detection as the segmentation criterion to extract the moving object; typical approaches include frame difference, background subtraction and optical flow. A novel algorithm based on adaptive symmetrical difference and background subtraction is proposed. First, the moving object mask is detected through the adaptive symmetrical difference, and the contour of the mask is extracted. Then, adaptive background subtraction is carried out in the acquired region to extract the accurate moving object. Morphological operations and shadow cancellation are adopted to refine the result. Experimental results show that the algorithm is robust and effective in improving segmentation accuracy.
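
    The two change-detection cues the algorithm builds on can be sketched as follows; the thresholds are illustrative placeholders, and the adaptive and morphological refinements of the paper are omitted:

```python
def moving_mask(prev_frame, curr_frame, background, diff_thr=25, bg_thr=25):
    # Frame difference finds pixels that changed between frames; background
    # subtraction confirms they differ from the static scene. A pixel joins
    # the moving-object mask only when both cues fire.
    h, w = len(curr_frame), len(curr_frame[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            moving = abs(curr_frame[y][x] - prev_frame[y][x]) > diff_thr
            changed = abs(curr_frame[y][x] - background[y][x]) > bg_thr
            mask[y][x] = 1 if (moving and changed) else 0
    return mask

bg   = [[10, 10, 10]] * 3
prev = [[10, 10, 10]] * 3
curr = [[10, 200, 10], [10, 200, 10], [10, 10, 10]]   # bright object in column 1
print(moving_mask(prev, curr, bg))
```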

  11. Insight into 3D micro-CT data: exploring segmentation algorithms through performance metrics.

    PubMed

    Perciano, Talita; Ushizima, Daniela; Krishnan, Harinarayan; Parkinson, Dilworth; Larson, Natalie; Pelt, Daniël M; Bethel, Wes; Zok, Frank; Sethian, James

    2017-09-01

    Three-dimensional (3D) micro-tomography (µ-CT) has proven to be an important imaging modality in industry and scientific domains. Understanding the properties of material structure and behavior has produced many scientific advances. An important component of the 3D µ-CT pipeline is image partitioning (or image segmentation), a step that is used to separate various phases or components in an image. Image partitioning schemes require specific rules for different scientific fields, but a common strategy consists of devising metrics to quantify performance and accuracy. The present article proposes a set of protocols to systematically analyze and compare the results of unsupervised classification methods used for segmentation of synchrotron-based data. The proposed dataflow for Materials Segmentation and Metrics (MSM) provides 3D micro-tomography image segmentation algorithms, such as statistical region merging (SRM), k-means algorithm and parallel Markov random field (PMRF), while offering different metrics to evaluate segmentation quality, confidence and conformity with standards. Both experimental and synthetic data are assessed, illustrating quantitative results through the MSM dashboard, which can return sample information such as media porosity and permeability. The main contributions of this work are: (i) to deliver tools to improve material design and quality control; (ii) to provide datasets for benchmarking and reproducibility; (iii) to yield good practices in the absence of standards or ground-truth for ceramic composite analysis.
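
    One of the sample quantities mentioned, media porosity, is straightforward to compute from a segmented volume. A minimal sketch, assuming a labelling of 0 = pore and 1 = solid (the MSM dashboard's actual conventions are not given in the abstract):

```python
def porosity(binary_volume, pore_label=0):
    # porosity = pore voxels / total voxels in the segmented volume
    flat = [v for plane in binary_volume for row in plane for v in row]
    return flat.count(pore_label) / len(flat)

# 2x2x2 toy volume with 2 pore voxels out of 8
vol = [[[0, 1], [1, 1]], [[1, 1], [0, 1]]]
print(porosity(vol))   # 0.25
```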

  12. Feature measures for the segmentation of neuronal membrane using a machine learning algorithm

    NASA Astrophysics Data System (ADS)

    Iftikhar, Saadia; Godil, Afzal

    2013-12-01

    In this paper, we present a Support Vector Machine (SVM) based pixel classifier for a semi-automated segmentation algorithm to detect neuronal membrane structures in stacks of electron microscopy images of brain tissue samples. The algorithm uses high-dimensional feature spaces extracted from center-surrounded patches and some distinct edge-sensitive features for each pixel in the image, together with a training dataset, to segment neuronal membrane structures from the background. Threshold conditions are then applied to remove regions below a certain size, and morphological operations, such as filling of the detected objects, are performed to obtain compact objects. The performance of the segmentation method is calculated on unseen data using three distinct error measures against the respective ground truth: pixel error, warping error, and Rand error, as well as a pixel-by-pixel accuracy measure. The trained SVM classifier achieves best values of 0.23, 0.016 and 0.15 on these three error measures, respectively, while the best pixel-by-pixel accuracy reaches 77% on the given dataset. The results presented here are one step further towards exploring possible ways to solve hard problems such as segmentation in medical image analysis. In the future, we plan to extend the approach to 3D segmentation for 3D datasets, both to retain the topological structures in the dataset and for ease of further analysis.
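
    Of the three error measures, pixel error is simple enough to sketch directly (warping and Rand error require more machinery than fits an example):

```python
def pixel_error(pred, truth):
    # pixel error: fraction of pixels whose predicted label disagrees
    # with the ground-truth label
    flat_p = [v for row in pred for v in row]
    flat_t = [v for row in truth for v in row]
    return sum(p != t for p, t in zip(flat_p, flat_t)) / len(flat_t)

pred  = [[1, 0, 0], [1, 1, 0]]
truth = [[1, 0, 1], [1, 0, 0]]
print(pixel_error(pred, truth))   # 2 of 6 pixels differ
```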

  13. Brain tumor segmentation in MR slices using improved GrowCut algorithm

    NASA Astrophysics Data System (ADS)

    Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying

    2015-12-01

    The detection of brain tumors from MR images is very significant for medical diagnosis and treatment. However, existing methods are mostly based on manual or semiautomatic segmentation, which is awkward when dealing with a large number of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of a symmetric brain structure, the method improves the interactive GrowCut algorithm by further using the bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up for the deficiency of the bounding box method. After segmentation, the 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and on actual clinical MR images. The result of the proposed method is compared qualitatively and quantitatively with the actual position of the simulated 3D tumor. In addition, our automatic method produces performance equivalent to manual segmentation and to the interactive GrowCut with manual intervention, while providing fully automatic segmentation.

  14. Numerical arc segmentation algorithm for a radio conference - A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    A detailed description of a Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software package for communication satellite systems planning is presented. This software provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC - 88) on the use of the GEO and the planning of space services utilizing GEO. The features of the NASARC software package are described, and detailed information is given about the function of each of the four NASARC program modules. The results of a sample world scenario are presented and discussed.

  16. Comparison of different automatic threshold algorithms for image segmentation in microscope images

    NASA Astrophysics Data System (ADS)

    Boecker, Wilfried; Muller, W.-U.; Streffer, Christian

    1995-08-01

    Image segmentation is almost always a necessary step in image processing. The employed threshold algorithms are based on the detection of local minima in the gray level histograms of the entire image. In automatic cell recognition equipment, like chromosome analysis or micronuclei counting systems, flexible and adaptive thresholds are required to account for variation in the gray level intensities of the background and of the specimen. We have studied three different methods of threshold determination: 1) a statistical procedure, which uses interclass entropy maximization of the gray level histogram; the iterative algorithm can be used for multithreshold segmentation, iteration step i contributing 2^(i-1) additional thresholds; 2) a numerical approach, which detects local minima in the gray level histogram; the algorithm must be tailored and optimized for specific applications like cell recognition, with two different thresholds for cell nuclei and cell cytoplasm segmentation; 3) an artificial neural network, which is trained with learning sets of image histograms and the corresponding interactively determined thresholds. We have investigated feed-forward networks with one and two layers, respectively. The gray level frequencies are used as inputs for the net; the number of different thresholds per image determines the output channels. We have tested and compared these different threshold algorithms for practical use in fluorescence microscopy as well as in bright field microscopy. The implementation and the results are presented and discussed.
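
    Method 1, interclass entropy maximization, can be sketched for a single threshold in the style of Kapur's criterion; this is a generic reconstruction, not the authors' exact iterative procedure:

```python
import math

def entropy_threshold(hist):
    # Pick the threshold t maximizing the sum of the entropies of the
    # background (bins < t) and foreground (bins >= t) distributions.
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, -1.0
    for t in range(1, len(hist)):
        w0 = sum(p[:t])
        w1 = 1 - w0
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(q / w0 * math.log(q / w0) for q in p[:t] if q > 0)
        h1 = -sum(q / w1 * math.log(q / w1) for q in p[t:] if q > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# bimodal toy histogram: dark peak in bins 0-3, bright peak in bins 8-11
hist = [10, 30, 30, 10, 0, 0, 0, 0, 10, 30, 30, 10]
print(entropy_threshold(hist))   # first cut between the two modes
```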

  17. Enhancement dark channel algorithm of color fog image based on the local segmentation

    NASA Astrophysics Data System (ADS)

    Yun, Lijun; Gao, Yin; Shi, Jun-sheng; Xu, Ling-zhang

    2015-04-01

    The classical dark channel algorithm yields good results when processing single fog images, but in regions of larger contrast it distorts image hue, brightness and saturation to a certain degree and produces a halo phenomenon. In view of this, through extensive experiments, this paper identifies some of the factors causing the halo phenomenon, and an enhanced dark channel algorithm for color fog images based on local segmentation is proposed. Building on dark channel theory, the classical mathematical model is first modified, mainly to correct the brightness and saturation of the image. Then, according to local adaptive segmentation theory, the image is processed in overlapping blocks, and each pixel value is obtained from the segmentation processing according to statistical rules, yielding the local image. Finally, dark channel theory is applied to obtain the enhanced fog image. Both subjective observation and objective evaluation show that the algorithm outperforms the classical dark channel algorithm overall and in the details.
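
    The dark channel itself, the quantity both the classical and the enhanced algorithm start from, is the per-pixel minimum over the colour channels and a local patch; a small sketch with an invented image:

```python
def dark_channel(rgb_image, patch=3):
    # rgb_image is a 2-D list of (r, g, b) tuples; for each pixel take the
    # minimum over the three channels and over a patch x patch window.
    h, w = len(rgb_image), len(rgb_image[0])
    r = patch // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(
                min(rgb_image[yy][xx])
                for yy in range(max(0, y - r), min(h, y + r + 1))
                for xx in range(max(0, x - r), min(w, x + r + 1)))
    return out

img = [[(200, 210, 220)] * 3,
       [(200, 210, 220), (5, 10, 15), (200, 210, 220)],
       [(200, 210, 220)] * 3]
print(dark_channel(img))   # the single dark pixel pulls every 3x3 patch down to 5
```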

  18. Segmentation of retinal blood vessels using a novel clustering algorithm (RACAL) with a partial supervision strategy.

    PubMed

    Salem, Sameh A; Salem, Nancy M; Nandi, Asoke K

    2007-03-01

    In this paper, segmentation of blood vessels from colour retinal images using a novel clustering algorithm with a partial supervision strategy is proposed. The proposed clustering algorithm, a RAdius based Clustering ALgorithm (RACAL), uses a distance-based principle to map the distributions of the data, utilising the premise that clusters are determined by a distance parameter, without having to specify the number of clusters. Additionally, the proposed clustering algorithm is enhanced with a partial supervision strategy, and it is demonstrated that it is able to segment blood vessels of small diameters and low contrasts. Results are compared with those from the KNN classifier and show that the proposed RACAL performs better than the KNN in the case of abnormal images, as it succeeds in segmenting small and low-contrast blood vessels, while it achieves comparable results for normal images. For the automation process, RACAL can be used as a classifier, and results show that it performs better than the KNN classifier on both normal and abnormal images.
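
    The distance-parameter premise behind RACAL can be illustrated with plain leader clustering (not the published algorithm): the number of clusters is never specified, only a radius:

```python
def distance_clustering(points, radius):
    # 1-D leader clustering: a point within `radius` of an existing centre
    # joins that cluster; otherwise it seeds a new one, so the cluster count
    # follows from the distance parameter alone.
    centres, labels = [], []
    for p in points:
        for i, c in enumerate(centres):
            if abs(p - c) <= radius:
                labels.append(i)
                break
        else:
            centres.append(p)
            labels.append(len(centres) - 1)
    return centres, labels

pts = [1.0, 1.2, 0.9, 10.0, 10.3, 9.8, 25.0]
centres, labels = distance_clustering(pts, radius=2.0)
print(len(centres), labels)   # 3 clusters: [0, 0, 0, 1, 1, 1, 2]
```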

  19. Novel real-time volumetric tool segmentation algorithm for intraoperative microscope integrated OCT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Viehland, Christian; Keller, Brenton; Carrasco-Zevallos, Oscar; Cunefare, David; Shen, Liangbo; Toth, Cynthia; Farsiu, Sina; Izatt, Joseph A.

    2016-03-01

    Optical coherence tomography (OCT) allows for micron-scale imaging of the human retina and cornea. Current-generation research and commercial intrasurgical OCT prototypes are limited to live B-scan imaging. Our group has developed an intraoperative microscope integrated OCT system capable of live 4D imaging. With a heads-up display (HUD), 4D imaging allows for dynamic intrasurgical visualization of tool-tissue interaction and surgical maneuvers. Currently our system relies on operator-based manual tracking to correct for patient motion and motion caused by the surgeon, to track the surgical tool, and to select the correct B-scan to display on the HUD. Even when tracking only bulk motion, the operator sometimes lags behind and the surgical region of interest can drift out of the OCT field of view. To facilitate imaging, we report on the development of a fast volume-based tool segmentation algorithm. The algorithm is based on a previously reported volume rendering algorithm and can identify both the tool and the retinal surface. The algorithm requires 45 ms per volume and can be used to actively place the B-scan across the tool-tissue interface. Alternatively, real-time tool segmentation can be used to allow the surgeon to use the surgical tool as an interactive B-scan pointer.

  20. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    PubMed

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
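
    The objective being maximized, Otsu's between-class variance, can be evaluated exhaustively for a single threshold; the paper's contribution is avoiding exactly this exhaustive search when several thresholds are optimized jointly:

```python
def otsu_threshold(hist):
    # Exhaustive single-threshold Otsu: maximise the between-class
    # variance w0 * w1 * (mu0 - mu1)^2 over all cut points t.
    total = sum(hist)
    best_t, best_v = 0, -1.0
    for t in range(1, len(hist)):
        n0 = sum(hist[:t])
        n1 = total - n0
        if n0 == 0 or n1 == 0:
            continue
        w0, w1 = n0 / total, n1 / total
        mu0 = sum(i * h for i, h in enumerate(hist[:t])) / n0
        mu1 = sum(i * h for i, h in enumerate(hist[t:], start=t)) / n1
        v = w0 * w1 * (mu0 - mu1) ** 2
        if v > best_v:
            best_t, best_v = t, v
    return best_t

hist = [0, 40, 60, 0, 0, 0, 0, 55, 45, 0]   # modes near grey levels 1-2 and 7-8
print(otsu_threshold(hist))                  # first cut between the modes
```

With k thresholds the search space grows combinatorially, which is why metaheuristics such as the flower pollination algorithm are used instead of exhaustion.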

  1. The cascaded moving k-means and fuzzy c-means clustering algorithms for unsupervised segmentation of malaria images

    NASA Astrophysics Data System (ADS)

    Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Halim, Nurul Hazwani Abd; Mohamed, Zeehaida

    2015-05-01

    Malaria is a life-threatening parasitic infectious disease that accounts for nearly one million deaths each year. Given the need for prompt and accurate diagnosis of malaria, the current study proposes a clustering-based unsupervised pixel segmentation approach to obtain fully segmented red blood cells (RBCs) infected with malaria parasites from thin blood smear images of the P. vivax species. To obtain the segmented infected cells, the malaria images are first enhanced using a modified global contrast stretching technique. Then, an unsupervised segmentation technique based on clustering is applied to the intensity component of the malaria image in order to segment the infected cells from the blood cell background. In this study, cascaded moving k-means (MKM) and fuzzy c-means (FCM) clustering algorithms are proposed for malaria slide image segmentation. After that, a median filter is applied to smooth the image and to remove small unwanted background regions. Finally, a seeded region growing area extraction algorithm is applied to remove large unwanted regions that still appear in the image and are too large to be removed by the median filter. The effectiveness of the proposed cascaded MKM and FCM clustering algorithms has been analyzed qualitatively and quantitatively by comparing the cascaded algorithm with the MKM and FCM algorithms alone. Overall, the results indicate that segmentation using the proposed cascaded clustering algorithm produces the best segmentation performance, achieving acceptable sensitivity as well as high specificity and accuracy compared with the segmentation results of the MKM and FCM algorithms.
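
    The first step, contrast stretching, is easy to sketch in its unmodified global form (the paper uses a modified variant whose details the abstract does not give):

```python
def global_contrast_stretch(pixels, lo_out=0, hi_out=255):
    # linearly map the observed min..max intensity range onto lo_out..hi_out
    lo, hi = min(pixels), max(pixels)
    scale = (hi_out - lo_out) / (hi - lo)
    return [round(lo_out + (p - lo) * scale) for p in pixels]

print(global_contrast_stretch([100, 110, 120, 130]))  # [0, 85, 170, 255]
```

Stretching the narrow intensity range of a smear image over the full grey scale makes the subsequent clustering of infected cells against the background easier.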

  2. Multispectral image segmentation using parallel mean shift algorithm and CUDA technology

    NASA Astrophysics Data System (ADS)

    Zghidi, Hafedh; Walczak, Maksym; Świtoński, Adam

    2016-06-01

    We present a parallel mean shift algorithm running on CUDA and its possible application to the segmentation of multispectral images. The aim of this paper is to present a method of analyzing highly noised multispectral images of various objects so that important features are enhanced and easier to identify. The algorithm finds application in the analysis of multispectral images of eyes, where certain features visible only at specific wavelengths must be made clearly visible despite a high level of noise, and where sequential processing time is very long.
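
    The mean shift update itself is simple; a scalar (1-D, flat-kernel) version shows the rule that the GPU implementation applies to many points in parallel:

```python
def mean_shift_1d(data, x, bandwidth=2.0, iters=100, tol=1e-6):
    # Repeatedly move x to the mean of the samples within `bandwidth`
    # until the shift falls below `tol`; x converges onto a density mode.
    for _ in range(iters):
        window = [d for d in data if abs(d - x) <= bandwidth]
        new_x = sum(window) / len(window)
        if abs(new_x - x) < tol:
            break
        x = new_x
    return x

data = [1.0, 1.5, 2.0, 2.5, 9.0, 9.5, 10.0]
print(round(mean_shift_1d(data, 0.0), 2))   # converges onto the low mode
```

Since each trajectory is independent of the others, one CUDA thread per starting point parallelizes the procedure naturally.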

  3. A Segmentation Algorithm for X-ray 3D Angiography and Vessel Catheterization

    SciTech Connect

    Franchi, Danilo; Rosa, Luigi; Placidi, Giuseppe

    2008-11-06

    Vessel catheterization is a clinical procedure usually performed by a specialist under X-ray fluoroscopic guidance with contrast media. In the present paper, we present a simple and efficient algorithm for vessel segmentation which allows vessel separation and extraction from the background (noise and signal coming from other organs). This would reduce the number of projections (X-ray scans) needed to reconstruct a complete and accurate 3D vascular model, and hence the radiological risk, in particular for the patient. In what follows, the algorithm is described and some preliminary experimental results are reported illustrating the behaviour of the proposed method.

  4. Tissue Probability Map Constrained 4-D Clustering Algorithm for Increased Accuracy and Robustness in Serial MR Brain Image Segmentation

    PubMed Central

    Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen

    2010-01-01

    The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results might be biased by tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment the white matter tissue of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., a series of 3-D MR brain images of the same subject at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both accuracy and robustness for serial image computing, and at the same time produced longitudinally consistent segmentations and stable measures. In the algorithm, the tissue probability maps consist of both population-based and subject-specific segmentation priors. An experimental study using both simulated longitudinal MR brain data and Alzheimer's Disease Neuroimaging Initiative (ADNI) data confirmed that using both priors yields more accurate and robust segmentation results. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging with subtle morphological changes for neurological disorders. PMID:26566399

  5. [Automatic segmentation of clustered breast cancer cells based on modified watershed algorithm and concavity points searching].

    PubMed

    Tong, Zhen; Pu, Lixin; Dong, Fangjie

    2013-08-01

    As a common malignant tumor, breast cancer has seriously affected women's physical and psychological health and even threatened their lives, and its incidence has begun to show a gradually rising trend in some places in the world. As a common assistive pathological diagnosis technique, immunohistochemistry plays an important role in the diagnosis of breast cancer. Usually, pathologists isolate positive cells from stained specimens processed by the immunohistochemical technique and calculate the ratio of positive cells, which is a core indicator of breast cancer in diagnosis. In this paper, we present a new algorithm, based on a modified watershed algorithm and concavity point searching, to identify the positive cells and segment the clustered cells automatically, and then realize automatic counting. Comparison of our experimental results with those of other methods shows that our method can exactly segment the clustered cells without losing any geometrical cell features and give the exact number of separated cells.

  6. Minimum mutual information based level set clustering algorithm for fast MRI tissue segmentation.

    PubMed

    Dai, Shuanglu; Man, Hong; Zhan, Shu

    2015-01-01

    Accurate and accelerated MRI tissue recognition is a crucial preprocessing step for real-time 3D tissue modeling and medical diagnosis. This paper proposes an information de-correlated clustering algorithm, implemented by a variational level set method, for fast tissue segmentation. The key idea is to design a local correlation term between the original image and a piecewise constant within the variational framework; minimizing this correlation then leads to de-correlated piecewise regions. First, by introducing a continuous bounded variational domain describing the image, a probabilistic image restoration model is assumed to correct the distortion. Second, regional mutual information is introduced to measure the correlation between piecewise regions and the original image. As a de-correlated description of the image, the piecewise constants are finally solved by numerical approximation and level set evolution. The converged piecewise constants automatically cluster the image domain into discriminative regions. The segmentation results show that our algorithm performs well in terms of time consumption, accuracy, convergence and clustering capability.
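
    The mutual information idea can be illustrated for discrete label maps (a simplification of the paper's continuous regional formulation):

```python
import math
from collections import Counter

def mutual_information(labels_a, labels_b):
    # I(A;B) = sum_ab p(a,b) * log( p(a,b) / (p(a) p(b)) )
    n = len(labels_a)
    pa = Counter(labels_a)
    pb = Counter(labels_b)
    pab = Counter(zip(labels_a, labels_b))
    return sum(c / n * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

a = [0, 0, 1, 1, 0, 1, 0, 1]
print(round(mutual_information(a, a), 3))        # identical maps: MI = ln 2
print(round(mutual_information(a, [0] * 8), 3))  # constant map: MI = 0
```

Driving such a dependence measure down between the piecewise-constant description and the original image is what "de-correlated" regions means here.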

  7. Research on algorithm about content-based segmentation and spatial transformation for stereo panorama

    NASA Astrophysics Data System (ADS)

    Li, Zili; Xia, Xuezhi; Zhu, Guangxi; Zhu, Yaoting

    2004-03-01

    The principle of constructing a G&IBMR virtual scene based on a stereo panorama with binocular stereovision is put forward. Closed cubic B-splines are used for content-based segmentation of virtual objects in the stereo panorama, and all objects in the current viewing frustum are ordered by their depth information in a current object linked list (COLL). A formula is derived to calculate the depth of a point in the virtual scene from its parallax, based on a parallel binocular vision model. A bilinear interpolation algorithm is proposed to deform the segmentation template and perform image splicing between three key positions. We also use the positional and directional transformation of a binocular virtual camera bound to the user avatar to drive the transformation of the stereo panorama, so as to achieve real-time consistency of perspective relationships and image masking. Experimental results show that the algorithm in this paper is effective and feasible.
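
    For the parallel binocular model mentioned above, the standard depth-from-parallax relation (which may differ in detail from the paper's derivation) is:

```python
def depth_from_parallax(focal_px, baseline, disparity_px):
    """Parallel binocular model: a scene point at depth Z projects into
    the two views with horizontal parallax d = f * B / Z (f the focal
    length in pixels, B the camera baseline), hence Z = f * B / d."""
    return focal_px * baseline / disparity_px
```

    For example, with an 800 px focal length, a 0.1 m baseline, and a 16 px disparity, the point lies at 5 m depth.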

  8. An efficient technique for nuclei segmentation based on ellipse descriptor analysis and improved seed detection algorithm.

    PubMed

    Xu, Hongming; Lu, Cheng; Mandal, Mrinal

    2014-09-01

    In this paper, we propose an efficient method for segmenting cell nuclei in skin histopathological images. The proposed technique consists of four modules. First, it separates the nuclei regions from the background with an adaptive threshold technique. Next, an elliptical descriptor is used to detect the isolated nuclei with elliptical shapes. This descriptor classifies the nuclei regions based on two ellipticity parameters. Nuclei clumps and nuclei with irregular shapes are then localized by an improved seed detection technique based on voting in the eroded nuclei regions. Finally, undivided nuclei regions are segmented by a marked watershed algorithm. Experimental results on 114 different image patches indicate that the proposed technique provides superior performance in nuclei detection and segmentation.
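
    The abstract does not specify the two ellipticity parameters, so the following is a hypothetical moment-based pair with the same intent: the axis ratio of the moment-matched ellipse and the fraction of that ellipse's area covered by the region.

```python
import numpy as np

def ellipticity(mask):
    """Two illustrative ellipticity parameters from region moments:
    the axis ratio b/a of the moment-matched ellipse and the fraction
    of that ellipse's area covered by the region's pixels."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    evals = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]
    a, b = 2.0 * np.sqrt(evals)      # semi-axes of the moment ellipse
    return b / a, len(pts) / (np.pi * a * b)

# A filled disc is maximally elliptical: both parameters approach 1.
yy, xx = np.mgrid[0:41, 0:41]
disc = (xx - 20) ** 2 + (yy - 20) ** 2 <= 15 ** 2
```

    An elongated or concave clump scores low on one or both parameters and would be routed to the seed-detection stage instead.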

  9. A Rate-Distortion-Based Merging Algorithm for Compressed Image Segmentation

    PubMed Central

    Juang, Ying-Shen; Hsin, Hsi-Chin; Sung, Tze-Yun; Shieh, Yaw-Shih; Cattani, Carlo

    2012-01-01

    Original images are often compressed for communication applications. To avoid the burden of decompression computations, it is thus desirable to segment images in the compressed domain directly. This paper presents a simple rate-distortion-based scheme to segment images in the JPEG2000 domain. It is based on a binary arithmetic code table used in the JPEG2000 standard, which is available at both encoder and decoder; thus, there is no need to transmit the segmentation result. Experimental results on the Berkeley image database show that the proposed algorithm is preferable in terms of running time and the quantitative measures: probabilistic Rand index (PRI) and boundary displacement error (BDE). PMID:23118800
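
    The PRI measure cited above builds on the plain Rand index; a minimal sketch of the latter (PRI averages it over a set of reference segmentations):

```python
from itertools import combinations

def rand_index(seg_a, seg_b):
    """Rand index between two labelings: the fraction of element pairs
    on which they agree, i.e. both place the pair in the same class or
    both place it in different classes."""
    pairs = list(combinations(range(len(seg_a)), 2))
    agree = sum((seg_a[i] == seg_a[j]) == (seg_b[i] == seg_b[j])
                for i, j in pairs)
    return agree / len(pairs)
```

    Note the index is invariant to label permutation: [0, 0, 1, 1] and [1, 1, 0, 0] score a perfect 1.0.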

  10. Automatic brain tumor segmentation with a fast Mumford-Shah algorithm

    NASA Astrophysics Data System (ADS)

    Müller, Sabine; Weickert, Joachim; Graf, Norbert

    2016-03-01

    We propose a fully-automatic method for brain tumor segmentation that does not require any training phase. Our approach is based on a sequence of segmentations using the Mumford-Shah cartoon model with varying parameters. In order to come up with a very fast implementation, we extend the recent primal-dual algorithm of Strekalovskiy et al. (2014) from the 2D to the medically relevant 3D setting. Moreover, we suggest a new confidence refinement and show that it can increase the precision of our segmentations substantially. Our method is evaluated on 188 data sets with high-grade gliomas and 25 with low-grade gliomas from the BraTS14 database. Within a computation time of only three minutes, we achieve Dice scores that are comparable to state-of-the-art methods.
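
    The Dice score used for the evaluation above is the standard overlap measure between a computed mask and a reference mask:

```python
import numpy as np

def dice(a, b):
    """Dice score between two binary masks: 2|A intersect B| / (|A| + |B|),
    equal to 1 for identical masks and 0 for disjoint ones."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

    For instance, a mask covering two pixels against a reference covering one of them scores 2/3.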

  11. Application of Micro-segmentation Algorithms to the Healthcare Market: A Case Study

    SciTech Connect

    Sukumar, Sreenivas R; Aline, Frank

    2013-01-01

    We draw inspiration from the recent success of loyalty programs and targeted, personalized market campaigns of retail companies such as Kroger, Netflix, etc. to understand beneficiary behaviors in the healthcare system. We posit that we can emulate the financial success these companies have achieved by better understanding and predicting customer behaviors, and translate that success to healthcare operations. Toward that goal, we survey current practices in market micro-segmentation research and analyze health insurance claims data using those algorithms. We present results and insights from micro-segmentation of the beneficiaries using different techniques and discuss how the interpretation can assist with matching cost-effective insurance payment models to the beneficiary micro-segments.

  12. US-Cut: interactive algorithm for rapid detection and segmentation of liver tumors in ultrasound acquisitions

    NASA Astrophysics Data System (ADS)

    Egger, Jan; Voglreiter, Philip; Dokter, Mark; Hofmann, Michael; Chen, Xiaojun; Zoller, Wolfram G.; Schmalstieg, Dieter; Hann, Alexander

    2016-04-01

    Ultrasound (US) is the most commonly used liver imaging modality worldwide. It plays an important role in follow-up of cancer patients with liver metastases. We present an interactive segmentation approach for liver tumors in US acquisitions. Due to the low image quality and the low contrast between the tumors and the surrounding tissue in US images, the segmentation is very challenging. Thus, the clinical practice still relies on manual measurement and outlining of the tumors in the US images. We target this problem by applying an interactive segmentation algorithm to the US data, allowing the user to get real-time feedback of the segmentation results. The algorithm has been developed and tested hand-in-hand by physicians and computer scientists to make sure a future practical usage in a clinical setting is feasible. To cover typical acquisitions from the clinical routine, the approach has been evaluated with dozens of datasets where the tumors are hyperechoic (brighter), hypoechoic (darker) or isoechoic (similar) in comparison to the surrounding liver tissue. Due to the interactive real-time behavior of the approach, it was possible even in difficult cases to find satisfying segmentations of the tumors within seconds and without parameter settings, and the average tumor deviation was only 1.4 mm compared with manual measurements. The long-term goal, however, is to ease the volumetric acquisition of liver tumors in order to evaluate treatment response. An additional aim is the registration of intraoperative US images via the interactive segmentations to the patient's pre-interventional CT acquisitions.

  13. Ancient architecture point cloud data segmentation based on modified fuzzy C-means clustering algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Jianghong; Li, Deren; Wang, Yanmin

    2008-12-01

    Segmentation of point cloud data is a key but difficult problem for 3-D reconstruction of architecture. Compared with reverse engineering, ancient architecture point cloud data contain more noise at edges because of mirror reflection, and traditional hard (non-fuzzy) methods cannot represent the case of borderline points belonging to two regions, so it is difficult for them to satisfy the demands of segmenting ancient architecture point cloud data. Ancient architecture is mostly composed of columniation, plinth, arch, girder, and tile, arranged in a specific order. Each component's surface is regular and smooth, and the belongingness of borderline points is very blurry. Based on these characteristics, the author proposes a modified fuzzy c-means clustering (MFCM) algorithm that adds geometrical information during clustering. In addition, this method improves the belongingness constraints to reduce the influence of noise on the segmentation result. The algorithm was used in the project "Digital surveying of ancient architecture: the Forbidden City". Experiments show that the method has good anti-noise performance, accuracy, and adaptability, and that the degree of human intervention is greatly reduced. After segmentation, interior and edge points can be distinguished according to the membership of every point, which facilitates the subsequent surface feature extraction and model identification and provides effective support for reconstructing three-dimensional models of ancient buildings.
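
    For reference, here is the standard fuzzy c-means baseline that the MFCM modifies; the paper's geometric term and improved belongingness constraints are omitted in this sketch.

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=40, seed=0):
    """Standard fuzzy c-means. X is an (n, d) data array; returns the
    cluster centers and the fuzzy membership matrix U of shape (n, c),
    where each row of U sums to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                                   # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))                  # membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

    Unlike hard k-means, every point keeps a graded membership in every cluster, which is exactly what makes blurry borderline points representable.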

  14. Alignment, segmentation and 3-D reconstruction of serial sections based on automated algorithm

    NASA Astrophysics Data System (ADS)

    Bian, Weiguo; Tang, Shaojie; Xu, Qiong; Lian, Qin; Wang, Jin; Li, Dichen

    2012-12-01

    A well-defined three-dimensional (3-D) reconstruction of bone-cartilage transitional structures is crucial for osteochondral restoration. This paper presents an accurate, computationally efficient and fully automated algorithm for the alignment and segmentation of two-dimensional (2-D) serial sections to construct a 3-D model of bone-cartilage transitional structures. The entire system includes the following five components: (1) image harvest, (2) image registration, (3) image segmentation, (4) 3-D reconstruction and visualization, and (5) evaluation. A computer program was developed in the Matlab environment for the automatic alignment and segmentation of serial sections. The automatic alignment algorithm is based on the cross-correlation of the positions of characteristic anatomical feature points in two sequential sections. A method combining automatic segmentation and image thresholding was applied to capture the regions and structures of interest. SEM micrographs and a 3-D model reconstructed directly in a digital microscope were used to evaluate the reliability and accuracy of this strategy. The morphology of the 3-D model constructed from serial sections is consistent with the SEM micrographs and the digital-microscope 3-D model.
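
    The cross-correlation alignment step can be sketched with a whole-image FFT correlation; this is a stand-in for the paper's feature-point matching, which it does not detail.

```python
import numpy as np

def section_offset(a, b):
    """Translation (dy, dx) that best maps section a onto section b,
    taken from the peak of their FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    return dy, dx

# Synthetic check: a section and a copy shifted by (2, 3) pixels.
rng = np.random.default_rng(0)
section = rng.random((32, 32))
shifted = np.roll(section, (2, 3), axis=(0, 1))
```

    Undoing the recovered offset on each section in turn stacks the series into a registered volume ready for segmentation.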

  15. Therapy Operating Characteristic (TOC) Curves and their Application to the Evaluation of Segmentation Algorithms.

    PubMed

    Barrett, Harrison H; Wilson, Donald W; Kupinski, Matthew A; Aguwa, Kasarachi; Ewell, Lars; Hunter, Robert; Müller, Stefan

    2010-01-01

    This paper presents a general framework for assessing imaging systems and image-analysis methods on the basis of therapeutic rather than diagnostic efficacy. By analogy to receiver operating characteristic (ROC) curves, it utilizes the Therapy Operating Characteristic or TOC curve, which is a plot of the probability of tumor control vs. the probability of normal-tissue complications as the overall level of a radiotherapy treatment beam is varied. The proposed figure of merit is the area under the TOC, denoted AUTOC. If the treatment planning algorithm is held constant, AUTOC is a metric for the imaging and image-analysis components, and in particular for segmentation algorithms that are used to delineate tumors and normal tissues. On the other hand, for a given set of segmented images, AUTOC can also be used as a metric for the treatment plan itself. A general mathematical theory of TOC and AUTOC is presented and then specialized to segmentation problems. Practical approaches to implementation of the theory in both simulation and clinical studies are presented. The method is illustrated with a brief study of segmentation methods for prostate cancer.
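
    Given sampled TOC points, the AUTOC figure of merit can be computed by simple trapezoidal integration, for example:

```python
def autoc(p_ntc, p_tc):
    """Area under a sampled TOC curve: probability of tumor control
    (p_tc) vs. probability of normal-tissue complication (p_ntc),
    integrated by the trapezoidal rule. Points must be sorted by
    increasing p_ntc."""
    return sum((p_tc[i] + p_tc[i + 1]) / 2.0 * (p_ntc[i + 1] - p_ntc[i])
               for i in range(len(p_ntc) - 1))
```

    A segmentation whose TOC hugs the upper-left corner (high tumor control at low complication probability) yields an AUTOC near 1.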

  16. A fully automated algorithm under modified FCM framework for improved brain MR image segmentation.

    PubMed

    Sikka, Karan; Sinha, Nitesh; Singh, Pankaj K; Mishra, Amit K

    2009-09-01

    Automated brain magnetic resonance image (MRI) segmentation is a complex problem especially if accompanied by quality depreciating factors such as intensity inhomogeneity and noise. This article presents a new algorithm for automated segmentation of both normal and diseased brain MRI. An entropy driven homomorphic filtering technique has been employed in this work to remove the bias field. The initial cluster centers are estimated using a proposed algorithm called histogram-based local peak merger using adaptive window. Subsequently, a modified fuzzy c-means (MFCM) technique using neighborhood pixel considerations is applied. Finally, a new technique called neighborhood-based membership ambiguity correction (NMAC) has been used for smoothing the boundaries between different tissue classes as well as to remove small pixel level noise, which appears as misclassified pixels even after the MFCM approach. NMAC leads to much sharper boundaries between tissues and, hence, has been found to be highly effective in prominently estimating the tissue and tumor areas in a brain MR scan. The algorithm has been validated against MFCM and the FMRIB software library using MRI scans from BrainWeb. Superior results to those achieved with the MFCM technique have been observed along with the collateral advantages of fully automatic segmentation, faster computation and faster convergence of the objective function.

  17. Evolutionary algorithms with segment-based search for multiobjective optimization problems.

    PubMed

    Li, Miqing; Yang, Shengxiang; Li, Ke; Liu, Xiaohui

    2014-08-01

    This paper proposes a variation operator, called segment-based search (SBS), to improve the performance of evolutionary algorithms on continuous multiobjective optimization problems. SBS divides the search space into many small segments according to the evolutionary information feedback from the set of current optimal solutions. Two operations, micro-jumping and macro-jumping, are implemented upon these segments in order to guide an efficient information exchange among "good" individuals. Moreover, the running of SBS is adaptive according to the current evolutionary status. SBS is activated only when the population evolves slowly, depending on general genetic operators (e.g., mutation and crossover). A comprehensive set of 36 test problems is employed for experimental verification. The influence of two algorithm settings (i.e., the dimensionality and boundary relaxation strategy) and two probability parameters in SBS (i.e., the SBS rate and micro-jumping proportion) are investigated in detail. Moreover, an empirical comparative study with three representative variation operators is carried out. Experimental results show that the incorporation of SBS into the optimization process can improve the performance of evolutionary algorithms for multiobjective optimization problems.

  18. Statistical Learning Algorithm for In-situ and Invasive Breast Carcinoma Segmentation

    PubMed Central

    Jayender, Jagadeesan; Gombos, Eva; Chikarmane, Sona; Dabydeen, Donnette; Jolesz, Ferenc A.; Vosburgh, Kirby G.

    2013-01-01

    DCE-MRI has proven to be a highly sensitive imaging modality in diagnosing breast cancers. However, analyzing DCE-MRI is time-consuming and prone to errors due to the large volume of data. Mathematical models to quantify contrast perfusion, such as the Black Box methods and pharmacokinetic analysis, are inaccurate, sensitive to noise and depend on a large number of external factors such as imaging parameters, patient physiology, arterial input function, fitting algorithms etc., leading to inaccurate diagnosis. In this paper, we have developed a novel Statistical Learning Algorithm for Tumor Segmentation (SLATS) based on Hidden Markov Models to auto-segment regions of angiogenesis corresponding to tumor. The SLATS algorithm has been trained to identify voxels belonging to the tumor class using the time-intensity curve, the first and second derivatives of the intensity curves ("velocity" and "acceleration" respectively) and a composite vector consisting of a concatenation of the intensity, velocity and acceleration vectors. The results of SLATS trained for the four vectors are shown for 22 Invasive Ductal Carcinoma (IDC) and 19 Ductal Carcinoma In Situ (DCIS) cases. SLATS trained on the velocity tuple shows the best performance in delineating the tumors when compared with the segmentation performed by an expert radiologist and the output of a commercially available software, CADstream. PMID:23693000
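
    The four candidate feature vectors described above are straightforward to derive from a voxel's time-intensity curve; a sketch using finite differences (the HMM classifier itself is not reproduced here):

```python
import numpy as np

def slats_features(intensity, dt=1.0):
    """Per-voxel feature vectors of the kind described for SLATS: the
    time-intensity curve, its first derivative ('velocity'), its second
    derivative ('acceleration'), and their concatenation."""
    velocity = np.gradient(intensity, dt)
    acceleration = np.gradient(velocity, dt)
    composite = np.concatenate([intensity, velocity, acceleration])
    return intensity, velocity, acceleration, composite
```

    For a linearly enhancing voxel the velocity is constant and the acceleration vanishes, which is what lets derivative features separate enhancement kinetics.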

  19. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

    An improved-threshold, shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and reduce the pseudo-Gibbs artificial fluctuations in the signal. This algorithm was applied to a segmented gamma scanning system for large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system with the improved, shift-invariant and traditional wavelet transform algorithms was evaluated. The improved wavelet transform method generated significantly enhanced performance in the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in the segmented gamma scanning assays. Spectrum analysis also showed that the gamma energy spectrum can be viewed as a superposition of a low-frequency signal and high-frequency noise, and that the smoothed spectrum is suitable for straightforward automated quantitative analysis.
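
    The core wavelet-thresholding idea can be shown with a one-level Haar transform. This minimal sketch is not shift-invariant and uses a plain soft threshold rather than the paper's optimized threshold function:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-threshold operator applied to wavelet detail coefficients."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(signal, t):
    """One-level Haar wavelet de-noising: transform, keep the
    approximation band, soft-threshold the detail band, invert.
    The signal length must be even."""
    s = np.asarray(signal, float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = soft_threshold((s[0::2] - s[1::2]) / np.sqrt(2.0), t)
    out = np.empty_like(s)
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out
```

    With a zero threshold the reconstruction is exact; raising the threshold progressively averages out high-frequency fluctuations while preserving the low-frequency spectral shape.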

  20. An image segmentation based on a genetic algorithm for determining soil coverage by crop residues.

    PubMed

    Ribeiro, Angela; Ranz, Juan; Burgos-Artizzu, Xavier P; Pajares, Gonzalo; del Arco, Maria J Sanchez; Navarrete, Luis

    2011-01-01

    Determination of the soil coverage by crop residues after ploughing is a fundamental element of Conservation Agriculture. This paper presents the application of genetic algorithms employed during the fine tuning of the segmentation process of a digital image with the aim of automatically quantifying the residue coverage. In other words, the objective is to achieve a segmentation that would permit the discrimination of the texture of the residue so that the output of the segmentation process is a binary image in which residue zones are isolated from the rest. The RGB images used come from a sample of images in which sections of terrain were photographed with a conventional camera positioned in zenith orientation atop a tripod. The images were taken outdoors under uncontrolled lighting conditions. Up to 92% similarity was achieved between the images obtained by the segmentation process proposed in this paper and the templates made by an elaborate manual tracing process. In addition to the proposed segmentation procedure and the fine tuning procedure that was developed, a global quantification of the soil coverage by residues for the sampled area was achieved that differed by only 0.85% from the quantification obtained using template images. Moreover, the proposed method does not depend on the type of residue present in the image. The study was conducted at the experimental farm "El Encín" in Alcalá de Henares (Madrid, Spain).
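
    A toy version of such genetic-algorithm fine-tuning is sketched below; it is a generic real-valued GA, not the authors' operator set, and the quadratic fitness is a stand-in for a similarity score against manually traced templates.

```python
import random

def ga_tune(fitness, lo, hi, pop_size=20, generations=40, seed=1):
    """Tiny real-valued genetic algorithm: tournament selection, blend
    crossover and Gaussian mutation, maximizing `fitness` over the
    parameter interval [lo, hi]."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            mum = max(rng.sample(pop, 3), key=fitness)   # tournament of 3
            dad = max(rng.sample(pop, 3), key=fitness)
            child = (mum + dad) / 2 + rng.gauss(0, 0.02 * (hi - lo))
            nxt.append(min(hi, max(lo, child)))          # clamp to bounds
        pop = nxt
    return max(pop, key=fitness)

# Hypothetical surrogate fitness with its optimum at parameter 0.37.
best = ga_tune(lambda t: -(t - 0.37) ** 2, 0.0, 1.0)
```

    In the paper's setting, `fitness` would instead score the binary segmentation produced by a candidate parameter vector against the template images.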

  1. Image based hair segmentation algorithm for the application of automatic facial caricature synthesis.

    PubMed

    Shen, Yehu; Peng, Zhenyun; Zhang, Yaohui

    2014-01-01

    Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is one of the key components of automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic facial caricature synthesis based on a single image is proposed. First, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Second, the energy function of the test image is constructed according to the estimated prior distributions of hair location and the hair color likelihood. This energy function is then optimized with the graph cuts technique and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments show that, with the proposed hair segmentation algorithm, the facial caricatures are vivid and satisfying.

  2. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    PubMed Central

    Peng, Zhenyun; Zhang, Yaohui

    2014-01-01

    Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is one of the key components of automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic facial caricature synthesis based on a single image is proposed. First, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Second, the energy function of the test image is constructed according to the estimated prior distributions of hair location and the hair color likelihood. This energy function is then optimized with the graph cuts technique and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments show that, with the proposed hair segmentation algorithm, the facial caricatures are vivid and satisfying. PMID:24592182

  3. Automatic segmentation algorithm for the extraction of lumen region and boundary from endoscopic images.

    PubMed

    Tian, H; Srikanthan, T; Vijayan Asari, K

    2001-01-01

    A new segmentation algorithm for lumen region detection and boundary extraction from gastro-intestinal (GI) images is presented. The proposed algorithm consists of two steps. First, a preliminary region of interest (ROI) representing the GI lumen is segmented by an adaptive progressive thresholding (APT) technique. Then, an adaptive filter, the Iris filter, is applied to the ROI to determine the actual region. It has been observed that the combined APT-Iris filter technique can enhance and detect the unclear boundaries in the lumen region of GI images and thus produces a more accurate lumen region, compared with existing techniques. Experiments are carried out to determine the maximum error on the extracted boundary with respect to an expert-annotated boundary. Investigations show that, based on the experimental results obtained from 50 endoscopic images, the maximum error is reduced by up to 72 pixels for a 256 x 256 image representation compared with other existing techniques. In addition, a new boundary extraction algorithm, based on a heuristic search on the neighbourhood pixels, is employed to obtain a connected, single-pixel-width outer boundary using two preferential sequence windows. Experimental results are also presented to justify the effectiveness of the proposed algorithm.

  4. Fully Automated Complementary DNA Microarray Segmentation using a Novel Fuzzy-based Algorithm

    PubMed Central

    Saberkari, Hamidreza; Bahrami, Sheyda; Shamsi, Mousa; Amoshahy, Mohammad Javad; Ghavifekr, Habib Badri; Sedaaghi, Mohammad Hossein

    2015-01-01

    DNA microarrays are a powerful approach to simultaneously study the expression of thousands of genes in a single experiment. The average fluorescent intensity can be calculated in a microarray experiment, and the calculated intensity values closely track the expression levels of particular genes. However, determining the appropriate position of every spot in microarray images is a main challenge, one that governs the accurate classification of normal and abnormal (cancer) cells. In this paper, a preprocessing approach is first performed to eliminate the noise and artifacts present in microarray cells using the nonlinear anisotropic diffusion filtering method. Then, the coordinate center of each spot is positioned utilizing mathematical morphology operations. Finally, the position of each spot is exactly determined through applying a novel hybrid model based on principal component analysis and the spatial fuzzy c-means clustering (SFCM) algorithm. Using a Gaussian kernel in the SFCM algorithm improves the quality of complementary DNA microarray segmentation. The performance of the proposed algorithm has been evaluated on real microarray images available in the Stanford Microarray Database. Results illustrate that the accuracy of microarray cell segmentation with the proposed algorithm reaches 100% and 98% for noiseless and noisy cells, respectively. PMID:26284175

  5. Fully Automated Complementary DNA Microarray Segmentation using a Novel Fuzzy-based Algorithm.

    PubMed

    Saberkari, Hamidreza; Bahrami, Sheyda; Shamsi, Mousa; Amoshahy, Mohammad Javad; Ghavifekr, Habib Badri; Sedaaghi, Mohammad Hossein

    2015-01-01

    DNA microarrays are a powerful approach to simultaneously study the expression of thousands of genes in a single experiment. The average fluorescent intensity can be calculated in a microarray experiment, and the calculated intensity values closely track the expression levels of particular genes. However, determining the appropriate position of every spot in microarray images is a main challenge, one that governs the accurate classification of normal and abnormal (cancer) cells. In this paper, a preprocessing approach is first performed to eliminate the noise and artifacts present in microarray cells using the nonlinear anisotropic diffusion filtering method. Then, the coordinate center of each spot is positioned utilizing mathematical morphology operations. Finally, the position of each spot is exactly determined through applying a novel hybrid model based on principal component analysis and the spatial fuzzy c-means clustering (SFCM) algorithm. Using a Gaussian kernel in the SFCM algorithm improves the quality of complementary DNA microarray segmentation. The performance of the proposed algorithm has been evaluated on real microarray images available in the Stanford Microarray Database. Results illustrate that the accuracy of microarray cell segmentation with the proposed algorithm reaches 100% and 98% for noiseless and noisy cells, respectively.

  6. A novel supervised trajectory segmentation algorithm identifies distinct types of human adenovirus motion in host cells.

    PubMed

    Helmuth, Jo A; Burckhardt, Christoph J; Koumoutsakos, Petros; Greber, Urs F; Sbalzarini, Ivo F

    2007-09-01

    Biological trajectories can be characterized by transient patterns that may provide insight into the interactions of the moving object with its immediate environment. The accurate and automated identification of trajectory motifs is important for the understanding of the underlying mechanisms. In this work, we develop a novel trajectory segmentation algorithm based on supervised support vector classification. The algorithm is validated on synthetic data and applied to the identification of trajectory fingerprints of fluorescently tagged human adenovirus particles in live cells. In virus trajectories on the cell surface, periods of confined motion, slow drift, and fast drift are efficiently detected. Additionally, directed motion is found for viruses in the cytoplasm. The algorithm enables the linking of microscopic observations to molecular phenomena that are critical in many biological processes, including infectious pathogen entry and signal transduction.
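
    The abstract does not list the classifier's input features, so the sketch below uses a plausible, hypothetical windowed feature set for separating confined from directed motion; the support vector classification stage is omitted.

```python
import numpy as np

def straightness_features(traj, window=5):
    """Per-window trajectory features of the kind fed to a supervised
    classifier: net displacement, total path length, and their ratio
    (near 1 for directed motion, small for confined motion).
    traj is an (n, 2) array of positions."""
    feats = []
    for i in range(len(traj) - window):
        seg = traj[i:i + window + 1]
        net = np.linalg.norm(seg[-1] - seg[0])
        path = np.linalg.norm(np.diff(seg, axis=0), axis=1).sum()
        feats.append((net, path, net / path))
    return np.array(feats)

directed = np.array([[i, 0.0] for i in range(10)])   # straight drift
```

    A classifier trained on such windows then assigns each trajectory segment a motion label, yielding the transient motifs described above.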

  7. Hepatic Arterial Configuration in Relation to the Segmental Anatomy of the Liver; Observations on MDCT and DSA Relevant to Radioembolization Treatment

    SciTech Connect

    Hoven, Andor F. van den; Leeuwen, Maarten S. van; Lam, Marnix G. E. H.; Bosch, Maurice A. A. J. van den

    2015-02-15

    Purpose: Current anatomical classifications do not include all variants relevant for radioembolization (RE). The purpose of this study was to assess the individual hepatic arterial configuration and segmental vascularization pattern and to develop an individualized RE treatment strategy based on an extended classification. Methods: The hepatic vascular anatomy was assessed on MDCT and DSA in patients who received a workup for RE between February 2009 and November 2012. Reconstructed MDCT studies were assessed to determine the hepatic arterial configuration (origin of every hepatic arterial branch, branching pattern and anatomical course) and the hepatic segmental vascularization territory of all branches. Aberrant hepatic arteries were defined as hepatic arterial branches that did not originate from the celiac axis/CHA/PHA. Early branching patterns were defined as hepatic arterial branches originating from the celiac axis/CHA. Results: The hepatic arterial configuration and segmental vascularization pattern could be assessed in 110 of 133 patients. In 59 patients (54 %), no aberrant hepatic arteries or early branching was observed. Fourteen patients without aberrant hepatic arteries (13 %) had an early branching pattern. In the 37 patients (34 %) with aberrant hepatic arteries, five also had an early branching pattern. Sixteen different hepatic arterial segmental vascularization patterns were identified and described, differing by the presence of aberrant hepatic arteries, their respective vascular territory, and origin of the artery vascularizing segment four. Conclusions: The hepatic arterial configuration and segmental vascularization pattern show marked individual variability beyond well-known classifications of anatomical variants. We developed an individualized RE treatment strategy based on an extended anatomical classification.

  8. Improved centerline tree detection of diseased peripheral arteries with a cascading algorithm for vascular segmentation.

    PubMed

    Lidayová, Kristína; Frimmel, Hans; Bengtsson, Ewert; Smedby, Örjan

    2017-04-01

    Vascular segmentation plays an important role in the assessment of peripheral arterial disease. The segmentation is very challenging especially for arteries with severe stenosis or complete occlusion. We present a cascading algorithm for vascular centerline tree detection specializing in detecting centerlines in diseased peripheral arteries. It takes a three-dimensional computed tomography angiography (CTA) volume and returns a vascular centerline tree, which can be used for accelerating and facilitating the vascular segmentation. The algorithm consists of four levels, two of which detect healthy arteries of varying sizes and two that specialize in different types of vascular pathology: severe calcification and occlusion. We perform four main steps at each level: appropriate parameters for each level are selected automatically, a set of centrally located voxels is detected, these voxels are connected together based on the connection criteria, and the resulting centerline tree is corrected from spurious branches. The proposed method was tested on 25 CTA scans of the lower limbs, achieving an average overlap rate of 89% and an average detection rate of 82%. The average execution time using four CPU cores was 70 s, and the technique was successful also in detecting very distal artery branches, e.g., in the foot.

  9. Automatic Segmentation of Phalanx and Epiphyseal/Metaphyseal Region by Gamma Parameter Enhancement Algorithm

    NASA Astrophysics Data System (ADS)

    Hsieh, C. W.; Chen, C. Y.; Jong, T. L.; Liu, T. C.; Chiu, C. H.

    2012-01-01

    The performance of bone age assessment is highly correlated with the extraction of bony tissue from soft tissue, and the key problem is how to successfully separate the epiphyseal/metaphyseal regions of interest (EMROIs) from the background and soft tissue. In our experiment, a series of image preprocessing procedures is used to exclude the background and locate the EMROIs in left-hand radiographs. Subsequently, automatic gamma parameter enhancement is applied to test two segmentation methods (an adaptive two-means clustering algorithm and a gradient vector flow snake) among children of different ages (2 to 16 years; 80 girls and boys). Four error measurements (misclassification error, relative foreground area error, modified Hausdorff distance, and edge mismatch) are used to evaluate segmentation performance. The results show that the two segmentation algorithms correspond to different ranges of optimal gamma parameters. Furthermore, the margins of the EMROIs can be obtained more precisely by incorporating gamma parameter enhancement into an automatic bone age assessment method.
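
As a sketch of the enhancement step, gamma correction of a grayscale radiograph takes only a few lines of numpy; the [0, 255] range and the gamma values below are illustrative, not those selected by the authors' automatic procedure:

```python
import numpy as np

def gamma_enhance(img, gamma):
    """Apply gamma correction to a grayscale image with values in [0, 255]."""
    norm = img.astype(float) / 255.0        # normalize to [0, 1]
    return (norm ** gamma * 255.0).astype(np.uint8)

# gamma < 1 brightens midtones, gamma > 1 darkens them
img = np.array([[0, 64, 128, 255]], dtype=np.uint8)
bright = gamma_enhance(img, 0.5)
dark = gamma_enhance(img, 2.0)
```

Sweeping gamma and scoring the resulting segmentations against reference contours is one way to locate the optimal parameter range for each method.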

  10. A contiguity-enhanced k-means clustering algorithm for unsupervised multispectral image segmentation

    SciTech Connect

    Theiler, J.; Gisler, G.

    1997-07-01

    The recent and continuing construction of multi- and hyperspectral imagers will provide detailed data cubes with information in both the spatial and spectral domains. These data show great promise for remote sensing applications ranging from environmental and agricultural to national security interests. The reduction of this voluminous data to useful intermediate forms is necessary both for downlinking all those bits and for interpreting them. Smart onboard hardware is required, as well as sophisticated earth-bound processing. A segmented image (in which the multispectral data in each pixel are classified into one of a small number of categories) is one kind of intermediate form which provides some measure of data compression. Traditional image segmentation algorithms treat pixels independently and cluster the pixels according only to their spectral information. This neglects the implicit spatial information that is available in the image. We suggest a simple approach: a variant of the standard k-means algorithm which uses both spatial and spectral properties of the image. The segmented image has the property that pixels which are spatially contiguous are more likely to be in the same class than are random pairs of pixels. This property naturally comes at some cost in terms of the compactness of the clusters in the spectral domain, but we have found that the spatial contiguity and spectral compactness properties are nearly orthogonal, which means that we can make considerable improvements in the one with minimal loss in the other.
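
A minimal sketch of such a spatially aware k-means variant, assuming a small (H, W, B) data cube and a 4-neighbour contiguity bonus; the weighting scheme and deterministic initialization here are illustrative, not the authors' exact formulation:

```python
import numpy as np

def contiguity_kmeans(cube, k, beta=1.0, iters=10):
    """k-means on an (H, W, B) spectral cube with a contiguity bonus:
    a pixel's cost for a class is its squared spectral distance to the
    class centroid minus beta times the number of 4-neighbours already
    carrying that label, so spatially contiguous labellings are favoured."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(float)
    # deterministic init: centroids from evenly spaced pixels
    centroids = flat[np.linspace(0, h * w - 1, k).astype(int)].copy()
    labels = np.zeros((h, w), dtype=int)
    for _ in range(iters):
        # squared spectral distances, shape (H*W, k)
        dist = ((flat[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        # count 4-neighbours of each class via shifted label maps
        bonus = np.zeros((h, w, k))
        padded = np.pad(labels, 1, constant_values=-1)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            for c in range(k):
                bonus[:, :, c] += (nb == c)
        labels = (dist.reshape(h, w, k) - beta * bonus).argmin(-1)
        for c in range(k):
            members = flat[labels.reshape(-1) == c]
            if len(members):
                centroids[c] = members.mean(0)
    return labels
```

With beta = 0 this reduces to plain k-means; increasing beta trades spectral compactness for spatial contiguity, mirroring the near-orthogonality the abstract describes.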

  11. Thoracic cavity segmentation algorithm using multiorgan extraction and surface fitting in volumetric CT

    SciTech Connect

    Bae, JangPyo; Kim, Namkug; Lee, Sang Min; Seo, Joon Beom; Kim, Hee Chan

    2014-04-15

    Purpose: To develop and validate a semiautomatic segmentation method for thoracic cavity volumetry and mediastinum fat quantification of patients with chronic obstructive pulmonary disease. Methods: The thoracic cavity region was separated by segmenting multiorgans, namely, the rib, lung, heart, and diaphragm. To encompass various lung disease-induced variations, the inner thoracic wall and diaphragm were modeled by using a three-dimensional surface-fitting method. To improve the accuracy of the diaphragm surface model, the heart and its surrounding tissue were segmented by a two-stage level set method using a shape prior. To assess the accuracy of the proposed algorithm, the algorithm results of 50 patients were compared to the manual segmentation results of two experts with more than 5 years of experience (these manual results were confirmed by an expert thoracic radiologist). The proposed method was also compared to three state-of-the-art segmentation methods. The metrics used to evaluate segmentation accuracy were volumetric overlap ratio (VOR), false positive ratio on VOR (FPRV), false negative ratio on VOR (FNRV), average symmetric absolute surface distance (ASASD), average symmetric squared surface distance (ASSSD), and maximum symmetric surface distance (MSSD). Results: In terms of thoracic cavity volumetry, the mean ± SD VOR, FPRV, and FNRV of the proposed method were (98.17 ± 0.84)%, (0.49 ± 0.23)%, and (1.34 ± 0.83)%, respectively. The ASASD, ASSSD, and MSSD for the thoracic wall were 0.28 ± 0.12, 1.28 ± 0.53, and 23.91 ± 7.64 mm, respectively. The ASASD, ASSSD, and MSSD for the diaphragm surface were 1.73 ± 0.91, 3.92 ± 1.68, and 27.80 ± 10.63 mm, respectively. The proposed method performed significantly better than the other three methods in terms of VOR, ASASD, and ASSSD. Conclusions: The proposed semiautomatic thoracic cavity segmentation method, which extracts multiple organs (namely, the rib, thoracic wall, diaphragm, and heart
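
The volumetric overlap metrics can be computed directly from binary masks; the definitions below (Jaccard-style VOR, false positive/negative voxels normalized by the reference volume) are one plausible reading of the metric names, not necessarily the paper's exact formulas:

```python
import numpy as np

def overlap_metrics(seg, ref):
    """Volumetric overlap ratio (VOR) plus false positive and false
    negative ratios of a segmentation against a reference mask."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    vor = inter / np.logical_or(seg, ref).sum()       # overlap / union
    fprv = np.logical_and(seg, ~ref).sum() / ref.sum()  # extra voxels
    fnrv = np.logical_and(~seg, ref).sum() / ref.sum()  # missed voxels
    return vor, fprv, fnrv
```

The surface-distance metrics (ASASD, ASSSD, MSSD) additionally require extracting boundary voxels and computing nearest-neighbour distances in both directions.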

  12. Reconstruction-by-Dilation and Top-Hat Algorithms for Contrast Enhancement and Segmentation of Microcalcifications in Digital Mammograms

    SciTech Connect

    Diaz, Claudia C.

    2007-11-27

    I present some results of contrast enhancement and segmentation of microcalcifications in digital mammograms. These mammograms were obtained from the MIAS mini-database and digitized using a CR system. White top-hat and black top-hat transformations were used to improve the contrast of the images, while a reconstruction-by-dilation algorithm was used to emphasize the microcalcifications over the tissues. Segmentation was done using different gradient matrices. These algorithms are intended to reveal details that were not evident in the original images.
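
The white top-hat is the difference between an image and its morphological opening, which keeps small bright details (such as microcalcifications) brighter than the local background; a minimal grayscale sketch with scipy, where the structuring-element size is illustrative:

```python
import numpy as np
from scipy import ndimage as ndi

def white_top_hat(img, size=3):
    """White top-hat: image minus its grey opening; structures smaller
    than the structuring element survive, the smooth background does not."""
    opened = ndi.grey_opening(img, size=(size, size))
    return img - opened

# a single bright speck on a flat background is isolated by the transform
img = np.full((5, 5), 10, dtype=int)
img[2, 2] = 200
speck = white_top_hat(img)
```

The black top-hat is the dual (closing minus image) and emphasizes small dark details instead.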

  13. [Layer-dependent multi-constrained algorithms based on improved level set for segmentation of teeth MRI-UTE image].

    PubMed

    Zheng, Caixian; Xu, Xiu; Wang, Cheng; Ye, Xiuxia

    2013-07-01

    To achieve effective segmentation of teeth in MRI-UTE images, a second-segmentation algorithm process based on a layer-dependent multi-constrained method was constructed. Firstly, a level set method was used to segment the initial boundary from the region determined by the user in the reference slice. Secondly, both the crown and root of the tooth were segmented by an improved level set method that took the result of the former layer as a constraint condition. Finally, the improved level set, based on the results of both the former and later layers, was executed a second time to improve the accuracy of segmentation, with the overlapping ratio considered as a parameter. The accuracy was 86.98% for the first segmentation and increased to 88.35% for the second segmentation. Compared with two other methods, the accuracy of the proposed algorithms was improved significantly (P < 0.05). The proposed algorithms can effectively segment teeth in MRI-UTE images with a marked improvement in accuracy.

  14. A Fast Segmentation Algorithm for the C-V Model Based on Exponential Image Sequence Generation

    NASA Astrophysics Data System (ADS)

    Hu, J.; Lu, L.; Xu, J.; Zhang, J.

    2017-09-01

    For island coastline segmentation, this paper proposes a fast segmentation algorithm for the C-V (Chan-Vese) model based on exponential image sequence generation. An exponential multi-scale C-V model with level set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" in the extracted coastline are solved through small-scale shrinkage, low-pass filtering, and region area sorting; 2) the initial values of the SDF (signed distance function) and the level set are obtained by Otsu segmentation, exploiting the difference in SAR reflection between land and sea, so that the initialization lies close to the coastline; 3) the computational complexity of the transition between scales is reduced by inheriting the SDF and the level set. Experimental results show that the method accelerates the formation of the initial level set, shortens the time needed for coastline extraction, removes non-coastline bodies, and improves the identification precision of the main coastline, automating the process of coastline segmentation.
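
Otsu's method, used here to initialize the level set, picks the threshold that maximizes the between-class variance of the intensity histogram; a minimal histogram-based sketch:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the threshold that maximises between-class variance
    w0 * w1 * (mu0 - mu1)^2 over all histogram split points."""
    hist, edges = np.histogram(img, bins=nbins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, nbins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:i] * centers[:i]).sum() / w0  # class means
        mu1 = (hist[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t
```

For SAR data the land/sea reflectance difference makes the histogram strongly bimodal, so the Otsu split lands near the coastline and gives the level set a good starting contour.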

  15. Evaluation of automatic neonatal brain segmentation algorithms: the NeoBrainS12 challenge.

    PubMed

    Išgum, Ivana; Benders, Manon J N L; Avants, Brian; Cardoso, M Jorge; Counsell, Serena J; Gomez, Elda Fischi; Gui, Laura; Hűppi, Petra S; Kersbergen, Karina J; Makropoulos, Antonios; Melbourne, Andrew; Moeskops, Pim; Mol, Christian P; Kuklisova-Murgasova, Maria; Rueckert, Daniel; Schnabel, Julia A; Srhoj-Egekher, Vedran; Wu, Jue; Wang, Siying; de Vries, Linda S; Viergever, Max A

    2015-02-01

    A number of algorithms for brain segmentation in preterm born infants have been published, but a reliable comparison of their performance is lacking. The NeoBrainS12 study (http://neobrains12.isi.uu.nl), providing three different image sets of preterm born infants, was set up to provide such a comparison. These sets are (i) axial scans acquired at 40 weeks corrected age, (ii) coronal scans acquired at 30 weeks corrected age and (iii) coronal scans acquired at 40 weeks corrected age. Each of these three sets consists of three T1- and T2-weighted MR images of the brain acquired with a 3T MRI scanner. The task was to segment cortical grey matter, non-myelinated and myelinated white matter, brainstem, basal ganglia and thalami, cerebellum, and cerebrospinal fluid in the ventricles and in the extracerebral space separately. Any team could upload the results and all segmentations were evaluated in the same way. This paper presents the results of eight participating teams. The results demonstrate that the participating methods were able to segment all tissue classes well, except myelinated white matter.
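
Segmentation challenges of this kind typically score submissions with overlap measures such as the Dice similarity coefficient, shown here as an illustration (the challenge's exact metric set may differ):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), 1.0 for identical masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```

Computing the coefficient per tissue class (cortical grey matter, myelinated white matter, and so on) lets every team's upload be evaluated in exactly the same way.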

  16. Numerical arc segmentation algorithm for a radio conference: A software tool for communication satellite systems planning

    NASA Astrophysics Data System (ADS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.
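
The grouping step can be sketched as a greedy clique-cover heuristic over a pairwise compatibility matrix; this toy version ignores arc locations and is only meant to illustrate the idea of exhaustively packing mutually compatible administrations into groups:

```python
def greedy_compatible_groups(compat):
    """Greedily partition items into groups in which every pair is
    pairwise compatible (compat[i][j] truthy). A simple clique-cover
    heuristic, not NASARC's actual enumeration procedure."""
    n = len(compat)
    unassigned = list(range(n))
    groups = []
    while unassigned:
        seed = unassigned.pop(0)
        group = [seed]
        for j in list(unassigned):
            # admit j only if it is compatible with every current member
            if all(compat[j][m] for m in group):
                group.append(j)
                unassigned.remove(j)
        groups.append(group)
    return groups
```

NASARC additionally records the arc segment over which each compatible group exists and selects among all groupings heuristically, which this sketch omits.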

  17. Numerical arc segmentation algorithm for a radio conference: A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.

  18. A color and shape based algorithm for segmentation of white blood cells in peripheral blood and bone marrow images.

    PubMed

    Arslan, Salim; Ozyurek, Emel; Gunduz-Demir, Cigdem

    2014-06-01

    Computer-based imaging systems are becoming important tools for quantitative assessment of peripheral blood and bone marrow samples to help experts diagnose blood disorders such as acute leukemia. These systems generally initiate a segmentation stage where white blood cells are separated from the background and other nonsalient objects. As the success of such imaging systems mainly depends on the accuracy of this stage, studies attach great importance to developing accurate segmentation algorithms. Although previous studies give promising results for segmentation of sparsely distributed normal white blood cells, only a few of them focus on segmenting touching and overlapping cell clusters, which is usually the case when leukemic cells are present. In this article, we present a new algorithm for segmentation of both normal and leukemic cells in peripheral blood and bone marrow images. In this algorithm, we propose to model color and shape characteristics of white blood cells by defining two transformations and introduce an efficient use of these transformations in a marker-controlled watershed algorithm. Particularly, these domain specific characteristics are used to identify markers and define the marking function of the watershed algorithm as well as to eliminate false white blood cells in a postprocessing step. Working on 650 white blood cells in peripheral blood and bone marrow images, our experiments reveal that the proposed algorithm improves the segmentation performance compared with its counterparts, leading to high accuracies for both sparsely distributed normal white blood cells and dense leukemic cell clusters. © 2014 International Society for Advancement of Cytometry.

  19. Comparative Local Quality Assessment of 3D Medical Image Segmentations with Focus on Statistical Shape Model-Based Algorithms.

    PubMed

    Landesberger, Tatiana von; Basgier, Dennis; Becker, Meike

    2016-12-01

    The quality of automatic 3D medical segmentation algorithms needs to be assessed on test datasets comprising several 3D images (i.e., instances of an organ). The experts need to compare the segmentation quality across the dataset in order to detect systematic segmentation problems. However, such comparative evaluation is not supported well by current methods. We present a novel system for assessing and comparing segmentation quality in a dataset with multiple 3D images. The data is analyzed and visualized in several views. We detect and show regions with systematic segmentation quality characteristics. For this purpose, we extended a hierarchical clustering algorithm with a connectivity criterion. We combine quality values across the dataset for determining regions with characteristic segmentation quality across instances. Using our system, the experts can also identify 3D segmentations with extraordinary quality characteristics. While we focus on algorithms based on statistical shape models, our approach can also be applied to cases, where landmark correspondences among instances can be established. We applied our approach to three real datasets: liver, cochlea and facial nerve. The segmentation experts were able to identify organ regions with systematic segmentation characteristics as well as to detect outlier instances.
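
Hierarchical clustering with a connectivity criterion can be sketched in one dimension by allowing only spatially adjacent segments to merge; this toy analogue illustrates the constraint, not the authors' actual implementation:

```python
def connected_agglomerative(values, n_clusters):
    """Agglomerative clustering of a 1-D sequence of quality values in
    which only spatially adjacent segments may merge (the connectivity
    criterion), stopping at n_clusters segments."""
    segments = [[i] for i in range(len(values))]
    means = [float(v) for v in values]
    while len(segments) > n_clusters:
        # merge the adjacent pair whose means are closest
        i = min(range(len(segments) - 1),
                key=lambda j: abs(means[j] - means[j + 1]))
        merged = segments[i] + segments[i + 1]
        total = means[i] * len(segments[i]) + means[i + 1] * len(segments[i + 1])
        segments[i:i + 2] = [merged]
        means[i:i + 2] = [total / len(merged)]
    return segments
```

On a surface mesh the same idea applies with mesh adjacency in place of 1-D adjacency, so that each cluster stays a connected region with consistent quality values.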

  20. Digital Terrain from a Two-Step Segmentation and Outlier-Based Algorithm

    NASA Astrophysics Data System (ADS)

    Hingee, Kassel; Caccetta, Peter; Caccetta, Louis; Wu, Xiaoliang; Devereaux, Drew

    2016-06-01

    We present a novel ground filter for remotely sensed height data. Our filter has two phases: the first phase segments the DSM with a slope threshold and uses gradient direction to identify candidate ground segments; the second phase fits surfaces to the candidate ground points and removes outliers. Digital terrain is obtained by a surface fit to the final set of ground points. We tested the new algorithm on digital surface models (DSMs) for a 9600 km2 region around Perth, Australia. This region contains a large mix of land uses (urban, grassland, native forest and plantation forest) and includes both a sandy coastal plain and a hillier region (elevations up to 0.5 km). The DSMs are captured annually at 0.2 m resolution using aerial stereo photography, resulting in 1.2 TB of input data per annum. Overall accuracy of the filter was estimated to be 89.6% and on a small semi-rural subset our algorithm was found to have 40% fewer errors compared to Inpho's Match-T algorithm.
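
The second phase, fitting a surface to candidate ground points and removing outliers, can be sketched as an iteratively refit plane with residual-based rejection; the planar model and the 2-sigma rejection rule are illustrative simplifications of a real terrain surface fit:

```python
import numpy as np

def fit_ground_plane(points, n_iter=5, k=2.0):
    """Fit z = a*x + b*y + c to candidate ground points, iteratively
    discarding points whose residual exceeds k standard deviations
    (e.g. vegetation or building returns left over from phase one)."""
    pts = np.asarray(points, dtype=float)
    for _ in range(n_iter):
        A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
        coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
        resid = pts[:, 2] - A @ coeffs
        keep = np.abs(resid) <= k * max(resid.std(), 1e-12)
        if keep.all():
            break
        pts = pts[keep]
    return coeffs, pts
```

A production filter would use a locally varying surface (e.g. a spline or TIN) rather than a single plane, but the fit/reject/refit loop is the same.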

  1. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers.

    PubMed

    Milankovic, Ivan L; Mijailovic, Nikola V; Filipovic, Nenad D; Peulic, Aleksandar S

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. A breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that existing image databases hold large sets of data whose processing requires a lot of time, so accelerating each of the processing stages in breast cancer detection is an important issue. In this paper, an implementation of an existing algorithm for region-of-interest based image segmentation of mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As the dataflow engine (DFE) of such an HPRDC, Maxeler's acceleration card is used. Experiments examining the acceleration of that algorithm on Reconfigurable Dataflow Computers (RDCs) were performed with two types of mammogram images of different resolutions. Several DFE configurations were tested, each giving a different acceleration of algorithm execution; these acceleration values are presented, and the experimental results show good acceleration.

  2. New second-order difference algorithm for image segmentation based on cellular neural networks (CNNs)

    NASA Astrophysics Data System (ADS)

    Meng, Shukai; Mo, Yu L.

    2001-09-01

    Image segmentation is one of the most important operations in many image analysis problems; it is the process that subdivides an image into its constituents and extracts the parts of interest. In this paper, we present a new second-order difference gray-scale image segmentation algorithm based on cellular neural networks. A 3x3 CNN cloning template is applied, which smooths the image and handles the trade-off between noise resistance and the detection of complex edge shapes. We use a second-order difference operator to calculate the coefficients of the control template, which are not constant but depend on the input gray-scale values. The network is similar in construction to the Contour Extraction CNN, but the algorithm differs in several respects. Experimental results show that the second-order difference CNN has a good capability for edge detection: it is better than the Contour Extraction CNN at detecting detail and more effective than the Laplacian of Gaussian (LoG) algorithm.
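
The second-order difference at the heart of the template is the discrete Laplacian; a plain-numpy sketch of Laplacian-magnitude edge marking, where the thresholding rule is an illustrative stand-in for the CNN dynamics:

```python
import numpy as np

# 3x3 discrete Laplacian (second-order difference) kernel
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def second_order_edges(img, thresh):
    """Mark interior pixels where the magnitude of the discrete
    Laplacian exceeds a threshold."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = abs((patch * LAPLACIAN).sum()) > thresh
    return out
```

In the CNN formulation the same second-order differences enter the control template coefficients, so the response adapts to the local gray-scale values instead of using a fixed threshold.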

  3. Quantitative segmentation of fluorescence microscopy images of heterogeneous tissue: Approach for tuning algorithm parameters

    NASA Astrophysics Data System (ADS)

    Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi

    2013-02-01

    The combination of fluorescent contrast agents with microscopy is a powerful technique to obtain real time images of tissue histology without the need for fixing, sectioning, and staining. The potential of this technology lies in the identification of robust methods for image segmentation and quantitation, particularly in heterogeneous tissues. Our solution is to apply sparse decomposition (SD) to monochrome images of fluorescently-stained microanatomy to segment and quantify distinct tissue types. The clinical utility of our approach is demonstrated by imaging excised margins in a cohort of mice after surgical resection of a sarcoma. Representative images of excised margins were used to optimize the formulation of SD and tune parameters associated with the algorithm. Our results demonstrate that SD is a robust solution that can advance vital fluorescence microscopy as a clinically significant technology.

  4. A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.

    PubMed

    Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F

    2012-09-01

    Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.

  5. An improved K-means clustering algorithm in agricultural image segmentation

    NASA Astrophysics Data System (ADS)

    Cheng, Huifeng; Peng, Hui; Liu, Shanmei

    Image segmentation is the first important step in image analysis and image processing. In this paper, based on the characteristics of color crop images, we first transform the image color space from RGB to HSI, and then select proper initial clustering centers and the cluster number using a mean-variance approach and rough set theory, followed by clustering, so as to rapidly and automatically segment color components and accurately extract target objects from the background. This provides a reliable basis for identification, analysis, and follow-up processing of crop images. Experimental results demonstrate that the improved k-means clustering algorithm reduces the amount of computation and enhances the precision and accuracy of clustering.
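
The RGB-to-HSI conversion used as the first step follows the standard geometric formulas; a single-pixel sketch, with components assumed normalized to [0, 1] and hue returned in degrees:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (components in [0, 1]) to (H, S, I):
    intensity is the channel mean, saturation measures distance from
    grey, and hue is the angle of the dominant colour."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(
        math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:                    # hue lies in the lower half-plane
        h = 360.0 - h
    return h, s, i
```

Clustering in HSI rather than RGB helps because hue separates green crop foliage from soil background largely independently of illumination.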

  6. iCut: an Integrative Cut Algorithm Enables Accurate Segmentation of Touching Cells.

    PubMed

    He, Yong; Gong, Hui; Xiong, Benyi; Xu, Xiaofeng; Li, Anan; Jiang, Tao; Sun, Qingtao; Wang, Simin; Luo, Qingming; Chen, Shangbin

    2015-07-14

    Individual cells play essential roles in the biological processes of the brain. The number of neurons changes during both normal development and disease progression. High-resolution imaging has made it possible to directly count cells. However, the automatic and precise segmentation of touching cells continues to be a major challenge for massive and highly complex datasets. Thus, an integrative cut (iCut) algorithm, which combines information regarding spatial location and intervening and concave contours with the established normalized cut, has been developed. iCut involves two key steps: (1) a weighting matrix is first constructed with the abovementioned information regarding the touching cells and (2) a normalized cut algorithm that uses the weighting matrix is implemented to separate the touching cells into isolated cells. This novel algorithm was evaluated using two types of data: the open SIMCEP benchmark dataset and our micro-optical imaging dataset from a Nissl-stained mouse brain. It has achieved a promising recall/precision of 91.2 ± 2.1%/94.1 ± 1.8% and 86.8 ± 4.1%/87.5 ± 5.7%, respectively, for the two datasets. As quantified using the harmonic mean of recall and precision, the accuracy of iCut is higher than that of some state-of-the-art algorithms. The better performance of this fully automated algorithm can benefit studies of brain cytoarchitecture.

  7. iCut: an Integrative Cut Algorithm Enables Accurate Segmentation of Touching Cells

    PubMed Central

    He, Yong; Gong, Hui; Xiong, Benyi; Xu, Xiaofeng; Li, Anan; Jiang, Tao; Sun, Qingtao; Wang, Simin; Luo, Qingming; Chen, Shangbin

    2015-01-01

    Individual cells play essential roles in the biological processes of the brain. The number of neurons changes during both normal development and disease progression. High-resolution imaging has made it possible to directly count cells. However, the automatic and precise segmentation of touching cells continues to be a major challenge for massive and highly complex datasets. Thus, an integrative cut (iCut) algorithm, which combines information regarding spatial location and intervening and concave contours with the established normalized cut, has been developed. iCut involves two key steps: (1) a weighting matrix is first constructed with the abovementioned information regarding the touching cells and (2) a normalized cut algorithm that uses the weighting matrix is implemented to separate the touching cells into isolated cells. This novel algorithm was evaluated using two types of data: the open SIMCEP benchmark dataset and our micro-optical imaging dataset from a Nissl-stained mouse brain. It has achieved a promising recall/precision of 91.2 ± 2.1%/94.1 ± 1.8% and 86.8 ± 4.1%/87.5 ± 5.7%, respectively, for the two datasets. As quantified using the harmonic mean of recall and precision, the accuracy of iCut is higher than that of some state-of-the-art algorithms. The better performance of this fully automated algorithm can benefit studies of brain cytoarchitecture. PMID:26168908

  8. Automatic segmentation of the liver using multi-planar anatomy and deformable surface model in abdominal contrast-enhanced CT images

    NASA Astrophysics Data System (ADS)

    Jang, Yujin; Hong, Helen; Chung, Jin Wook; Yoon, Young Ho

    2012-02-01

    We propose an effective technique for the extraction of the liver boundary based on multi-planar anatomy and a deformable surface model in abdominal contrast-enhanced CT images. Our method is composed of four main steps. First, to extract an optimal volume circumscribing the liver, the lower and side boundaries are defined by the positional information of the pelvis and ribs, and an upper boundary is defined by separating the lungs and heart from the CT images. Second, to extract an initial liver volume, the optimal liver volume is smoothed by anisotropic diffusion filtering and segmented using an adaptively selected threshold value. Third, to remove neighboring organs from the initial liver volume, morphological opening and connected component labeling are applied to multiple planes. Finally, to refine the liver boundaries, a deformable surface model is applied to the posterior liver surface and the left lobe missed in the previous step. A probability summation map is then generated by calculating regional information of the segmented liver in the coronal plane, which is used for restoring inaccurate liver boundaries. Experimental results show that our segmentation method can accurately extract liver boundaries without leakage into neighboring organs in spite of varied liver shapes and ambiguous boundaries.
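
The connected component step can be sketched with scipy: label the thresholded binary mask and keep only the largest component, discarding smaller structures such as touching neighbour organs:

```python
import numpy as np
from scipy import ndimage as ndi

def largest_component(mask):
    """Keep only the largest connected component of a binary mask."""
    labeled, n = ndi.label(mask)        # label connected regions 1..n
    if n == 0:
        return mask
    sizes = ndi.sum(mask, labeled, range(1, n + 1))  # voxels per region
    return labeled == (np.argmax(sizes) + 1)
```

Applying this per plane, as the paper does, lets thin connections between the liver and adjacent organs be broken by the preceding morphological opening before the largest component is retained.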

  9. Infrared active polarimetric imaging system controlled by image segmentation algorithms: application to decamouflage

    NASA Astrophysics Data System (ADS)

    Vannier, Nicolas; Goudail, François; Plassart, Corentin; Boffety, Matthieu; Feneyrou, Patrick; Leviandier, Luc; Galland, Frédéric; Bertaux, Nicolas

    2016-05-01

    We describe an active polarimetric imager with laser illumination at 1.5 µm that can generate any illumination and analysis polarization state on the Poincaré sphere. Thanks to its full polarization agility and to image analysis of the scene with an ultrafast active-contour based segmentation algorithm, it can perform adaptive polarimetric contrast optimization. We demonstrate the capacity of this imager to detect manufactured objects in different types of environments for such applications as decamouflage and hazardous object detection. We compare two imaging modes having different numbers of polarimetric degrees of freedom and underline the characteristics that a polarimetric imager aimed at this type of application should possess.

  10. An automatic multi-lead electrocardiogram segmentation algorithm based on abrupt change detection.

    PubMed

    Illanes-Manriquez, Alfredo

    2010-01-01

    Automatic detection of electrocardiogram (ECG) waves provides important information for cardiac disease diagnosis. In this paper a new algorithm is proposed for automatic ECG segmentation based on multi-lead ECG processing. Two auxiliary signals are computed from the first and second derivatives of several ECG lead signals. One auxiliary signal is used for R peak detection and the other for ECG wave delimitation. A statistical hypothesis test is finally applied to one of the auxiliary signals in order to detect abrupt changes in the mean. Preliminary experimental results show that the detected mean-change instants coincide with the boundaries of the ECG waves.
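
A derivative-based auxiliary signal for R-peak detection can be sketched as follows; the squared first difference and the 200 ms refractory window are illustrative choices, not the paper's exact auxiliary signals or hypothesis test:

```python
import numpy as np

def r_peaks(ecg, fs, thresh_frac=0.5):
    """Detect R peaks via a derivative-based auxiliary signal: square
    the first difference to emphasise the steep QRS slopes, then pick
    local maxima above a fraction of the global maximum."""
    aux = np.diff(ecg) ** 2
    thresh = thresh_frac * aux.max()
    refractory = int(0.2 * fs)   # ignore peaks within 200 ms of the last
    peaks = []
    for i in range(1, len(aux) - 1):
        if aux[i] > thresh and aux[i] >= aux[i - 1] and aux[i] >= aux[i + 1]:
            if not peaks or i - peaks[-1] > refractory:
                peaks.append(i)
    return peaks
```

Combining such auxiliary signals across several leads, as the paper proposes, makes the detection robust to noise or low amplitude in any single lead.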

  11. 3-D Ultrasound Segmentation of the Placenta Using the Random Walker Algorithm: Reliability and Agreement.

    PubMed

    Stevenson, Gordon N; Collins, Sally L; Ding, Jane; Impey, Lawrence; Noble, J Alison

    2015-12-01

    Volumetric segmentation of the placenta using 3-D ultrasound is currently performed clinically to investigate correlation between organ volume and fetal outcome or pathology. Previously, interpolative or semi-automatic contour-based methodologies were used to provide volumetric results. We describe the validation of an original random walker (RW)-based algorithm against manual segmentation and an existing semi-automated method, virtual organ computer-aided analysis (VOCAL), using initialization time, inter- and intra-observer variability of volumetric measurements and quantification accuracy (with respect to manual segmentation) as metrics of success. Both semi-automatic methods require initialization. Therefore, the first experiment compared initialization times. Initialization was timed by one observer using 20 subjects. This revealed significant differences (p < 0.001) in time taken to initialize the VOCAL method compared with the RW method. In the second experiment, 10 subjects were used to analyze intra-/inter-observer variability between two observers. Bland-Altman plots were used to analyze variability combined with intra- and inter-observer variability measured by intra-class correlation coefficients, which were reported for all three methods. Intra-class correlation coefficient values for intra-observer variability were higher for the RW method than for VOCAL, and both were similar to manual segmentation. Inter-observer variability was 0.94 (0.88, 0.97), 0.91 (0.81, 0.95) and 0.80 (0.61, 0.90) for manual, RW and VOCAL, respectively. Finally, a third observer with no prior ultrasound experience was introduced and volumetric differences from manual segmentation were reported. Dice similarity coefficients for observers 1, 2 and 3 were respectively 0.84 ± 0.12, 0.94 ± 0.08 and 0.84 ± 0.11, and the mean was 0.87 ± 0.13. The RW algorithm was found to provide results concordant with those for manual segmentation and to outperform VOCAL in aspects of observer

  12. Optimized adaptation algorithm for HEVC/H.265 dynamic adaptive streaming over HTTP using variable segment duration

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2016-04-01

    Adaptive video streaming using HTTP has become popular in recent years for commercial video delivery. The recent MPEG-DASH standard allows interoperability and adaptability between servers and clients from different vendors. The delivery of the MPD (Media Presentation Description) files in DASH and the DASH client behaviours are beyond the scope of the DASH standard. However, the different adaptation algorithms employed by the clients do affect the overall performance of the system and users' QoE (Quality of Experience), hence the need for research in this field. Moreover, standard DASH delivery is based on fixed segments of the video. However, there is no standard segment duration for DASH where various fixed segment durations have been employed by different commercial solutions and researchers with their own individual merits. Most recently, the use of variable segment duration in DASH has emerged but only a few preliminary studies without practical implementation exist. In addition, such a technique requires a DASH client to be aware of segment duration variations, and this requirement and the corresponding implications on the DASH system design have not been investigated. This paper proposes a segment-duration-aware bandwidth estimation and next-segment selection adaptation strategy for DASH. Firstly, an MPD file extension scheme to support variable segment duration is proposed and implemented in a realistic hardware testbed. The scheme is tested on a DASH client, and the tests and analysis have led to an insight on the time to download next segment and the buffer behaviour when fetching and switching between segments of different playback durations. Issues like sustained buffering when switching between segments of different durations and slow response to changing network conditions are highlighted and investigated. An enhanced adaptation algorithm is then proposed to accurately estimate the bandwidth and precisely determine the time to download the next
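    A minimal sketch of what "segment-duration-aware" adaptation means in practice (the bitrates, safety margin, and buffer thresholds are illustrative assumptions, not values from the paper):

```python
def ewma_bandwidth(prev_estimate, seg_bits, download_s, alpha=0.3):
    # Throughput sample uses the segment's actual size and download time,
    # so variable-duration segments are measured correctly.
    sample = seg_bits / download_s
    return alpha * sample + (1 - alpha) * prev_estimate

def select_next_bitrate(bitrates, est_bw, buffer_s, next_seg_dur,
                        safety=0.8, min_buffer_s=4.0):
    # Pick the highest bitrate whose expected download keeps the buffer
    # above a floor; the expected download time scales with the *next*
    # segment's playback duration, which may differ from the last one.
    for b in sorted(bitrates, reverse=True):
        expected_dl = b * next_seg_dur / (est_bw * safety)
        if buffer_s - expected_dl >= min_buffer_s:
            return b
    return min(bitrates)

bw = ewma_bandwidth(3e6, seg_bits=8e6, download_s=2.0)   # sample = 4 Mbps
choice = select_next_bitrate([1e6, 2.5e6, 5e6], est_bw=4e6,
                             buffer_s=10.0, next_seg_dur=2.0)
```

    With a healthy buffer the highest representation fits; shrink `buffer_s` and the same logic backs off to a lower bitrate.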

  13. Segmentation algorithm via Cellular Neural/Nonlinear Network: implementation on Bio-inspired hardware platform

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Vecchio, Pietro; Grassi, Giuseppe

    2011-12-01

    The Bio-inspired (Bi-i) Cellular Vision System is a computing platform consisting of sensing, array sensing-processing, and digital signal processing. The platform is based on the Cellular Neural/Nonlinear Network (CNN) paradigm. This article presents the implementation of a novel CNN-based segmentation algorithm on the Bi-i system. Each part of the algorithm, along with the corresponding implementation on the hardware platform, is carefully described throughout the article. The experiments, carried out on the Foreman and Car-phone video sequences, highlight the feasibility of the approach, which provides a frame rate of about 26 frames/s. Comparisons with existing CNN-based methods show that the proposed approach is more accurate, thus representing a good trade-off between real-time requirements and accuracy.

  14. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study

    PubMed Central

    Rudyanto, Rina D.; Kerkstra, Sjoerd; van Rikxoort, Eva M.; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, İlkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C.; Washko, George R.; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C.; Fabijanska, Anna; Smistad, Erik; Elster, Anne C.; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J.; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G.H.; Campo, Arantza; Prokop, Mathias; de Jong, Pim A.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram

    2016-01-01

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases. PMID:25113321

  15. A comparison of supervised machine learning algorithms and feature vectors for MS lesion segmentation using multimodal structural MRI.

    PubMed

    Sweeney, Elizabeth M; Vogelstein, Joshua T; Cuzzocreo, Jennifer L; Calabresi, Peter A; Reich, Daniel S; Crainiceanu, Ciprian M; Shinohara, Russell T

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance.
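    The study's central comparison (classifier choice vs. feature choice) can be sketched on synthetic data; the 1-D "scan", lesion placement, and neighbourhood radius below are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)

rng = np.random.default_rng(0)
# Synthetic 1-D "scan": lesion voxels have a raised mean intensity.
truth = np.zeros(3000, dtype=int)
truth[500:700] = truth[1800:2100] = 1
intensity = truth * 1.0 + rng.normal(0, 1.0, truth.size)

def neighbourhood_features(x, radius=5):
    # Stack each voxel's intensity with its local mean: the kind of
    # neighbouring-voxel feature the study found to matter most.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.column_stack([x, np.convolve(x, kernel, mode="same")])

X = neighbourhood_features(intensity)
split = 2000
models = {
    "logistic": LogisticRegression(),
    "lda": LinearDiscriminantAnalysis(),
    "qda": QuadraticDiscriminantAnalysis(),
}
acc = {}
for name, model in models.items():
    model.fit(X[:split], truth[:split])
    acc[name] = model.score(X[split:], truth[split:])
```

    Swapping the feature function (raw intensity vs. intensity plus local mean) typically moves the accuracies far more than swapping among these three classifiers, which is the paper's point.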

  16. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study.

    PubMed

    Rudyanto, Rina D; Kerkstra, Sjoerd; van Rikxoort, Eva M; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, Ilkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C; Washko, George R; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C; Fabijanska, Anna; Smistad, Erik; Elster, Anne C; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G H; Campo, Arantza; Prokop, Mathias; de Jong, Pim A; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram

    2014-10-01

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.

  17. A quantum mechanics-based algorithm for vessel segmentation in retinal images

    NASA Astrophysics Data System (ADS)

    Youssry, Akram; El-Rafei, Ahmed; Elramly, Salwa

    2016-06-01

    Blood vessel segmentation is an important step in retinal image analysis. It is one of the steps required for computer-aided detection of ophthalmic diseases. In this paper, a novel quantum mechanics-based algorithm for retinal vessel segmentation is presented. The algorithm consists of three major steps. The first step is the preprocessing of the images to prepare them for further processing. The second step is feature extraction, where a set of four features is generated at each image pixel. These features are then combined using a nonlinear transformation for dimensionality reduction. The final step applies a recently proposed quantum mechanics-based framework for image processing. In this step, pixels are mapped to quantum systems that are allowed to evolve from an initial state to a final state governed by Schrödinger's equation. The evolution is controlled by the Hamiltonian operator, which is a function of the extracted features at each pixel. A measurement step is then performed to determine whether the pixel belongs to the vessel or non-vessel class. Several functional forms of the Hamiltonian were proposed, and the best-performing form was selected. The algorithm is tested on the publicly available DRIVE database. The average results for sensitivity, specificity, and accuracy are 80.29, 97.34, and 95.83 %, respectively. These results are compared with some recently published techniques, showing the superior performance of the proposed method. Finally, the implementation of the algorithm on a quantum computer and the challenges facing this implementation are introduced.
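    The evolve-then-measure step can be illustrated with a single two-level system; the Hamiltonian below (a scalar feature times the Pauli-X operator) is one illustrative choice, not one of the functional forms evaluated in the paper:

```python
import numpy as np
from scipy.linalg import expm

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

def vessel_probability(feature, t=1.0):
    # Map a pixel's (scalar) feature to a Hamiltonian H = feature * sigma_x,
    # evolve the initial state |0> under Schrodinger's equation (hbar = 1),
    # and return the probability of measuring |1> ("vessel").
    H = feature * sigma_x
    U = expm(-1j * H * t)                      # time-evolution operator
    psi = U @ np.array([1, 0], dtype=complex)  # evolved state
    return abs(psi[1]) ** 2
```

    For this choice the measurement probability is sin²(feature · t), so a larger (feature-driven) coupling rotates the pixel's state further toward the vessel class.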

  18. Segments.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Presents a market taxonomy for higher education, including what it reveals about the structure of the market, the model's technical attributes, and its capacity to explain pricing behavior. Details the identification of the principal seams separating one market segment from another and how student aspirations help to organize the market, making…

  19. An Algorithm for Obtaining the Distribution of 1-Meter Lightning Channel Segment Altitudes for Application in Lightning NOx Production Estimation

    NASA Technical Reports Server (NTRS)

    Peterson, Harold; Koshak, William J.

    2009-01-01

    An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousand lightning flashes within 120 km of the NALMA network centroid were gathered across all four seasons and analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.
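    The core discretisation step can be sketched as follows (a straight channel between two mapped sources, chopped into ~1 m pieces; the coordinates and histogram bins are invented):

```python
import numpy as np

def segment_altitudes(p0, p1):
    # Discretise a straight channel between two mapped VHF sources into
    # ~1 m segments and return the altitude of each segment midpoint.
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    length = np.linalg.norm(p1 - p0)          # channel length in metres
    n = max(int(round(length)), 1)            # one entry per metre of channel
    frac = (np.arange(n) + 0.5) / n           # midpoints of each 1 m piece
    return p0[2] + frac * (p1[2] - p0[2])

# A 500 m horizontal channel at 5 km altitude contributes 500 segments there.
alts = segment_altitudes([0, 0, 5000], [300, 400, 5000])
# Accumulating altitudes over many channels yields the distribution
# needed for altitude-resolved NOx production estimates.
hist, edges = np.histogram(alts, bins=np.arange(4000, 7001, 500))
```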

  20. SAR Image Segmentation with Unknown Number of Classes Combined Voronoi Tessellation and Rjmcmc Algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Q. H.; Li, Y.; Wang, Y.

    2016-06-01

    This paper presents a novel segmentation method for automatically determining the number of classes in Synthetic Aperture Radar (SAR) images by combining Voronoi tessellation and a Reversible Jump Markov Chain Monte Carlo (RJMCMC) strategy. Instead of being given a priori, the number of classes is considered a random variable subject to a Poisson distribution. Based on Voronoi tessellation, the image is divided into homogeneous polygons. Under the Bayesian paradigm, a posterior distribution that characterizes the segmentation and model parameters conditional on a given SAR image can be obtained up to a normalizing constant. Then, a Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm involving six move types is designed to simulate the posterior distribution; the move types are: splitting or merging real classes, updating the parameter vector, updating the label field, moving positions of generating points, birth or death of generating points, and birth or death of an empty class. Experimental results with real and simulated SAR images demonstrate that the proposed method can determine the number of classes automatically and segment homogeneous regions well.
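    The Voronoi-tessellation step (assigning every pixel to the polygon of its nearest generating point) can be sketched directly; the grid size and generating points below are invented:

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_labels(shape, points):
    # Assign every pixel the index of its nearest generating point,
    # i.e. the Voronoi polygon it falls in. RJMCMC moves that add,
    # delete, or shift generating points just re-run this labelling.
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    pix = np.column_stack([yy.ravel(), xx.ravel()])
    _, idx = cKDTree(points).query(pix)       # nearest-point lookup
    return idx.reshape(shape)

labels = voronoi_labels((8, 8), np.array([[1, 1], [6, 6]]))
```

    In the paper each polygon is assumed internally homogeneous, so the class likelihood is evaluated per polygon rather than per pixel.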

  1. Bilayered anatomically constrained split-and-merge expectation maximisation algorithm (BiASM) for brain segmentation

    NASA Astrophysics Data System (ADS)

    Sudre, Carole H.; Cardoso, M. Jorge; Ourselin, Sébastien

    2014-03-01

    Dealing with pathological tissues is a very challenging task in medical brain segmentation. The presence of pathology can indeed bias the ultimate results when the model chosen is not appropriate, and lead to missegmentations and errors in the model parameters. Model fit and segmentation accuracy are impaired by the lack of flexibility of the model used to represent the data. In this work, based on a finite Gaussian mixture model, we dynamically introduce extra degrees of freedom so that each anatomical tissue considered is modelled as a mixture of Gaussian components. The choice of the appropriate number of components per tissue class relies on a model selection criterion. Its purpose is to balance the complexity of the model with the quality of the model fit in order to avoid overfitting while allowing flexibility. The parameter optimisation, constrained with the additional knowledge brought by probabilistic anatomical atlases, follows the expectation maximisation (EM) framework. Split-and-merge operations bring the new flexibility to the model along with a data-driven adaptation. The proposed methodology appears to improve the segmentation when pathological tissues are present, as well as the model fit, when compared to an atlas-based expectation maximisation algorithm with a unique component per tissue class. These improvements in the modelling might bring new insight into the characterisation of pathological tissues as well as into the modelling of partial volume effect.
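    The key idea (several Gaussian components per tissue class, with the count chosen by a model-selection criterion) can be sketched with scikit-learn; the intensities are synthetic, and BIC stands in for the paper's criterion:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Intensities of one "tissue class" that is actually bimodal,
# e.g. healthy tissue plus a pathological sub-population.
tissue = np.concatenate([rng.normal(0, 1, 600),
                         rng.normal(6, 1, 400)])[:, None]

def best_mixture(x, max_components=4):
    # Fit mixtures with 1..max_components Gaussians and keep the one
    # minimising BIC, balancing model fit against complexity.
    fits = [GaussianMixture(k, random_state=0).fit(x)
            for k in range(1, max_components + 1)]
    bics = [f.bic(x) for f in fits]
    return fits[int(np.argmin(bics))]

model = best_mixture(tissue)
```

    A single-Gaussian model would place its mean between the two modes and fit the pathological sub-population poorly; the criterion-selected mixture captures both.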

  2. Segmentation and detection of breast cancer in mammograms combining wavelet analysis and genetic algorithm.

    PubMed

    Pereira, Danilo Cesar; Ramos, Rodrigo Pereira; do Nascimento, Marcelo Zanchetta

    2014-04-01

    In Brazil, the National Cancer Institute (INCA) reports more than 50,000 new cases of the disease, with a risk of 51 cases per 100,000 women. Radiographic images obtained from mammography equipment are among the techniques most frequently used to help in early diagnosis. Due to factors related to cost and professional experience, in the last two decades computer systems to support detection (Computer-Aided Detection - CADe) and diagnosis (Computer-Aided Diagnosis - CADx) have been developed to assist experts in detecting abnormalities in their initial stages. Despite the large body of research on CADe and CADx systems, there is still a need for improved computerized methods. Nowadays, there is growing concern with the sensitivity and reliability of abnormality diagnosis in both views of breast mammographic images, namely cranio-caudal (CC) and medio-lateral oblique (MLO). This paper presents a set of computational tools to aid segmentation and detection of masses in mammograms in CC and MLO views. An artifact removal algorithm is first applied, followed by image denoising and gray-level enhancement based on the wavelet transform and the Wiener filter. Finally, a method for detection and segmentation of masses using multiple thresholding, the wavelet transform and a genetic algorithm is employed on mammograms randomly selected from the Digital Database for Screening Mammography (DDSM). The developed computer method was quantitatively evaluated using the area overlap metric (AOM). The mean ± standard deviation value of AOM for the proposed method was 79.2 ± 8%. The experiments demonstrate that the proposed method has strong potential to be used as the basis for mammogram mass segmentation in CC and MLO views. Another important aspect is that the method overcomes the limitation of analyzing only CC and MLO views. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
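    One way a genetic algorithm can serve thresholding-based segmentation is by evolving a threshold that maximises area overlap with a reference mask. This toy sketch (invented intensities, elitism plus Gaussian mutation, no crossover) only illustrates that coupling, not the paper's multi-stage pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy image: 900 background pixels (~50) and 100 "mass" pixels (~150).
image = np.concatenate([rng.normal(50, 5, 900), rng.normal(150, 5, 100)])
truth = np.concatenate([np.zeros(900, bool), np.ones(100, bool)])

def fitness(t):
    # Area-overlap (Jaccard) between the thresholded mask and the reference.
    mask = image > t
    inter = np.logical_and(mask, truth).sum()
    union = np.logical_or(mask, truth).sum()
    return inter / union if union else 0.0

# Minimal GA over the threshold: keep the fittest half, mutate to refill.
pop = rng.uniform(0, 255, 20)
for _ in range(40):
    order = np.argsort([-fitness(t) for t in pop])
    parents = pop[order][:10]                   # elitism
    children = parents + rng.normal(0, 5, 10)   # Gaussian mutation
    pop = np.concatenate([parents, children])
best = max(pop, key=fitness)
```

    The evolved threshold settles between the two intensity modes, where the overlap metric is maximal.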

  3. CT liver volumetry using geodesic active contour segmentation with a level-set algorithm

    NASA Astrophysics Data System (ADS)

    Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard

    2010-03-01

    Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of a similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step scheme. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetrics based on the automated scheme agreed well with the gold-standard manual volumetrics (intra-class correlation coefficient was 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.

  4. The algorithm study for using the back propagation neural network in CT image segmentation

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Liu, Jie; Chen, Chen; Li, Ying Qi

    2017-01-01

    A back propagation neural network (BP neural network) is a type of multi-layer feed-forward network in which signals propagate forward while errors propagate backward. Because a BP network can learn and store the mapping between a large number of inputs and outputs without requiring explicit mathematical equations to describe the mapping, it is very widely used. BP iteratively computes the weight coefficients and thresholds of the network from the training samples via back propagation, minimizing the network's sum of squared errors. Since the boundary of computed tomography (CT) heart images is usually discontinuous, and the volume and boundary of the heart vary greatly between images, conventional segmentation methods such as region growing and the watershed algorithm cannot achieve satisfactory results. Moreover, there are large differences between diastolic and systolic images, which conventional methods cannot accurately distinguish. In this paper, we introduce BP networks for the segmentation of heart images. We manually segmented a large number of CT images to obtain training samples and trained the BP network on them. To obtain a BP network suited to heart image segmentation, we normalized the heart images and extracted the gray-level information of the heart. The boundary of the images was then input into the network to compare the theoretical output with the actual output, and the errors were fed back into the BP network to modify the weight coefficients of the layers. After extensive training, the BP network became stable and the weight coefficients of the layers could be determined, capturing the relationship between the CT images and the heart boundary.
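    The forward-signal / backward-error loop described above can be sketched in a few lines of NumPy; the toy 2-D classification data and network size are invented, and cross-entropy is used so the output-layer error term is simply (output - target):

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy training set: classify points by whether x + y > 1.
X = rng.uniform(0, 1, (200, 2))
y = (X.sum(axis=1) > 1).astype(float)[:, None]

# One hidden layer; weights updated by error back-propagation.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)           # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - y                      # error propagates backward
    dW2 = h.T @ err / len(X)
    dh = err @ W2.T * h * (1 - h)      # chain rule through the hidden layer
    dW1 = X.T @ dh / len(X)
    W2 -= lr * dW2; b2 -= lr * err.mean(0)
    W1 -= lr * dW1; b1 -= lr * dh.mean(0)

acc = ((out > 0.5) == (y > 0.5)).mean()
```

    In the paper the inputs are boundary/gray-level features of CT heart images rather than 2-D points, but the weight-update mechanics are the same.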

  5. Segmenting clouds from space : a hybrid multispectral classification algorithm for satellite imagery.

    SciTech Connect

    Post, Brian Nelson; Wilson, Mark P.; Smith, Jody Lynn; Wehlburg, Joseph Cornelius; Nandy, Prabal

    2005-07-01

    This paper reports on a novel approach to atmospheric cloud segmentation from a space-based multi-spectral pushbroom satellite system. The satellite collects 15 spectral bands ranging from visible, 0.45 um, to long-wave infra-red (IR), 10.7 um. The images are radiometrically calibrated and have ground sample distances (GSD) of 5 meters for the visible to very-near-IR bands and a GSD of 20 meters for the near-IR to long-wave-IR bands. The algorithm is a hybrid classification system in the sense that supervised and unsupervised networks are used in conjunction. For performance evaluation, a series of numerical comparisons to human-derived cloud borders was performed. A set of 33 scenes was selected to represent various climate zones with different land cover from around the world. The algorithm consisted of the following. Band separation was performed to find the band combinations that yield significant separation between the cloud and background classes. The candidate bands were fed into a K-Means clustering algorithm to identify areas in the image with similar centroids. Each cluster was then compared to the cloud and background prototypes using the Jeffries-Matusita distance. A minimum distance was found, and each unknown cluster was assigned to its appropriate prototype. A classification rate of 88% was found when using one short-wave IR band and one mid-wave IR band. Past investigators have reported segmentation accuracies ranging from 67% to 80%, many of which require human intervention. A sensitivity of 75% and specificity of 90% were reported as well.
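    The cluster-then-match step can be sketched in one band; the pixel intensities, prototype statistics, and the 1-D K-means below are invented stand-ins for the satellite data:

```python
import numpy as np

def jm_distance(m1, v1, m2, v2):
    # Jeffries-Matusita distance between two 1-D Gaussian class models,
    # JM = 2(1 - e^{-B}) with B the Bhattacharyya distance; it ranges
    # from 0 (identical) to 2 (fully separable).
    v = 0.5 * (v1 + v2)
    b = (m1 - m2) ** 2 / (8 * v) + 0.5 * np.log(v / np.sqrt(v1 * v2))
    return 2 * (1 - np.exp(-b))

rng = np.random.default_rng(4)
# Toy single-band scene: bright "cloud" pixels vs. darker background.
pixels = np.concatenate([rng.normal(0.8, 0.05, 500),
                         rng.normal(0.2, 0.05, 500)])

# 1-D K-means with two centroids.
c = np.array([0.0, 1.0])
for _ in range(20):
    assign = np.abs(pixels[:, None] - c).argmin(axis=1)
    c = np.array([pixels[assign == k].mean() for k in (0, 1)])

# Compare each cluster to an (assumed) cloud prototype and pick the
# cluster at minimum JM distance.
cloud_proto = (0.8, 0.05 ** 2)                 # prototype mean, variance
d = [jm_distance(pixels[assign == k].mean(),
                 pixels[assign == k].var(), *cloud_proto) for k in (0, 1)]
cloud_cluster = int(np.argmin(d))
```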

  6. Fast and robust segmentation of solar EUV images: algorithm and results for solar cycle 23

    NASA Astrophysics Data System (ADS)

    Barra, V.; Delouille, V.; Kretzschmar, M.; Hochedez, J.-F.

    2009-10-01

    Context: The study of the variability of the solar corona and the monitoring of coronal holes, quiet sun and active regions are of great importance in astrophysics as well as for space weather and space climate applications. Aims: In a previous work, we presented the spatial possibilistic clustering algorithm (SPoCA). This is a multi-channel unsupervised spatially-constrained fuzzy clustering method that automatically segments solar extreme ultraviolet (EUV) images into regions of interest. The results we reported on SoHO-EIT images taken from February 1997 to May 2005 were consistent with previous knowledge in terms of both areas and intensity estimations. However, they presented some artifacts due to the method itself. Methods: Herein, we propose a new algorithm, based on SPoCA, that removes these artifacts. We focus on two points: the definition of an optimal clustering with respect to the regions of interest, and the accurate definition of the cluster edges. We moreover propose methodological extensions to this method, and we illustrate these extensions with the automatic tracking of active regions. Results: The much improved algorithm can decompose the whole set of EIT solar images over the 23rd solar cycle into regions that can clearly be identified as quiet sun, coronal hole and active region. The variations of the parameters resulting from the segmentation, i.e. the area, mean intensity, and relative contribution to the solar irradiance, are consistent with previous results and thus validate the decomposition. Furthermore, we find indications for a small variation of the mean intensity of each region in correlation with the solar cycle. Conclusions: The method is generic enough to allow the introduction of other channels or data. New applications are now expected, e.g. related to SDO-AIA data.

  7. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on the Hidden Markov Model (HMM) and the Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this last issue is complex and influences the whole analysis, a Multi-Response Permutation Procedure (MRPP; Mielke et al., 1981) was added: it tests whether the likelihood of the model with K+1 states (where K is the number of states of the best model) is close to that of the K-state model. Finally, an evaluation of the GAMM performances, applied as a break detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.
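    The Viterbi best-state-sequence step for a left-to-right Gaussian HMM can be sketched directly; the observations, transition matrix, and state means below are invented, and the Baum-Welch/GA parameter estimation is not reproduced:

```python
import numpy as np

def viterbi(obs, pi, A, means, var):
    # Best-state sequence for a Gaussian-emission HMM, in the log domain.
    def logpdf(x, m):
        return -0.5 * (np.log(2 * np.pi * var) + (x - m) ** 2 / var)
    n, k = len(obs), len(pi)
    delta = np.zeros((n, k))                   # best log-score ending in each state
    back = np.zeros((n, k), int)               # best predecessor state
    delta[0] = np.log(pi) + logpdf(obs[0], means)
    for t in range(1, n):
        scores = delta[t - 1][:, None] + np.log(A)   # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logpdf(obs[t], means)
    path = [int(delta[-1].argmax())]
    for t in range(n - 1, 0, -1):              # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

obs = np.array([0.0, 0.1, -0.1, 5.0, 5.2, 4.9])
pi = np.array([0.999, 0.001])
A = np.array([[0.9, 0.1],                      # left-to-right: (almost) no
              [1e-6, 1 - 1e-6]])               # return to an earlier state
states = viterbi(obs, pi, A, means=np.array([0.0, 5.0]), var=1.0)
```

    The decoded state sequence marks the change point between the two homogeneous segments of the series.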

  8. Development and validation of a segmentation-free polyenergetic algorithm for dynamic perfusion computed tomography.

    PubMed

    Lin, Yuan; Samei, Ehsan

    2016-07-01

    Dynamic perfusion imaging can provide the morphologic details of the scanned organs as well as the dynamic information of blood perfusion. However, due to the polyenergetic property of the x-ray spectra, the beam hardening effect results in undesirable artifacts and inaccurate CT values. To address this problem, this study proposes a segmentation-free polyenergetic dynamic perfusion imaging algorithm (pDP) to provide superior perfusion imaging. Dynamic perfusion is usually composed of two phases, i.e., a precontrast phase and a postcontrast phase. In the precontrast phase, the attenuation properties of diverse base materials (e.g., in a thorax perfusion exam, base materials can include lung, fat, breast, soft tissue, bone, and metal implants) can be incorporated to reconstruct artifact-free precontrast images. If patient motions are negligible or can be corrected by registration, the precontrast images can then be employed as a priori information to derive linearized iodine projections from the postcontrast images. With the linearized iodine projections, iodine perfusion maps can be reconstructed directly without the influence of various influential factors, such as iodine location, patient size, x-ray spectrum, and background tissue type. A series of simulations were conducted on a dynamic iodine calibration phantom and a dynamic anthropomorphic thorax phantom to validate the proposed algorithm. The simulations with the dynamic iodine calibration phantom showed that the proposed algorithm could effectively eliminate the beam hardening effect and enable quantitative iodine map reconstruction across various influential factors. The error range of the iodine concentration factors ([Formula: see text]) was reduced from [Formula: see text] for filtered back-projection (FBP) to [Formula: see text] for pDP. The quantitative results of the simulations with the dynamic anthropomorphic thorax phantom indicated that the maximum error of iodine concentrations can be reduced from

  9. A Hybrid Method for Image Segmentation Based on Artificial Fish Swarm Algorithm and Fuzzy c-Means Clustering.

    PubMed

    Ma, Li; Li, Yang; Fan, Suohai; Fan, Runzhu

    2015-01-01

    Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) clustering is one of the popular clustering algorithms for medical image segmentation. However, FCM has the problems of depending on the initial clustering centers, falling easily into a local optimum, and sensitivity to noise. To solve these problems, this paper proposes a hybrid artificial fish swarm algorithm (HAFSA). The proposed algorithm combines the artificial fish swarm algorithm (AFSA) with FCM, exploiting AFSA's global optimization search and parallel computing ability to find a superior result. Meanwhile, the Metropolis criterion and a noise reduction mechanism are introduced into AFSA to enhance the convergence rate and antinoise ability. An artificial grid graph and Magnetic Resonance Imaging (MRI) are used in the experiments, and the experimental results show that the proposed algorithm has stronger antinoise ability and higher precision. A number of evaluation indicators also demonstrate that HAFSA outperforms both FCM and suppressed FCM (SFCM).
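    The FCM inner step that HAFSA wraps can be sketched in plain NumPy (the AFSA search, Metropolis criterion, and noise mechanism are not reproduced; the 1-D data are invented):

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    # Plain fuzzy c-means on 1-D data: alternate the membership update
    # u_ik ∝ d_ik^{-2/(m-1)} and the weighted centre update.
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        centers = (u ** m).T @ x / (u ** m).sum(axis=0)
        d = np.abs(x[:, None] - centers) + 1e-12
        u = 1.0 / d ** (2 / (m - 1))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

data = np.concatenate([np.random.default_rng(5).normal(0, 0.3, 100),
                       np.random.default_rng(6).normal(4, 0.3, 100)])
centers, u = fcm(data)
```

    The random-membership initialisation is exactly the sensitivity HAFSA targets: AFSA's global search supplies better starting points than a single random draw.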

  10. A two-dimensional Segmented Boundary Algorithm for complex moving solid boundaries in Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Khorasanizade, Sh.; Sousa, J. M. M.

    2016-03-01

    A Segmented Boundary Algorithm (SBA) is proposed to deal with complex boundaries and moving bodies in Smoothed Particle Hydrodynamics (SPH). In this algorithm, boundaries are formed from chains of lines obtained by decomposing two-dimensional objects, based on simple line geometry. Various two-dimensional, viscous fluid flow cases have been studied here using a truly incompressible SPH method with the aim of assessing the capabilities of the SBA. Firstly, the flow over a stationary circular cylinder in a plane channel was analyzed in steady and unsteady regimes, for a single value of blockage ratio. Subsequently, the flow produced by a moving circular cylinder with a prescribed acceleration inside a plane channel was investigated as well. Next, the simulation of the flow generated by the impulsive start of a flat plate, again inside a plane channel, was carried out. This was followed by the study of confined sedimentation of an elliptic body subjected to gravity, for various density ratios. The set of test cases was completed with the simulation of periodic flow around a sunflower-shaped object. Extensive comparisons of the results obtained here with published data have demonstrated the accuracy and effectiveness of the proposed algorithm, namely in cases involving complex geometries and moving bodies.

  11. An efficient algorithm for multiphase image segmentation with intensity bias correction.

    PubMed

    Zhang, Haili; Ye, Xiaojing; Chen, Yunmei

    2013-10-01

    This paper presents a variational model for simultaneous multiphase segmentation and intensity bias estimation for images corrupted by strong noise and intensity inhomogeneity. Since pixel intensities are not reliable samples for region statistics in the presence of noise and intensity bias, we use local information based on the joint density within image patches to perform the image partition; the pixel intensity thus has a multiplicative distribution structure. The model is then derived from the maximum a posteriori (MAP) principle applied to these pixel density functions. To tackle the computational difficulty of the resulting nonsmooth, nonconvex minimization, we relax the constraint on the characteristic functions of the partition regions and apply primal-dual alternating gradient projections to construct a very efficient numerical algorithm. We show that all variables have closed-form solutions in each iteration and that the computational complexity is very low. In particular, the algorithm involves only regular convolutions and pointwise projections onto the unit ball and the canonical simplex. Numerical tests on a variety of images demonstrate that the proposed algorithm is robust and stable and attains significant improvements in accuracy and efficiency over state-of-the-art methods.
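
The pointwise projection onto the canonical simplex mentioned above (which keeps the relaxed characteristic functions nonnegative and summing to one) has a well-known closed form via sorting. A generic sketch of that standard projection, not the authors' code:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the canonical simplex
    {x : x >= 0, sum(x) = 1}, via the classic sorting scheme."""
    u = np.sort(v)[::-1]            # coordinates in decreasing order
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    # Largest k (1-indexed) with u_k + (1 - cumsum_k)/k > 0.
    k = ks[u + (1.0 - css) / ks > 0][-1]
    tau = (1.0 - css[k - 1]) / k
    return np.maximum(v + tau, 0.0)
```

In the algorithm this projection is applied independently at every pixel, which is why each iteration stays cheap.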

  12. SU-C-207B-05: Tissue Segmentation of Computed Tomography Images Using a Random Forest Algorithm: A Feasibility Study

    SciTech Connect

    Polan, D; Brady, S; Kaufman, R

    2016-06-15

    Purpose: Develop an automated Random Forest algorithm for tissue segmentation of CT examinations. Methods: Seven materials were classified for segmentation: background, lung/internal gas, fat, muscle, solid organ parenchyma, blood/contrast, and bone, using Matlab and the Trainable Weka Segmentation (TWS) plugin of FIJI. The following TWS classifier feature filters were investigated: minimum, maximum, mean, and variance, each evaluated over a pixel radius of 2n (n = 0–4). Noise-reduction and edge-preserving filters (Gaussian, bilateral, Kuwahara, and anisotropic diffusion) were also evaluated. The algorithm used 200 trees with 2 features per node. A training data set was established using an anonymized patient’s (male, 20 yr, 72 kg) chest-abdomen-pelvis CT examination. To establish segmentation ground truth, the training data were manually segmented using Eclipse planning software, and an intra-observer reproducibility test was conducted. Six additional patient data sets were segmented based on classifier data generated from the training data. Accuracy of segmentation was determined by calculating the Dice similarity coefficient (DSC) between manual and auto-segmented images. Results: The optimized autosegmentation algorithm resulted in 16 features calculated using maximum, mean, variance, and Gaussian blur filters with kernel radii of 1, 2, and 4 pixels, in addition to the original CT number and a Kuwahara filter (linear kernel of 19 pixels). Ground truth had a DSC of 0.94 (range: 0.90–0.99) for adult and 0.92 (range: 0.85–0.99) for pediatric data sets across all seven segmentation classes. The automated algorithm produced segmentation with an average DSC of 0.85 ± 0.04 (range: 0.81–1.00) for the adult patients, and 0.86 ± 0.03 (range: 0.80–0.99) for the pediatric patients. Conclusion: The TWS Random Forest auto-segmentation algorithm was optimized for the CT environment and was able to segment seven material classes over a range of body habitus and CT
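
The Dice similarity coefficient used to score each of the seven classes can be sketched per label as below; this is the standard definition 2|A∩B| / (|A| + |B|), with the empty-mask convention chosen by me:

```python
import numpy as np

def dice(seg_a, seg_b, label):
    """Dice similarity coefficient for one label between two
    label maps: 2|A ∩ B| / (|A| + |B|)."""
    a = (seg_a == label)
    b = (seg_b == label)
    denom = a.sum() + b.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

Averaging this score over the seven material labels and all slices gives the per-patient DSC values reported above.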

  13. Fully-automated approach to hippocampus segmentation using a graph-cuts algorithm combined with atlas-based segmentation and morphological opening.

    PubMed

    Kwak, Kichang; Yoon, Uicheul; Lee, Dong-Kyun; Kim, Geon Ha; Seo, Sang Won; Na, Duk L; Shim, Hack-Joon; Lee, Jong-Min

    2013-09-01

    The hippocampus is known to be an important structure and a biomarker for Alzheimer's disease (AD) and other neurological and psychiatric diseases, but its use requires accurate, robust and reproducible delineation of hippocampal structures. In this study, an automated hippocampal segmentation method based on a graph-cuts algorithm combined with atlas-based segmentation and morphological opening was proposed. First, atlas-based segmentation was applied to define an initial hippocampal region as a priori information for graph-cuts. The definition of the initial seeds was further elaborated by incorporating an estimation of partial volume probabilities at each voxel. Finally, morphological opening was applied to reduce false positives in the graph-cuts result. In experiments with twenty-seven healthy normal subjects, the proposed method showed more reliable results (similarity index=0.81±0.03) than the conventional atlas-based segmentation method (0.72±0.04). In terms of segmentation accuracy, measured by the ratios of false positives and false negatives, the proposed method (precision=0.76±0.04, recall=0.86±0.05) also outperformed the conventional method (0.73±0.05, 0.72±0.06), demonstrating its suitability for accurate, robust and reliable segmentation of the hippocampus.
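
The morphological opening used in the final step is erosion followed by dilation, which removes thin false-positive spurs while preserving the bulk of a region. A toy 2-D sketch with a 4-neighborhood cross structuring element of my choosing (a real pipeline would use a 3-D kernel):

```python
import numpy as np

def erode(m):
    """4-neighborhood erosion: a pixel survives only if it and
    all four axis neighbors are set."""
    p = np.pad(m, 1)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])

def dilate(m):
    """4-neighborhood dilation: a pixel is set if it or any
    axis neighbor is set."""
    p = np.pad(m, 1)
    return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
            | p[1:-1, :-2] | p[1:-1, 2:])

def opening(m):
    """Morphological opening: erosion then dilation."""
    return dilate(erode(m.astype(bool)))
```

Applied to the binary graph-cuts output, isolated voxels and one-voxel-wide protrusions disappear while the hippocampal body remains.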

  14. Modal characterization of the ASCIE segmented optics testbed: New algorithms and experimental results

    NASA Technical Reports Server (NTRS)

    Carrier, Alain C.; Aubrun, Jean-Noel

    1993-01-01

    New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses in turn are analyzed as the spectral data become available and, as a new feature, the results are used to maintain high quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.

  15. A parallel point cloud clustering algorithm for subset segmentation and outlier detection

    NASA Astrophysics Data System (ADS)

    Teutsch, Christian; Trostmann, Erik; Berndt, Dirk

    2011-07-01

    We present a fast point cloud clustering technique which is suitable for outlier detection, object segmentation and region labeling for large multi-dimensional data sets. The basis is a minimal data structure similar to a kd-tree which enables us to detect connected subsets very fast. The proposed algorithms utilizing this tree structure are parallelizable which further increases the computation speed for very large data sets. The procedures given are a vital part of the data preprocessing. They improve the input data properties for a more reliable computation of surface measures, polygonal meshes and other visualization techniques. In order to show the effectiveness of our techniques we evaluate sets of point clouds from different 3D scanning devices.
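
The connected-subset detection can be sketched with a uniform voxel grid standing in for the paper's kd-tree-like structure: points are hashed into cells, and a flood fill merges points in adjacent cells. Function names and the cell-size parameter are mine:

```python
import numpy as np
from collections import defaultdict, deque

def cluster_points(points, cell):
    """Label connected subsets of a 3-D point cloud: points whose
    grid cells are 26-adjacent share a label. Tiny clusters can
    then be flagged as outliers."""
    keys = [tuple(k) for k in np.floor(np.asarray(points) / cell).astype(int)]
    grid = defaultdict(list)
    for i, k in enumerate(keys):
        grid[k].append(i)
    labels = [-1] * len(points)
    nxt = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        q = deque([seed])
        labels[seed] = nxt
        while q:                       # flood fill over adjacent cells
            i = q.popleft()
            kx, ky, kz = keys[i]
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        for j in grid.get((kx + dx, ky + dy, kz + dz), ()):
                            if labels[j] == -1:
                                labels[j] = nxt
                                q.append(j)
        nxt += 1
    return labels
```

As in the paper, the per-cell work is independent, so the outer loop parallelizes naturally over seeds or grid regions.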

  16. A Fast Semiautomatic Algorithm for Centerline-Based Vocal Tract Segmentation

    PubMed Central

    Poznyakovskiy, Anton A.; Mainka, Alexander; Platzek, Ivan; Mürbe, Dirk

    2015-01-01

    Vocal tract morphology is an important factor in voice production. Its analysis has potential implications for educational matters as well as medical issues like voice therapy. The knowledge of the complex adjustments in the spatial geometry of the vocal tract during phonation is still limited. For a major part, this is due to difficulties in acquiring geometry data of the vocal tract in the process of voice production. In this study, a centerline-based segmentation method using active contours was introduced to extract the geometry data of the vocal tract obtained with MRI during sustained vowel phonation. The applied semiautomatic algorithm was found to be time- and interaction-efficient and allowed performing various three-dimensional measurements on the resulting model. The method is suitable for an improved detailed analysis of the vocal tract morphology during speech or singing which might give some insights into the underlying mechanical processes. PMID:26557710

  17. Automatic segmentation of ground-glass opacities in lung CT images by using Markov random field-based algorithms.

    PubMed

    Zhu, Yanjie; Tan, Yongqing; Hua, Yanqing; Zhang, Guozhen; Zhang, Jianguo

    2012-06-01

    Chest radiologists rely on the segmentation and quantitative analysis of ground-glass opacities (GGO) to perform imaging diagnoses that evaluate the disease severity or recovery stages of diffuse parenchymal lung diseases. However, it is computationally difficult to segment and analyze patterns of GGO compared with other lung diseases, since GGO usually do not have clear boundaries. In this paper, we present a new approach that automatically segments GGO in lung computed tomography (CT) images using algorithms derived from Markov random field theory, and we systematically evaluate the performance of these algorithms in segmenting GGO under different situations. CT image studies from 41 patients with diffuse lung diseases were enrolled in this research. The local distributions were modeled with both simple and adaptive (AMAP) models of maximum a posteriori (MAP). For best segmentation, we used the simulated annealing algorithm with a Gibbs sampler to solve the combinatorial optimization problem of the MAP estimators, and we applied a knowledge-guided strategy to reduce false positive regions. We achieved AMAP-based GGO segmentation results of 86.94%, 94.33%, and 94.06% in average sensitivity, specificity, and accuracy, respectively, and we evaluated the performance using radiologists' subjective evaluation and quantitative analysis and diagnosis. We also compared the results of AMAP-based GGO segmentation with those of support vector machine-based methods, and we discuss the reliability and other issues of AMAP-based GGO segmentation. Our results demonstrate the acceptability and usefulness of AMAP-based GGO segmentation for assisting radiologists in detecting GGO in high-resolution CT diagnostic procedures.
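
The MAP-by-simulated-annealing idea can be sketched on a binary Potts MRF: each pixel's energy combines a data term (distance to a class mean) and a smoothness term counting disagreeing neighbors, and a Gibbs sampler at a decreasing temperature draws labels. The data term, annealing schedule, and β below are illustrative choices of mine, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_segment(img, mu, beta=1.5, sweeps=30):
    """Binary MRF segmentation by annealed Gibbs sampling:
    data term (img - mu[l])^2 plus a Potts smoothness prior."""
    h, w = img.shape
    lab = (np.abs(img - mu[1]) < np.abs(img - mu[0])).astype(int)
    for s in range(sweeps):
        T = max(0.1, 2.0 * 0.9 ** s)          # annealing schedule
        for y in range(h):
            for x in range(w):
                e = []
                for l in (0, 1):
                    data = (img[y, x] - mu[l]) ** 2
                    nb = sum(lab[yy, xx] != l  # disagreeing 4-neighbors
                             for yy, xx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                             if 0 <= yy < h and 0 <= xx < w)
                    e.append(data + beta * nb)
                # Gibbs draw: P(label 1) = exp(-e1/T) / (exp(-e0/T)+exp(-e1/T))
                p1 = 1.0 / (1.0 + np.exp((e[1] - e[0]) / T))
                lab[y, x] = int(rng.random() < p1)
    return lab
```

As the temperature falls the sampler freezes into a low-energy (approximately MAP) labeling, which is the role simulated annealing plays in the paper.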

  18. Functional segmentation of dynamic PET studies: Open source implementation and validation of a leader-follower-based algorithm.

    PubMed

    Mateos-Pérez, José María; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Vaquero, Juan José

    2016-02-01

    We present a novel segmentation algorithm for dynamic PET studies that groups pixels according to the similarity of their time-activity curves. Sixteen mice bearing a human tumor cell line xenograft (CH-157MN) were imaged with three different (68)Ga-DOTA-peptides (DOTANOC, DOTATATE, DOTATOC) using a small animal PET-CT scanner. Regional activities (input function and tumor) were obtained after manual delineation of regions of interest over the image. The algorithm was implemented under the jClustering framework and used to extract the same regional activities as in the manual approach. The volume of distribution in the tumor was computed using the Logan linear method. A Kruskal-Wallis test was used to investigate significant differences between the manually and automatically obtained volumes of distribution. The algorithm successfully segmented all the studies. No significant differences were found for the same tracer across different segmentation methods. Manual delineation revealed significant differences between DOTANOC and the other two tracers (DOTANOC - DOTATATE, p=0.020; DOTANOC - DOTATOC, p=0.033). Similar differences were found using the leader-follower algorithm. An open implementation of a novel segmentation method for dynamic PET studies is presented and validated in rodent studies. It successfully replicated the manual results obtained in small-animal studies, thus making it a reliable substitute for this task and, potentially, for other dynamic segmentation procedures. Copyright © 2016 Elsevier Ltd. All rights reserved.
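
The leader-follower principle — assign each time-activity curve to the nearest existing cluster "leader" if it is close enough, otherwise found a new leader — can be sketched as below. The Euclidean distance and running-mean leader update are my simplifications of the jClustering implementation:

```python
import numpy as np

def leader_follower(curves, threshold):
    """Leader-follower clustering of time-activity curves: each
    curve joins the nearest leader within `threshold` (Euclidean
    distance) and updates it by a running mean; otherwise it
    becomes a new leader."""
    leaders, counts, labels = [], [], []
    for c in curves:
        c = np.asarray(c, float)
        best, best_d = -1, np.inf
        for i, L in enumerate(leaders):
            d = np.linalg.norm(c - L)
            if d < best_d:
                best, best_d = i, d
        if best >= 0 and best_d <= threshold:
            counts[best] += 1
            leaders[best] += (c - leaders[best]) / counts[best]
            labels.append(best)
        else:
            leaders.append(c.copy())
            counts.append(1)
            labels.append(len(leaders) - 1)
    return labels, leaders
```

Pixels sharing a label then form a region (e.g., tumor vs. blood pool), from which the regional activity is read out as the leader curve.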

  19. Genetic algorithms as a useful tool for trabecular and cortical bone segmentation.

    PubMed

    Janc, K; Tarasiuk, J; Bonnet, A S; Lipinski, P

    2013-07-01

    The aim of this study was to find a semi-automatic method of bone segmentation on the basis of computed tomography (CT) scan series in order to recreate the corresponding 3D objects. It was therefore crucial for the segmentation to be smooth between adjacent scans. The concept of graphics pipeline computing was used, i.e. simple graphics filters such as threshold or gradient were processed so that the output of one filter became the input of the next, forming a so-called pipeline. The input of the entire stream was the CT scan and the output was the binary mask showing where a given tissue is located in the input image. In this approach the main task consists of finding the suitable sequence, types and parameters of the graphics filters building the pipeline. Because of the high number of desired parameters (in our case 96), it was decided to use a slightly modified genetic algorithm. To determine the fitness value, the mask obtained from the parameters found by the genetic algorithm (GA) was compared with a manually prepared one, the comparison being quantified by Dice's coefficient. Preparation of reference masks for a few scans among the several hundred was the only action done manually by a human expert. Using this method, very good results were obtained for both trabecular and cortical bone. It has to be emphasized that, as no real border exists between these two bone types, the manually prepared reference masks were quite conventional and therefore subject to error. As GA is a non-deterministic method, the present work also contains a statistical analysis of the relations existing between various GA parameters and the fitness function. Finally, the best sets of GA parameters are proposed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
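
The GA-with-Dice-fitness loop can be sketched on a deliberately tiny problem: evolving a single threshold (instead of the paper's 96 pipeline parameters) so that the thresholded image matches a manually prepared reference mask. Selection, crossover, and mutation choices here are mine:

```python
import numpy as np

rng = np.random.default_rng(1)

def dice(a, b):
    """Dice's coefficient between two binary masks."""
    s = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / s if s else 1.0

def ga_threshold(img, ref, pop=20, gens=40):
    """Toy genetic algorithm: evolve a threshold t so that
    (img > t) matches the reference mask, with Dice's
    coefficient as the fitness."""
    genes = rng.uniform(img.min(), img.max(), pop)
    for _ in range(gens):
        fit = np.array([dice(img > t, ref) for t in genes])
        elite = genes[np.argsort(fit)[::-1][:pop // 2]]  # keep the fittest half
        # Crossover: average random parent pairs; then mutate.
        pa, pb = rng.choice(elite, pop), rng.choice(elite, pop)
        genes = (pa + pb) / 2 + rng.normal(0, 0.05, pop)
    fit = np.array([dice(img > t, ref) for t in genes])
    return genes[int(np.argmax(fit))]
```

The paper's chromosome is the full 96-parameter filter-pipeline configuration rather than one scalar, but the evaluate/select/crossover/mutate cycle is the same.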

  20. Enhancing a diffusion algorithm for 4D image segmentation using local information

    NASA Astrophysics Data System (ADS)

    Lösel, Philipp; Heuveline, Vincent

    2016-03-01

    Inspired by the diffusion of a particle, we present a novel approach for performing semiautomatic segmentation of tomographic images in 3D, 4D or higher dimensions to meet the requirements of high-throughput measurements in a synchrotron X-ray microtomograph. Given a small number of 2D slices with at least two manually labeled segments, one can either analytically determine the probability that an intelligently weighted random walk starting at one labeled pixel will be at a certain time at a specific position in the dataset, or determine the probability approximately by performing several random walks. While the weights of a random walk take into account local information at the starting point, the random walk itself can be in any dimension. Starting a great number of random walks in each labeled pixel, a voxel in the dataset will be hit by several random walks over time. Hence, the image can be segmented by assigning each voxel to the label where the random walks most likely started from. Due to the high scalability of random walks, this approach is suitable for high-throughput measurements. Additionally, we describe an interactively adjusted, slice-by-slice active contours method considering local information, where we start with one manually labeled slice and move forward in any direction. This approach is more accurate than the diffusion algorithm but requires more tedious manual processing steps. The methods were applied to 3D and 4D datasets and evaluated by means of manually labeled images obtained in a realistic scenario with biologists.
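
The voting idea — launch many random walks from each labeled pixel and give every voxel the label whose walkers visit it most often — can be sketched in one dimension with unweighted walks (the paper's local intensity weighting and higher dimensions are omitted):

```python
import random

random.seed(0)

def diffusion_labels(width, seeds, walks=200, steps=60):
    """Diffusion-style labeling on a 1-D grid: start `walks`
    random walks at each (position, label) seed and label every
    cell by the seed label whose walkers hit it most often."""
    hits = [{} for _ in range(width)]
    for pos0, lab in seeds:
        for _ in range(walks):
            p = pos0
            for _ in range(steps):
                # Reflecting boundaries keep the walker on the grid.
                p = min(width - 1, max(0, p + random.choice((-1, 1))))
                hits[p][lab] = hits[p].get(lab, 0) + 1
    return [max(h, key=h.get) if h else None for h in hits]
```

Because each walk is independent, the hit counting parallelizes trivially, which is the scalability argument made above.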

  1. A novel iterative algorithm for text segmentation of web born-digital images

    NASA Astrophysics Data System (ADS)

    Xu, Zhigang; Zhu, Yuesheng; Sun, Ziqiang; Liu, Zhen

    2015-07-01

    Since web born-digital images have low resolution and dense text atoms, text region over-merging and missed detection are still two open issues to be addressed. In this paper a novel iterative algorithm is proposed to locate and segment text regions. In each iteration, candidate text regions are generated by detecting Maximally Stable Extremal Regions (MSER) with diminishing thresholds and categorized into different groups based on a new similarity graph, and the text region groups are identified by applying several features and rules. With our proposed overlap checking method, the final well-segmented text regions are selected from these groups across all iterations. Experiments have been carried out on the web born-digital image datasets used for the robust reading competitions in ICDAR 2011 and 2013, and the results demonstrate that our proposed scheme can significantly reduce both the number of over-merged regions and the loss rate of target atoms; the overall performance outperforms the best methods reported in the two competitions in terms of recall rate and f-score, at the cost of slightly higher computational complexity.

  2. An Iris Segmentation Algorithm based on Edge Orientation for Off-angle Iris Recognition

    SciTech Connect

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J; Boehnen, Chris Bensing

    2013-01-01

    Iris recognition is known as one of the most accurate and reliable biometrics. However, the accuracy of iris recognition systems depends on the quality of data capture and is negatively affected by several factors such as angle, occlusion, and dilation. In this paper, we present a segmentation algorithm for off-angle iris images that uses edge detection, edge elimination, edge classification, and ellipse fitting techniques. In our approach, we first detect all candidate edges in the iris image by using the Canny edge detector; this collection contains edges from the iris and pupil boundaries as well as eyelashes, eyelids, iris texture, etc. Edge orientation is used to eliminate the edges that cannot be part of the iris or pupil. Then, we classify the remaining edge points into two sets, pupil edges and iris edges. Finally, we randomly generate subsets of iris and pupil edge points, fit ellipses to each subset, select ellipses with similar parameters, and average them to form the resultant ellipses. Based on the results from real experiments, the proposed method shows effectiveness in segmentation for off-angle iris images.

  3. A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.

    PubMed

    Liu, Zhi; Xu, Shuqiong; Zhang, Yun; Chen, Chun Lung Philip

    2014-11-01

    This technical correspondence presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve a more reliable and robust segmentation performance for humanoid robots. The pixel-wise intensity, gradient, and C1 SMF features are extracted via the local homogeneity model and Gabor filter and used as inputs to the MFMK-SVM model, providing multiple features of the samples for easier implementation and efficient computation of the model. A new clustering method, called the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) clustering algorithm, is proposed by integrating a type-2 fuzzy criterion into the clustering optimization process to improve the robustness and reliability of the clustering results through iterative optimization. Furthermore, the clustering validity is employed to select the training samples for learning the MFMK-SVM model. The MFMK-SVM scene segmentation method is able to take full advantage of the multiple features of the scene image and the ability of multiple kernels. Experiments on the BSDS dataset and real natural scene images demonstrate the superior performance of our proposed method.

  4. An iris segmentation algorithm based on edge orientation for off-angle iris recognition

    NASA Astrophysics Data System (ADS)

    Karakaya, Mahmut; Barstow, Del; Santos-Villalobos, Hector; Boehnen, Christopher

    2013-03-01

    Iris recognition is known as one of the most accurate and reliable biometrics. However, the accuracy of iris recognition systems depends on the quality of data capture and is negatively affected by several factors such as angle, occlusion, and dilation. In this paper, we present a segmentation algorithm for off-angle iris images that uses edge detection, edge elimination, edge classification, and ellipse fitting techniques. In our approach, we first detect all candidate edges in the iris image by using the Canny edge detector; this collection contains edges from the iris and pupil boundaries as well as eyelashes, eyelids, iris texture, etc. Edge orientation is used to eliminate the edges that cannot be part of the iris or pupil. Then, we classify the remaining edge points into two sets, pupil edges and iris edges. Finally, we randomly generate subsets of iris and pupil edge points, fit ellipses to each subset, select ellipses with similar parameters, and average them to form the resultant ellipses. Based on the results from real experiments, the proposed method shows effectiveness in segmentation for off-angle iris images.
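
The subset-fit-and-average step can be sketched with circles instead of ellipses, since an algebraic circle fit keeps the example short; the consensus rule on the radius below is my stand-in for "select ellipses with similar parameters":

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_circle(pts):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 = 2ax + 2by + c for center (a, b) and radius."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    return a, b, np.sqrt(c + a**2 + b**2)

def consensus_circle(pts, trials=50, k=10):
    """Fit circles to random point subsets, then average the fits
    whose radius agrees with the median radius."""
    fits = np.array([fit_circle(pts[rng.choice(len(pts), k, replace=False)])
                     for _ in range(trials)])
    med_r = np.median(fits[:, 2])
    keep = np.abs(fits[:, 2] - med_r) < 0.2 * med_r
    return fits[keep].mean(axis=0)
```

Subsets contaminated by stray edge points (eyelashes, texture) produce outlier fits that the median-based consensus discards, mirroring the role of parameter-similarity selection in the paper.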

  5. Anatomy of a hash-based long read sequence mapping algorithm for next generation DNA sequencing.

    PubMed

    Misra, Sanchit; Agrawal, Ankit; Liao, Wei-keng; Choudhary, Alok

    2011-01-15

    Recently, a number of programs have been proposed for mapping short reads to a reference genome. Many of them are heavily optimized for short-read mapping and hence are very efficient for shorter queries, but that makes them inefficient or not applicable for reads longer than 200 bp. However, many sequencers are already generating longer reads and more are expected to follow. For long read sequence mapping, there are limited options; BLAT, SSAHA2, FANGS and BWA-SW are among the popular ones. However, resequencing and personalized medicine need much faster software to map these long sequencing reads to a reference genome to identify SNPs or rare transcripts. We present AGILE (AliGnIng Long rEads), a hash table based high-throughput sequence mapping algorithm for longer 454 reads that uses diagonal multiple seed-match criteria, customized q-gram filtering and a dynamic incremental search approach among other heuristics to optimize every step of the mapping process. In our experiments, we observe that AGILE is more accurate than BLAT, and comparable to BWA-SW and SSAHA2. For practical error rates (< 5%) and read lengths (200-1000 bp), AGILE is significantly faster than BLAT, SSAHA2 and BWA-SW. Even for the other cases, AGILE is comparable to BWA-SW and several times faster than BLAT and SSAHA2. http://www.ece.northwestern.edu/~smi539/agile.html.
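
The hash-table seeding with a diagonal criterion can be sketched as follows: index every q-gram of the reference, then let each q-gram of a read vote for the diagonal (reference position minus read position) it implies, so that seeds from the true location reinforce each other. This is a toy illustration in the spirit of AGILE's diagonal multiple seed-match criterion, far simpler than AGILE itself, with names of my choosing:

```python
from collections import defaultdict

def build_index(ref, q):
    """Hash table mapping every q-gram of the reference to its positions."""
    idx = defaultdict(list)
    for i in range(len(ref) - q + 1):
        idx[ref[i:i + q]].append(i)
    return idx

def seed_hits(read, idx, q):
    """Vote on diagonals (ref_pos - read_pos): the diagonal with
    the most q-gram seed matches is the candidate mapping start."""
    diag = defaultdict(int)
    for j in range(len(read) - q + 1):
        for i in idx.get(read[j:j + q], ()):
            diag[i - j] += 1
    return max(diag, key=diag.get) if diag else None
```

A mismatch destroys only the q-grams overlapping it, so the remaining seeds still agree on one diagonal; this tolerance to errors is what makes seeding practical for long, noisy reads before a final banded alignment.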

  6. Classification algorithms based on anterior segment optical coherence tomography measurements for detection of angle closure.

    PubMed

    Nongpiur, Monisha E; Haaland, Benjamin A; Friedman, David S; Perera, Shamira A; He, Mingguang; Foo, Li-Lian; Baskaran, Mani; Sakata, Lisandro M; Wong, Tien Y; Aung, Tin

    2013-01-01

    A recent study found that a combination of 6 anterior segment optical coherence tomography (ASOCT) parameters (anterior chamber area, volume, and width [ACA, ACV, ACW], lens vault [LV], iris thickness at 750 μm from the scleral spur, and iris cross-sectional area) explains >80% of the variability in angle width. The aim of this study was to evaluate classification algorithms based on ASOCT measurements for the detection of gonioscopic angle closure. Cross-sectional study. We included 2047 subjects aged ≥50 years. Participants underwent gonioscopy and ASOCT (Carl Zeiss Meditec, Dublin, CA). Customized software (Zhongshan Angle Assessment Program, Guangzhou, China) was used to measure ASOCT parameters in horizontal ASOCT scans. Six classification algorithms were considered (stepwise logistic regression with the Akaike information criterion, Random Forest, multivariate adaptive regression splines, support vector machine, naïve Bayes classification, and recursive partitioning). The ASOCT-derived parameters were incorporated to generate point and interval estimates of the area under the receiver operating characteristic curve (AUC) for these algorithms using 10-fold cross-validation as well as 50:50 training and validation. We assessed ASOCT measurements and angle closure. Data on 1368 subjects, including 295 (21.6%) subjects with gonioscopic angle closure, were available for analysis. The mean (±standard deviation) age was 62.4±7.5 years and 54.8% were female. Angle closure subjects were older and had smaller ACW, ACA, and ACV; greater LV; and thicker irides (P<0.001 for all). For both the 10-fold cross-validation and the 50:50 training and validation methods, stepwise logistic regression was the best algorithm for detecting eyes with gonioscopic angle closure, with a testing set AUC of 0.954 (95% confidence interval [CI], 0.942-0.966) and 0.962 (95% CI, 0.948-0.975), respectively, whereas recursive partitioning had relatively the poorest performance with testing set
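
The AUC used to compare the classifiers can be sketched directly from its probabilistic definition (the Mann-Whitney form): the probability that a randomly chosen angle-closure eye receives a higher classifier score than a randomly chosen open-angle eye, with ties counting half. A generic implementation, unrelated to the study's software:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    P(random positive outscores random negative), ties count 0.5."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

Repeating this computation on each held-out fold and aggregating gives the cross-validated point and interval estimates reported above.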

  7. PHEW: a parallel segmentation algorithm for three-dimensional AMR datasets. Application to structure detection in self-gravitating flows

    NASA Astrophysics Data System (ADS)

    Bleuler, Andreas; Teyssier, Romain; Carassou, Sébastien; Martizzi, Davide

    2015-06-01

    We introduce PHEW (Parallel HiErarchical Watershed), a new segmentation algorithm to detect structures in astrophysical fluid simulations, and its implementation into the adaptive mesh refinement (AMR) code RAMSES. PHEW works on the density field defined on the adaptive mesh, and can thus be used on the gas density or the dark matter density after a projection of the particles onto the grid. The algorithm is based on a `watershed' segmentation of the computational volume into dense regions, followed by a merging of the segmented patches based on the saddle point topology of the density field. PHEW is capable of automatically detecting connected regions above the adopted density threshold, as well as the entire set of substructures within. Our algorithm is fully parallel and uses the MPI library. We describe the parallel algorithm in great detail and perform a scaling experiment which proves the capability of PHEW to run efficiently on massively parallel systems. Future work will add a particle unbinding procedure and the calculation of halo properties to our segmentation algorithm, thus expanding the scope of PHEW to genuine halo finding.

  8. An algorithm to increase speech intelligibility for hearing-impaired listeners in novel segments of the same noise type

    PubMed Central

    Healy, Eric W.; Yoho, Sarah E.; Chen, Jitong; Wang, Yuxuan; Wang, DeLiang

    2015-01-01

    Machine learning algorithms to segregate speech from background noise hold considerable promise for alleviating limitations associated with hearing impairment. One of the most important considerations for implementing these algorithms into devices such as hearing aids and cochlear implants involves their ability to generalize to conditions not employed during the training stage. A major challenge involves the generalization to novel noise segments. In the current study, sentences were segregated from multi-talker babble and from cafeteria noise using an algorithm that employs deep neural networks to estimate the ideal ratio mask. Importantly, the algorithm was trained on segments of noise and tested using entirely novel segments of the same nonstationary noise type. Substantial sentence-intelligibility benefit was observed for hearing-impaired listeners in both noise types, despite the use of unseen noise segments during the test stage. Interestingly, normal-hearing listeners displayed benefit in babble but not in cafeteria noise. This result highlights the importance of evaluating these algorithms not only in human subjects, but in members of the actual target population. PMID:26428803
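
The ideal ratio mask that the deep neural networks are trained to estimate has a closed form in each time-frequency unit. A minimal sketch, assuming power-spectrum inputs for the premixed speech and noise:

```python
import numpy as np

def ideal_ratio_mask(speech_power, noise_power):
    """Ideal ratio mask per time-frequency unit:
    IRM = sqrt(S / (S + N)) for speech power S and noise power N."""
    return np.sqrt(speech_power / (speech_power + noise_power))
```

At test time the network predicts this mask from the noisy mixture alone and scales the mixture's time-frequency units by it, which is why generalization to unseen noise segments is the critical question the study evaluates.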

  9. Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.

  10. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.

  11. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  12. Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  13. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak-to-background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.

  14. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    PubMed Central

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953
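The finding above, that features incorporating information from neighboring voxels drive performance more than the choice of classifier, can be illustrated with a toy feature extraction function. A sketch assuming a single 2-D slice and a simple mean-of-window feature (the paper's feature functions over T1-w, T2-w and FLAIR volumes are richer):

```python
def neighborhood_features(image, r=1):
    """For each voxel of a 2-D slice, build a feature vector of
    [own intensity, mean of the (2r+1) x (2r+1) neighborhood].
    Edge voxels use the in-bounds part of the window. Names and
    window size are illustrative, not the paper's definitions."""
    h, w = len(image), len(image[0])
    feats = []
    for i in range(h):
        for j in range(w):
            window = [image[y][x]
                      for y in range(max(0, i - r), min(h, i + r + 1))
                      for x in range(max(0, j - r), min(w, j + r + 1))]
            feats.append([image[i][j], sum(window) / len(window)])
    return feats
```

Feature vectors like these would then be fed, voxel by voxel, to any of the simple classifiers the paper recommends (logistic regression, LDA, QDA).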

  15. SU-E-J-142: Performance Study of Automatic Image-Segmentation Algorithms in Motion Tracking Via MR-IGRT

    SciTech Connect

    Feng, Y; Olsen, J.; Parikh, P.; Noel, C; Wooten, H; Du, D; Mutic, S; Hu, Y; Kawrakow, I; Dempsey, J

    2014-06-01

Purpose: Evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay), compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of the bladder, kidney, duodenum, and a liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames was selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, an expert manual contouring of the organs or tumor from a physician was used as ground truth. Metric values of sensitivity, specificity, Jaccard similarity, and Dice coefficient were computed for comparison. Results: In the segmentation of single image frames, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment a liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT. Future studies include a selection of conformal segmentation methods based on image/organ-specific information.
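The four assessment metrics used above all derive from the voxel-wise confusion matrix between an automatic segmentation and the manual ground truth. A minimal sketch over flattened binary masks:

```python
def overlap_metrics(pred, truth):
    """Sensitivity, specificity, Jaccard similarity and Dice
    coefficient between two flat binary masks (1 = organ/tumor,
    0 = background)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }
```

Note that Dice and Jaccard are monotonically related (Dice = 2J / (1 + J)), which is why the two tend to rank segmentation methods identically.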

  16. A reproducible automated segmentation algorithm for corneal epithelium cell images from in vivo laser scanning confocal microscopy.

    PubMed

    Bullet, Julien; Gaujoux, Thomas; Borderie, Vincent; Bloch, Isabelle; Laroche, Laurent

    2014-06-01

To evaluate an automated process to find borders of corneal basal epithelial cells in pictures obtained from in vivo laser scanning confocal microscopy (Heidelberg Retina Tomograph III with Rostock corneal module). On a sample of 20 normal corneal epithelial pictures, images were segmented through an automated four-step segmentation algorithm. Steps of the algorithm included noise reduction through a fast Fourier transform (FFT) band-pass filter, image binarization with a mean value threshold, a watershed segmentation algorithm on the distance map to separate fused cells, and a Voronoi diagram segmentation algorithm (which gives a final mask of cell borders). Cells were then automatically counted using this border mask. On the original image, either with contrast enhancement or noise reduction, cells were manually counted by a trained operator. The average cell density was 7722.5 cells/mm(2) as assessed by automated analysis and 7732.5 cells/mm(2) as assessed by manual analysis (p = 0.93). Correlation between automated and manual analysis was strong (r = 0.974 [0.934-0.990], p < 0.001). The Bland-Altman method gave a mean difference in density of 10 cells/mm(2) and limits of agreement ranging from -971 to +991 cells/mm(2). Visually, the algorithm correctly found almost all borders. This automated segmentation algorithm is suitable for assessing corneal epithelial basal cell density and morphometry. This procedure is fully reproducible, with no operator-induced variability. © 2014 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
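The Bland-Altman figures quoted above follow from the standard formulas: the mean of the paired differences, and 95% limits of agreement at mean ± 1.96 standard deviations. A sketch with illustrative data, not the study's measurements:

```python
import math

def bland_altman(auto, manual):
    """Mean difference and 95% limits of agreement between paired
    automated and manual measurements (e.g. cell densities)."""
    diffs = [a - m for a, m in zip(auto, manual)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd
```

A small mean difference with wide limits of agreement, as reported above, indicates no systematic bias but noticeable scatter between the two counting methods on individual images.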

  17. Pharynx Anatomy

    MedlinePlus

Anatomy of the pharynx; drawing shows the ...

  18. Vulva Anatomy

    MedlinePlus

Anatomy of the vulva; drawing shows the ...

  19. Larynx Anatomy

    MedlinePlus

Anatomy of the larynx; drawing shows ...

  20. Automated condition-invariable neurite segmentation and synapse classification using textural analysis-based machine-learning algorithms

    PubMed Central

    Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly

    2013-01-01

    High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized for varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare performance of our algorithm to manual segmentation and show that it combines 90% accuracy, with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance levels under a wide range of image acquisition conditions indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. Textural analysis-based machine-learning approach thus offers a high performance condition-invariable tool for automated neurite segmentation. PMID:23261652

  1. Automated condition-invariable neurite segmentation and synapse classification using textural analysis-based machine-learning algorithms.

    PubMed

    Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly A

    2013-02-15

    High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized for varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare performance of our algorithm to manual segmentation and show that it combines 90% accuracy, with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance levels under a wide range of image acquisition conditions indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. Textural analysis-based machine-learning approach thus offers a high performance condition-invariable tool for automated neurite segmentation. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Spatial Fuzzy C Means and Expectation Maximization Algorithms with Bias Correction for Segmentation of MR Brain Images.

    PubMed

    Meena Prakash, R; Shantha Selva Kumari, R

    2017-01-01

The Fuzzy C Means (FCM) and Expectation Maximization (EM) algorithms are the most prevalent methods for automatic segmentation of MR brain images into three classes: Gray Matter (GM), White Matter (WM) and Cerebrospinal Fluid (CSF). The major difficulties associated with these conventional methods for MR brain image segmentation are Intensity Non-uniformity (INU) and noise. In this paper, EM and FCM with spatial information and bias correction are proposed to overcome these effects. The spatial information is incorporated by convolving the posterior probability during the E-Step of the EM algorithm with a mean filter. Also, a method of pixel re-labeling is included to improve the segmentation accuracy. The proposed method is validated by extensive experiments on both simulated and real brain images from a standard database. Quantitative and qualitative results show that the method is superior to the conventional methods by around 25% and to the state-of-the-art method by 8%.
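The spatial-information step described above, convolving the class posterior probabilities with a mean filter during the E-step and renormalising, can be sketched as follows. Class maps and window radius here are illustrative; the full method also includes bias correction and pixel re-labeling:

```python
def smooth_posteriors(posterior, r=1):
    """Mean-filter each class's posterior-probability map, then
    renormalise across classes at every pixel, imposing local
    spatial continuity on the E-step of EM (a sketch, not the
    paper's implementation)."""
    h, w = len(posterior[0]), len(posterior[0][0])

    def mean_filter(p):
        out = [[0.0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                vals = [p[y][x]
                        for y in range(max(0, i - r), min(h, i + r + 1))
                        for x in range(max(0, j - r), min(w, j + r + 1))]
                out[i][j] = sum(vals) / len(vals)
        return out

    smoothed = [mean_filter(p) for p in posterior]
    for i in range(h):
        for j in range(w):
            z = sum(p[i][j] for p in smoothed)
            if z > 0:
                for p in smoothed:
                    p[i][j] /= z
    return smoothed
```

An isolated noisy pixel whose posterior disagrees with all its neighbours is pulled toward the locally dominant class, which is how this step suppresses speckle noise in the final segmentation.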

  3. Evaluation of an automatic segmentation algorithm for definition of head and neck organs at risk.

    PubMed

    Thomson, David; Boylan, Chris; Liptrot, Tom; Aitkenhead, Adam; Lee, Lip; Yap, Beng; Sykes, Andrew; Rowbottom, Carl; Slevin, Nicholas

    2014-08-03

The accurate definition of organs at risk (OARs) is required to fully exploit the benefits of intensity-modulated radiotherapy (IMRT) for head and neck cancer. However, manual delineation is time-consuming and there is considerable inter-observer variability. This is pertinent as function-sparing and adaptive IMRT have increased the number and frequency of delineation of OARs. We evaluated the accuracy and potential time-saving of Smart Probabilistic Image Contouring Engine (SPICE) automatic segmentation to define OARs for salivary-, swallowing- and cochlea-sparing IMRT. Five clinicians recorded the time to delineate five organs at risk (parotid glands, submandibular glands, larynx, pharyngeal constrictor muscles and cochleae) for each of 10 CT scans. SPICE was then used to define these structures. The acceptability of SPICE contours was initially determined by visual inspection and the total time to modify them recorded per scan. The Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm created a reference standard from all clinician contours. Clinician, SPICE and modified contours were compared against STAPLE by the Dice similarity coefficient (DSC) and mean/maximum distance to agreement (DTA). For all investigated structures, SPICE contours were less accurate than manual contours. However, for parotid/submandibular glands they were acceptable (median DSC: 0.79/0.80; mean, maximum DTA: 1.5 mm, 14.8 mm/0.6 mm, 5.7 mm). Modified SPICE contours were also less accurate than manual contours. The utilisation of SPICE did not result in time-saving or improved efficiency. Improvements in accuracy of automatic segmentation for head and neck OARs would be worthwhile and are required before its routine clinical implementation.
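The distance-to-agreement (DTA) figures quoted above measure, for each point on one contour, the distance to the nearest point of the reference contour, reported as a mean and a maximum. A discrete sketch over 2-D point lists (a simplification of contour-based DTA, which is usually computed on densely sampled contours):

```python
import math

def distance_to_agreement(contour, reference):
    """Mean and maximum distance from each point of a contour to the
    nearest point of a reference contour (units follow the input
    coordinates, e.g. mm)."""
    dists = [min(math.dist(p, q) for q in reference) for p in contour]
    return sum(dists) / len(dists), max(dists)
```

Note the measure is asymmetric: swapping contour and reference can change the result, which is why clinical reports state which structure set serves as the reference (here, the STAPLE consensus).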

  4. A modified possibilistic fuzzy c-means clustering algorithm for bias field estimation and segmentation of brain MR image.

    PubMed

    Ji, Ze-Xuan; Sun, Quan-Sen; Xia, De-Shen

    2011-07-01

A modified possibilistic fuzzy c-means clustering algorithm is presented for fuzzy segmentation of magnetic resonance (MR) images that have been corrupted by intensity inhomogeneities and noise. By introducing a novel adaptive method to compute the weights of the local spatial information in the objective function, the new adaptive fuzzy clustering algorithm is capable of utilizing local contextual information to impose local spatial continuity, thus allowing the suppression of noise and helping to resolve classification ambiguity. To estimate the intensity inhomogeneity, the global intensity is introduced into the coherent local intensity clustering algorithm, taking both local and global intensity information into account. The segmentation target therefore is driven by two forces to smooth the derived optimal bias field and improve the accuracy of the segmentation task. The proposed method has been successfully applied to 3 T, 7 T, synthetic and real MR images with desirable results. Comparisons with other approaches demonstrate the superior performance of the proposed algorithm. Moreover, the proposed algorithm is robust to initialization, thereby allowing fully automatic applications.

  5. Phasing the segments of the Keck and Thirty Meter Telescopes via the narrowband phasing algorithm: chromatic effects

    NASA Astrophysics Data System (ADS)

    Chanan, Gary; Troy, Mitchell; Raouf, Nasrat

    2016-07-01

The narrowband phasing algorithm that was originally developed at Keck has largely been replaced by a broadband algorithm that, although it is slower and less accurate than the former, has proved to be much more robust. A systematic investigation into the lack of robustness of the narrowband algorithm has shown that it results from systematic errors (of order 20 nm) that are wavelength-dependent. These errors are not well-understood at present, but they do not appear to arise from instrumental effects in the Keck phasing cameras, or from the segment coatings. This leaves high spatial frequency aberrations or scattering within 60 mm of the segment edges as the most likely origin of the effect.

  6. Metal Artifact Reduction and Segmentation of Dental Computerized Tomography Images Using Least Square Support Vector Machine and Mean Shift Algorithm.

    PubMed

    Mortaheb, Parinaz; Rezaeian, Mehdi

    2016-01-01

Segmentation and three-dimensional (3D) visualization of teeth in dental computerized tomography (CT) images are among dentists' requirements for both the diagnosis of abnormalities and treatments such as dental implant and orthodontic planning. On the other hand, dental CT image segmentation is a difficult process because of the specific characteristics of the tooth's structure. This paper presents a method for automatic segmentation of dental CT images. We present a multi-step method, which starts with a preprocessing phase to reduce the metal artifact using the least square support vector machine. An integral intensity profile is then applied to detect each tooth's region candidates. Finally, the mean shift algorithm is used to partition the region of each tooth, and all these segmented slices are then applied for 3D visualization of teeth. To examine the performance of our proposed approach, a set of reliable assessment metrics is utilized. We applied the segmentation method on 14 cone-beam CT datasets. Functionality analysis of the proposed method demonstrated precise segmentation results on different sample slices. Accuracy analysis of the proposed method indicates that we can increase the sensitivity, specificity, precision, and accuracy of the segmentation results by 83.24%, 98.35%, 72.77%, and 97.62% and decrease the error rate by 2.34%. The experimental results show that the proposed approach performs well on different types of CT images and has better performance than all existing approaches. Moreover, segmentation results can be made more accurate by using the proposed algorithm of metal artifact reduction in the preprocessing phase.

  7. Research on remote sensing image segmentation based on ant colony algorithm: take the land cover classification of middle Qinling Mountains for example

    NASA Astrophysics Data System (ADS)

    Mei, Xin; Wang, Qian; Wang, Quanfang; Lin, Wenfang

    2009-10-01

Remote sensing images have complex backgrounds and a wealth of spatial information; extracting the region of interest from such huge amounts of data is a serious problem. Image segmentation refers to partitioning an image into different regions according to specified characteristics, and it is the key to remote sensing image recognition and information extraction. A reasonably fast image segmentation algorithm is the basis of image processing; traditional segmentation methods have many limitations. The traditional threshold segmentation method is in essence an exhaustive search, and its low efficiency limits its application. The ant colony algorithm is a population-based, biomimetic heuristic evolutionary algorithm; since it was proposed, it has been successfully applied to the TSP, job-shop scheduling, network routing, vehicle routing, and cluster analysis. Ant colony optimization is a fast heuristic optimization algorithm that integrates easily with other methods, and it is robust. An improved ant colony algorithm can greatly enhance the speed of image segmentation while reducing the noise in the image. The research background of this paper is land cover classification experiments on SPOT images of the Qinling area. Image segmentation based on the ant colony algorithm is carried out and compared with traditional methods. Experimental results show that the improved ant colony algorithm can quickly and accurately segment the target; it is an effective method of image segmentation and lays a good foundation for the follow-up image classification work.

  8. Segmentation of Coronary Angiograms Using Gabor Filters and Boltzmann Univariate Marginal Distribution Algorithm.

    PubMed

    Cervantes-Sanchez, Fernando; Cruz-Aceves, Ivan; Hernandez-Aguirre, Arturo; Aviña-Cervantes, Juan Gabriel; Solorio-Meza, Sergio; Ornelas-Rodriguez, Manuel; Torres-Cisneros, Miguel

    2016-01-01

This paper presents a novel method for improving the training step of the single-scale Gabor filters by using the Boltzmann univariate marginal distribution algorithm (BUMDA) in X-ray angiograms. Since the single-scale Gabor filters (SSG) are governed by three parameters, the optimal selection of the SSG parameters is highly desirable in order to maximize the detection performance of coronary arteries while reducing the computational time. To obtain the best set of parameters for the SSG, the area (Az) under the receiver operating characteristic curve is used as fitness function. Moreover, to classify vessel and nonvessel pixels from the Gabor filter response, the interclass variance thresholding method has been adopted. The experimental results using the proposed method obtained the highest detection rate with Az = 0.9502 over a training set of 40 images and Az = 0.9583 with a test set of 40 images. In addition, the experimental results of vessel segmentation provided an accuracy of 0.944 with the test set of angiograms.
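The interclass variance thresholding adopted above is Otsu's method: choose the threshold that maximises the between-class variance of the two resulting pixel classes. A sketch over filter-response values quantised to integer levels (the quantisation range is illustrative):

```python
def otsu_threshold(values, levels=256):
    """Return the threshold t maximising the between-class
    (interclass) variance w0 * w1 * (mu0 - mu1)^2 over a flat list
    of integer values in [0, levels)."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Applied to a Gabor filter response, pixels above the returned threshold would be labelled vessel and the rest nonvessel.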

  9. Segmentation of Coronary Angiograms Using Gabor Filters and Boltzmann Univariate Marginal Distribution Algorithm

    PubMed Central

    Cervantes-Sanchez, Fernando; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Ornelas-Rodriguez, Manuel; Torres-Cisneros, Miguel

    2016-01-01

    This paper presents a novel method for improving the training step of the single-scale Gabor filters by using the Boltzmann univariate marginal distribution algorithm (BUMDA) in X-ray angiograms. Since the single-scale Gabor filters (SSG) are governed by three parameters, the optimal selection of the SSG parameters is highly desirable in order to maximize the detection performance of coronary arteries while reducing the computational time. To obtain the best set of parameters for the SSG, the area (Az) under the receiver operating characteristic curve is used as fitness function. Moreover, to classify vessel and nonvessel pixels from the Gabor filter response, the interclass variance thresholding method has been adopted. The experimental results using the proposed method obtained the highest detection rate with Az = 0.9502 over a training set of 40 images and Az = 0.9583 with a test set of 40 images. In addition, the experimental results of vessel segmentation provided an accuracy of 0.944 with the test set of angiograms. PMID:27738422

  10. A region segmentation based algorithm for building a crystal position lookup table in a scintillation detector

    NASA Astrophysics Data System (ADS)

    Wang, Hai-Peng; Yun, Ming-Kai; Liu, Shuang-Quan; Fan, Xin; Cao, Xue-Xiang; Chai, Pei; Shan, Bao-Ci

    2015-03-01

In a scintillation detector, scintillation crystals are typically made into a 2-dimensional modular array. The location of an incident gamma ray needs to be calibrated due to spatial response nonlinearity. Generally, position histograms (the characteristic flood response of scintillation detectors) are used for position calibration. In this paper, a position calibration method based on a crystal position lookup table, which maps the inaccurate location calculated by Anger logic to the exact hitting crystal position, has been proposed. Firstly, the position histogram is preprocessed, with steps such as noise reduction and image enhancement. Then the processed position histogram is segmented into disconnected regions, and crystal marking points are labeled by finding the centroids of the regions. Finally, crystal boundaries are determined and the crystal position lookup table is generated. The scheme is evaluated on the whole-body positron emission tomography (PET) scanner and the breast dedicated single photon emission computed tomography scanner developed by the Institute of High Energy Physics, Chinese Academy of Sciences. The results demonstrate that the algorithm is accurate, efficient, robust and applicable to any configuration of scintillation detector. Supported by National Natural Science Foundation of China (81101175) and XIE Jia-Lin Foundation of Institute of High Energy Physics (Y3546360U2)
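Once the crystal marking points (region centroids) are labeled, the lookup table maps every position of the flood histogram to a crystal index. A nearest-centroid sketch of that final step (the paper derives boundaries from the segmented regions; nearest-centroid assignment is one simple stand-in for that boundary step):

```python
def build_lookup_table(width, height, centroids):
    """Map every (x, y) position of a width x height flood histogram
    to the index of the nearest crystal-marking centroid, producing
    a crystal position lookup table."""
    table = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            table[y][x] = min(
                range(len(centroids)),
                key=lambda k: (x - centroids[k][0]) ** 2
                            + (y - centroids[k][1]) ** 2)
    return table
```

At acquisition time, an event's Anger-logic position is then corrected by a single table lookup rather than any per-event computation.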

  11. Local Area Signal-to-Noise Ratio (LASNR) algorithm for Image Segmentation

    SciTech Connect

    Kegelmeyer, L; Fong, P; Glenn, S; Liebman, J

    2007-07-03

Many automated image-based applications need to find small spots in a variably noisy image. For humans, it is relatively easy to distinguish objects from local surroundings no matter what else may be in the image. We attempt to capture this distinguishing capability computationally by calculating a measurement that estimates the strength of signal within an object versus the noise in its local neighborhood. First, we hypothesize various sizes for the object and corresponding background areas. Then, we compute the Local Area Signal to Noise Ratio (LASNR) at every pixel in the image, resulting in a new image with LASNR values for each pixel. All pixels exceeding a pre-selected LASNR value become seed pixels, or initiation points, and are grown to include the full area extent of the object. Since growing the seed is a separate operation from finding the seed, each object can be any size and shape. Thus, the overall process is a 2-stage segmentation method that first finds object seeds and then grows them to find the full extent of the object. This algorithm was designed, optimized and is in daily use for the accurate and rapid inspection of optics from a large laser system (National Ignition Facility (NIF), Lawrence Livermore National Laboratory, Livermore, CA), which includes images with background noise, ghost reflections, different illumination and other sources of variation.
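The per-pixel LASNR measurement can be sketched as the mean signal in a small object window relative to the statistics of a surrounding background annulus. The window radii below are hypothesised for illustration, not the NIF implementation's values:

```python
import math

def lasnr(image, obj_r=1, bg_r=2):
    """Local Area Signal-to-Noise Ratio at each pixel: excess of the
    object-window mean over the background mean, divided by the
    background standard deviation. Windows are clipped at image
    edges. A sketch under assumed window sizes."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            obj, bg = [], []
            for y in range(max(0, i - bg_r), min(h, i + bg_r + 1)):
                for x in range(max(0, j - bg_r), min(w, j + bg_r + 1)):
                    if abs(y - i) <= obj_r and abs(x - j) <= obj_r:
                        obj.append(image[y][x])
                    else:
                        bg.append(image[y][x])
            mu_bg = sum(bg) / len(bg)
            sd_bg = math.sqrt(sum((v - mu_bg) ** 2 for v in bg) / len(bg))
            mu_obj = sum(obj) / len(obj)
            out[i][j] = (mu_obj - mu_bg) / sd_bg if sd_bg > 0 else 0.0
    return out
```

Thresholding this map gives the seed pixels of the first stage; region growing from each seed then recovers the full object extent, as described above.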

  12. Crossword: A Fully Automated Algorithm for the Segmentation and Quality Control of Protein Microarray Images

    PubMed Central

    2015-01-01

    Biological assays formatted as microarrays have become a critical tool for the generation of the comprehensive data sets required for systems-level understanding of biological processes. Manual annotation of data extracted from images of microarrays, however, remains a significant bottleneck, particularly for protein microarrays due to the sensitivity of this technology to weak artifact signal. In order to automate the extraction and curation of data from protein microarrays, we describe an algorithm called Crossword that logically combines information from multiple approaches to fully automate microarray segmentation. Automated artifact removal is also accomplished by segregating structured pixels from the background noise using iterative clustering and pixel connectivity. Correlation of the location of structured pixels across image channels is used to identify and remove artifact pixels from the image prior to data extraction. This component improves the accuracy of data sets while reducing the requirement for time-consuming visual inspection of the data. Crossword enables a fully automated protocol that is robust to significant spatial and intensity aberrations. Overall, the average amount of user intervention is reduced by an order of magnitude and the data quality is increased through artifact removal and reduced user variability. The increase in throughput should aid the further implementation of microarray technologies in clinical studies. PMID:24417579

  13. Fast algorithm for region snake-based segmentation adapted to physical noise models and application to object tracking

    NASA Astrophysics Data System (ADS)

    Chesnaud, Christophe; Refregier, Philippe

    1999-06-01

    Algorithms for object segmentation are crucial in many image processing applications. In past years, active contour models have been widely used for finding the contours of objects. This segmentation strategy is classically edge-based in the sense that the snake is driven to fit the maximum of an edge map of the scene. We have recently proposed a region-based snake approach, which can be implemented using a fast algorithm, to segment an object in an image. The algorithms, optimal in the maximum likelihood sense, are based on computing the statistics of the inner and outer regions and can thus be adapted to the different kinds of random fields that may describe the input image. In this paper our aim is to study this approach for tracking applications in optronic images. We first show the relevance of using a priori information on the statistical laws of the input image in the case of Gaussian statistics, which are well adapted to describing optronic images when a whitening preprocessing is used. We then characterize the performance of the fast-algorithm implementation of this approach and apply it to tracking applications. The efficiency of the proposed method is shown on real image sequences.
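For the Gaussian case mentioned in this record, the maximum-likelihood criterion driving a region-based snake reduces (up to constants) to a function of the inner and outer region variances: the contour that best partitions the image minimizes it. A sketch under that assumption (the paper's exact criterion may include additional terms):

```python
import numpy as np

def gaussian_ml_criterion(img, mask):
    """Negative log-likelihood (up to constants) of a two-region partition
    under independent Gaussian models for the inner and outer regions.
    Minimizing this over candidate contours drives the region-based snake."""
    inside, outside = img[mask], img[~mask]
    crit = 0.0
    for region in (inside, outside):
        n = region.size
        if n > 1:
            # ML variance estimate of the region, weighted by its pixel count
            crit += n * np.log(region.var() + 1e-12)
    return crit
```

A correct partition of a two-mean Gaussian scene yields a lower criterion than any shifted partition, which is what lets a search over contours converge to the object.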

  14. Consensus embedding: theory, algorithms and application to segmentation and classification of biomedical data.

    PubMed

    Viswanath, Satish; Madabhushi, Anant

    2012-02-08

    and segmentation problems. Our generalizable framework allows for improved representation and classification in the context of both imaging and non-imaging data. The algorithm offers a promising solution to problems that currently plague DR methods, and may allow for extension to other areas of biomedical data analysis.

  15. Evaluation of state-of-the-art segmentation algorithms for left ventricle infarct from late Gadolinium enhancement MR images.

    PubMed

    Karim, Rashed; Bhagirath, Pranav; Claus, Piet; James Housden, R; Chen, Zhong; Karimaghaloo, Zahra; Sohn, Hyon-Mok; Lara Rodríguez, Laura; Vera, Sergio; Albà, Xènia; Hennemuth, Anja; Peitgen, Heinz-Otto; Arbel, Tal; Gonzàlez Ballester, Miguel A; Frangi, Alejandro F; Götte, Marco; Razavi, Reza; Schaeffter, Tobias; Rhode, Kawal

    2016-05-01

    Studies have demonstrated the feasibility of late Gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging for guiding the management of patients with sequelae to myocardial infarction, such as ventricular tachycardia and heart failure. Clinical implementation of these developments necessitates a reproducible and reliable segmentation of the infarcted regions. It is challenging to compare new algorithms for infarct segmentation in the left ventricle (LV) with existing algorithms. Benchmarking datasets with evaluation strategies are much needed to facilitate comparison. This manuscript presents a benchmarking evaluation framework for future algorithms that segment infarct from LGE CMR of the LV. The image database consists of 30 LGE CMR images of both humans and pigs that were acquired from two separate imaging centres. A consensus ground truth was obtained for all data using maximum likelihood estimation. Six widely-used fixed-thresholding methods and five recently developed algorithms are tested on the benchmarking framework. Results demonstrate that the algorithms have better overlap with the consensus ground truth than most of the n-SD fixed-thresholding methods, with the exception of the Full-Width-at-Half-Maximum (FWHM) fixed-thresholding method. Some of the pitfalls of fixed thresholding methods are demonstrated in this work. The benchmarking evaluation framework, which is a contribution of this work, can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV. The datasets, ground truth and evaluation code have been made publicly available through the website: https://www.cardiacatlas.org/web/guest/challenges. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
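The fixed-thresholding baselines evaluated in this benchmark are simple to state. The sketch below gives the standard n-SD and FWHM definitions as commonly used in LGE infarct analysis; the array and mask names are illustrative:

```python
import numpy as np

def nsd_threshold(myo, remote_mask, n=5.0):
    """n-SD fixed thresholding: myocardial voxels brighter than the remote
    (healthy) myocardium mean plus n standard deviations are labeled infarct."""
    remote = myo[remote_mask]
    return myo > remote.mean() + n * remote.std()

def fwhm_threshold(myo, core_mask):
    """Full-Width-at-Half-Maximum: threshold at half the maximum intensity
    of the hyperenhanced (scar core) region."""
    return myo > 0.5 * myo[core_mask].max()
```

The FWHM rule depends only on the brightest scar intensity, which is one reason it behaves differently from the n-SD family in the comparison above.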

  16. Improving performance of computer-aided detection of pulmonary embolisms by incorporating a new pulmonary vascular-tree segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xingwei; Song, XiaoFei; Chapman, Brian E.; Zheng, Bin

    2012-03-01

    We developed a new pulmonary vascular tree segmentation/extraction algorithm. The purpose of this study was to assess whether adding this new algorithm to our previously developed computer-aided detection (CAD) scheme for pulmonary embolism (PE) could improve the CAD performance, in particular by reducing the false-positive detection rate. A dataset containing 12 CT examinations with 384 verified pulmonary embolism regions associated with 24 three-dimensional (3-D) PE lesions was selected for this study. Our new CAD scheme includes the following image processing and feature classification steps. (1) A 3-D region growing process followed by a rolling-ball algorithm was utilized to segment the lung areas. (2) The complete pulmonary vascular trees were extracted by combining two approaches: intensity-based region growing to extract the larger vessels and vessel enhancement filtering to extract the smaller vessel structures. (3) A toboggan algorithm was implemented to identify suspicious PE candidates in the segmented lung or vessel areas. (4) A three-layer artificial neural network (ANN) with the topology 27-10-1 was developed to reduce false-positive detections. (5) A k-nearest neighbor (KNN) classifier optimized by a genetic algorithm was used to compute detection scores for the PE candidates. (6) A grouping scoring method was designed to detect the final PE lesions in three dimensions. The study showed that integrating the pulmonary vascular tree extraction algorithm into the CAD scheme reduced the false-positive rate by 16.2%. For the case-based 3-D PE lesion detection results, the integrated CAD scheme achieved 62.5% detection sensitivity with 17.1 false-positive lesions per examination.
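Step (2)'s intensity-based region growing for the larger vessels can be sketched as a plain 6-connected breadth-first growth from a seed voxel (an illustrative version; the paper's implementation details are not given in the abstract):

```python
import numpy as np
from collections import deque

def region_grow(vol, seed, lo, hi):
    """Intensity-based 3-D region growing: starting from a seed voxel,
    accept 6-connected neighbours whose intensity lies in [lo, hi]."""
    mask = np.zeros(vol.shape, bool)
    if not (lo <= vol[seed] <= hi):
        return mask
    mask[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                    and not mask[n] and lo <= vol[n] <= hi:
                mask[n] = True
                queue.append(n)
    return mask
```

In a CAD scheme of this kind, the grown vessel mask is typically combined with a separate small-vessel enhancement filter before candidate detection.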

  17. Electron Conformal Radiotherapy for Post-Mastectomy Irradiation: A Bolus-Free, Multi-Energy, Multi-Segmented Field Algorithm

    DTIC Science & Technology

    2005-08-01

    that compared to customized electron bolus radiotherapy for post-mastectomy irradiation, ECT with multi-energy, multi-segmented treatment fields has...PTV dose homogeneity was quite good. Use of the treatment plan modification techniques improved dose sparing for the non-target portion of the...phantom. For the patient treatment plans, the algorithm provided acceptable results for PTV conformality and dose homogeneity, in comparison to the bolus

  18. An Automatic Algorithm for Segmentation of the Boundaries of Corneal Layers in Optical Coherence Tomography Images using Gaussian Mixture Model

    PubMed Central

    Jahromi, Mahdi Kazemian; Kafieh, Raheleh; Rabbani, Hossein; Dehnavi, Alireza Mehri; Peyman, Alireza; Hajizadeh, Fedra; Ommani, Mohammadreza

    2014-01-01

    Diagnosis of corneal diseases is possible by measuring and evaluating corneal thickness in different layers. Thus, the need for precise segmentation of corneal layer boundaries is inevitable. Obviously, manual segmentation is time-consuming and imprecise. In this paper, the Gaussian mixture model (GMM) is used for automatic segmentation of three clinically important corneal boundaries on optical coherence tomography (OCT) images. For this purpose, we apply the GMM method in two consecutive steps. In the first step, the GMM is applied to the original image to localize the first and the last boundaries. In the next step, the gradient response of a contrast-enhanced version of the image is fed into another GMM algorithm to obtain a clearer result around the second boundary. Finally, the first boundary is traced downward to localize the exact location of the second boundary. We tested the performance of the algorithm on images taken from a Heidelberg OCT imaging system. To evaluate our approach, the automatic boundary results are compared with the boundaries segmented manually by two corneal specialists. The quantitative results show that the proposed method segments the desired boundaries with great accuracy. Unsigned mean errors between the results of the proposed method and the manual segmentation are 0.332, 0.421, and 0.795 for detection of the epithelium, Bowman, and endothelium boundaries, respectively. Inter-observer unsigned mean errors between the two corneal specialists are comparable, at 0.330, 0.398, and 0.534, respectively. PMID:25298926
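The core GMM step can be sketched as a two-component 1-D EM fit to pixel intensities (or to the gradient response, as in the record's second step), followed by assigning each sample to its most responsible component. This is a generic textbook EM, not the authors' implementation:

```python
import numpy as np

def fit_gmm2(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture (minimal sketch)."""
    mu = np.array([x.min(), x.max()], float)       # spread initial means apart
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        p = pi / np.sqrt(2 * np.pi * var) * \
            np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        pi = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

def classify(x, pi, mu, var):
    """Label each sample with its most responsible mixture component,
    e.g. to split intensities into boundary vs background classes."""
    p = pi / np.sqrt(2 * np.pi * var) * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
    return p.argmax(axis=1)
```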

  19. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media has been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global search algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually as the optimization progresses, causing the improvement factor to eventually reach a plateau. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demands on the dynamic range of the detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.
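A sketch of how the SLM phase segments might be split into interleaved groups and recombined into the final mask; the (i + j) mod G assignment is one plausible interleaving, assumed here purely for illustration (the paper does not specify the grouping rule in this abstract):

```python
import numpy as np

def interleaved_groups(n_side, n_groups):
    """Partition an n_side x n_side grid of SLM phase segments into
    interleaved groups: segment (i, j) goes to group (i + j) % n_groups,
    so every group is spread uniformly across the full aperture."""
    i, j = np.mgrid[0:n_side, 0:n_side]
    return (i + j) % n_groups

def assemble_mask(per_group_masks, groups):
    """Combine the correction phases found by each group's GA run
    (full-size arrays, one per group) into a single correction mask."""
    mask = np.zeros(groups.shape)
    for g, pm in enumerate(per_group_masks):
        mask[groups == g] = pm[groups == g]
    return mask
```

Each group's GA then optimizes only its own segments while the rest of the mask is held fixed, which is what keeps the per-iteration search space small.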

  20. Memory based active contour algorithm using pixel-level classified images for colon crypt segmentation.

    PubMed

    Cohen, Assaf; Rivlin, Ehud; Shimshoni, Ilan; Sabo, Edmond

    2015-07-01

    In this paper, we introduce a novel method for the detection and segmentation of crypts in colon biopsies. Most approaches proposed in the literature try to segment the crypts using only the biopsy image, without understanding the meaning of each pixel. The proposed method differs in that we segment the crypts using an automatically generated pixel-level classification image of the original biopsy image, and we handle the artifacts due to the sectioning process and the variance in color, shape and size of the crypts. The biopsy image pixels are classified into nuclei, immune system, lumen, cytoplasm, stroma and goblet cells. The crypts are then segmented using a novel active contour approach in which the external force is determined by the semantics of each pixel and the model of the crypt. The active contour is applied for every lumen candidate detected using the pixel-level classification. Finally, a false-positive crypt elimination process removes segmentation errors by measuring each segment's adherence to the crypt model using the pixel-level classification results. The method was tested on 54 biopsy images containing 4944 healthy and 2236 cancerous crypts, resulting in 87% detection of the crypts with 9% false-positive segments (segments that do not represent a crypt). The segmentation accuracy of the true-positive segments is 96%.

  1. Evaluation of an algorithm for semiautomated segmentation of thin tissue layers in high-frequency ultrasound images.

    PubMed

    Qiu, Qiang; Dunmore-Buyze, Joy; Boughner, Derek R; Lacefield, James C

    2006-02-01

    An algorithm consisting of speckle reduction by median filtering, contrast enhancement using top- and bottom-hat morphological filters, and segmentation with a discrete dynamic contour (DDC) model was implemented for nondestructive measurements of soft tissue layer thickness. Algorithm performance was evaluated by segmenting simulated images of three-layer phantoms and high-frequency (40 MHz) ultrasound images of porcine aortic valve cusps in vitro. The simulations demonstrated the necessity of the median and morphological filtering steps and enabled testing of the user-specified parameters of the morphological filters and the DDC model. In the experiments, six cusps were imaged in coronary perfusion solution (CPS) and then in distilled water to test the algorithm's sensitivity to changes in the dimensions of thin tissue layers. Significant increases in the thickness of the fibrosa, spongiosa, and ventricularis layers, by 53.5% (p < 0.001), 88.5% (p < 0.001), and 35.1% (p = 0.033), respectively, were observed when the specimens were submerged in water. The intraobserver coefficient of variation of repeated thickness estimates ranged from 0.044 for the fibrosa in water to 0.164 for the spongiosa in CPS. Segmentation accuracy and variability depended on the thickness and contrast of the layers, but the modest variability provides confidence in the thickness measurements.
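The filtering front end described in this record (median speckle reduction, then top- and bottom-hat contrast enhancement) maps directly onto standard grayscale morphology. A sketch using scipy.ndimage; the window sizes are illustrative, not the paper's values:

```python
import numpy as np
from scipy import ndimage

def enhance_contrast(img, size=9):
    """Morphological contrast enhancement ahead of a dynamic contour:
    add the top-hat (bright detail) and subtract the bottom-hat (dark
    detail), after median filtering for speckle reduction."""
    med = ndimage.median_filter(img, size=3)          # speckle reduction
    tophat = ndimage.white_tophat(med, size=size)     # bright structures
    bothat = ndimage.black_tophat(med, size=size)     # dark structures
    return med + tophat - bothat
```

Structures thinner than the structuring element are amplified relative to the background, which sharpens thin tissue-layer boundaries for the contour model.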

  2. Real-time implementations of image segmentation algorithms on shared memory multicore architecture: a survey (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Akil, Mohamed

    2017-05-01

    Real-time processing is getting more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis. As a consequence, many different approaches for image segmentation have been proposed. The watershed transform is a well-known image segmentation tool, and it is a very data-intensive task. To achieve acceleration and obtain real-time processing of watershed algorithms, parallel architectures and programming models for multicore computing have been developed. This paper focuses on a survey of approaches for the parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we compare various parallelizations of sequential watershed algorithms on shared memory multicore architectures. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on its performance. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models, comparing OpenMP (an application programming interface for multiprocessing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.

  3. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps, and it is consequently time-consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets, rather than developing them on a single dataset whose particularities could bias the development. Indoor point clouds of different types of buildings are used as input for the developed algorithms, ranging from an individual house of almost one hundred square metres to larger buildings of several thousand square metres. The datasets provide various space configurations and present numerous occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results are illustrated, and their analysis provides insight into the transferability of the developed approach for the indoor modelling of several types of buildings.

  4. A Fast Superpixel Segmentation Algorithm for PolSAR Images Based on Edge Refinement and Revised Wishart Distance

    PubMed Central

    Zhang, Yue; Zou, Huanxin; Luo, Tiancheng; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng

    2016-01-01

    The superpixel segmentation algorithm, as a preprocessing technique, should show good performance in fast segmentation speed, accurate boundary adherence and homogeneous regularity. A fast superpixel segmentation algorithm by iterative edge refinement (IER) works well on optical images. However, it may generate poor superpixels for Polarimetric synthetic aperture radar (PolSAR) images due to the influence of strong speckle noise and many small-sized or slim regions. To solve these problems, we utilized a fast revised Wishart distance instead of Euclidean distance in the local relabeling of unstable pixels, and initialized unstable pixels as all the pixels substituted for the initial grid edge pixels in the initialization step. Then, postprocessing with the dissimilarity measure is employed to remove the generated small isolated regions as well as to preserve strong point targets. Finally, the superiority of the proposed algorithm is validated with extensive experiments on four simulated and two real-world PolSAR images from Experimental Synthetic Aperture Radar (ESAR) and Airborne Synthetic Aperture Radar (AirSAR) data sets, which demonstrate that the proposed method shows better performance with respect to several commonly used evaluation measures, even with about nine times higher computational efficiency, as well as fine boundary adherence and strong point targets preservation, compared with three state-of-the-art methods. PMID:27754385

  5. Development, Implementation and Evaluation of Segmentation Algorithms for the Automatic Classification of Cervical Cells

    NASA Astrophysics Data System (ADS)

    Macaulay, Calum Eric

    Cancer of the uterine cervix is one of the most common cancers in women. An effective screening program for pre-cancerous and cancerous lesions can dramatically reduce the mortality rate for this disease. In British Columbia where such a screening program has been in place for some time, 2500 to 3000 slides of cervical smears need to be examined daily. More than 35 years ago, it was recognized that an automated pre-screening system could greatly assist people in this task. Such a system would need to find and recognize stained cells, segment the images of these cells into nucleus and cytoplasm, numerically describe the characteristics of the cells, and use these features to discriminate between normal and abnormal cells. The thrust of this work was (1) to research and develop new segmentation methods and compare their performance to those in the literature, (2) to determine dependence of the numerical cell descriptors on the segmentation method used, (3) to determine the dependence of cell classification accuracy on the segmentation used, and (4) to test the hypothesis that using numerical cell descriptors one can correctly classify the cells. The segmentation accuracies of 32 different segmentation procedures were examined. It was found that the best nuclear segmentation procedure was able to correctly segment 98% of the nuclei of a 1000 and a 3680 image database. Similarly the best cytoplasmic segmentation procedure was found to correctly segment 98.5% of the cytoplasm of the same 1000 image database. Sixty-seven different numerical cell descriptors (features) were calculated for every segmented cell. On a database of 800 classified cervical cells these features when used in a linear discriminant function analysis could correctly classify 98.7% of the normal cells and 97.0% of the abnormal cells. 
While some features were found to vary a great deal between segmentation procedures, the classification accuracy of groups of features was found to be independent of the

  6. Obtaining Thickness Maps of Corneal Layers Using the Optimal Algorithm for Intracorneal Layer Segmentation.

    PubMed

    Rabbani, Hossein; Kafieh, Rahele; Kazemian Jahromi, Mahdi; Jorjandi, Sahar; Mehri Dehnavi, Alireza; Hajizadeh, Fedra; Peyman, Alireza

    2016-01-01

    Optical Coherence Tomography (OCT) is one of the most informative methodologies in ophthalmology and provides cross sectional images from anterior and posterior segments of the eye. Corneal diseases can be diagnosed by these images and corneal thickness maps can also assist in the treatment and diagnosis. The need for automatic segmentation of cross sectional images is inevitable since manual segmentation is time consuming and imprecise. In this paper, segmentation methods such as Gaussian Mixture Model (GMM), Graph Cut, and Level Set are used for automatic segmentation of three clinically important corneal layer boundaries on OCT images. Using the segmentation of the boundaries in three-dimensional corneal data, we obtained thickness maps of the layers which are created by these borders. Mean and standard deviation of the thickness values for normal subjects in epithelial, stromal, and whole cornea are calculated in central, superior, inferior, nasal, and temporal zones (centered on the center of pupil). To evaluate our approach, the automatic boundary results are compared with the boundaries segmented manually by two corneal specialists. The quantitative results show that GMM method segments the desired boundaries with the best accuracy.

  7. Obtaining Thickness Maps of Corneal Layers Using the Optimal Algorithm for Intracorneal Layer Segmentation

    PubMed Central

    Rabbani, Hossein; Kazemian Jahromi, Mahdi; Jorjandi, Sahar; Mehri Dehnavi, Alireza; Hajizadeh, Fedra; Peyman, Alireza

    2016-01-01

    Optical Coherence Tomography (OCT) is one of the most informative methodologies in ophthalmology and provides cross sectional images from anterior and posterior segments of the eye. Corneal diseases can be diagnosed by these images and corneal thickness maps can also assist in the treatment and diagnosis. The need for automatic segmentation of cross sectional images is inevitable since manual segmentation is time consuming and imprecise. In this paper, segmentation methods such as Gaussian Mixture Model (GMM), Graph Cut, and Level Set are used for automatic segmentation of three clinically important corneal layer boundaries on OCT images. Using the segmentation of the boundaries in three-dimensional corneal data, we obtained thickness maps of the layers which are created by these borders. Mean and standard deviation of the thickness values for normal subjects in epithelial, stromal, and whole cornea are calculated in central, superior, inferior, nasal, and temporal zones (centered on the center of pupil). To evaluate our approach, the automatic boundary results are compared with the boundaries segmented manually by two corneal specialists. The quantitative results show that GMM method segments the desired boundaries with the best accuracy. PMID:27247559

  8. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems

    PubMed Central

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
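The first, coarse step of the alignment correction described above, estimating the integer-pixel offset from the peak of the spatial 2-D cross-correlation, can be sketched with FFTs (an illustrative version; the subsequent subpixel correction via tip-tilt terms in the OTF is not shown):

```python
import numpy as np

def coarse_shift(ref, moving):
    """Estimate the integer-pixel offset between two images (e.g. the
    in-focus and out-of-focus PD channels) from the peak of their 2-D
    cross-correlation, computed via FFTs."""
    # Cross-correlation theorem: correlate by multiplying spectra.
    xc = np.fft.ifft2(np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(xc.argmax(), xc.shape)
    # Map cyclic peak indices to signed offsets.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, xc.shape))
```

With the offset reduced to a pixel or two this way, the remaining fine alignment can be folded into tip-tilt phase terms rather than image resampling, as the record describes.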

  9. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems.

    PubMed

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality.

  10. Correlative anatomy for thoracic inlet; glottis and subglottis; trachea, carina, and main bronchi; lobes, fissures, and segments; hilum and pulmonary vascular system; bronchial arteries and lymphatics.

    PubMed

    Ugalde, Paula; Miro, Santiago; Fréchette, Eric; Deslauriers, Jean

    2007-11-01

    Because it is relatively inexpensive and universally available, standard radiographs of the thorax should still be viewed as the primary screening technique to look at the anatomy of intrathoracic structures and to investigate airway or pulmonary disorders. Modern trained thoracic surgeons must be able to correlate surgical anatomy with what is seen on more advanced imaging techniques, however, such as CT or MRI. More importantly, they must be able to recognize the indications, capabilities, limitations, and pitfalls of these imaging methods.

  11. Paraganglioma Anatomy

    MedlinePlus

    Medical illustration (drawing) of a paraganglioma of the head and neck.

  12. Eye Anatomy

    MedlinePlus

    Overview of the anatomy of the eye.

  13. Tooth anatomy

    MedlinePlus

    Overview of tooth anatomy (//medlineplus.gov/ency/article/002214.htm); the upper jawbone is called the maxilla.

  14. Segmentation of cervical cell nuclei in high-resolution microscopic images: A new algorithm and a web-based software framework.

    PubMed

    Bergmeir, Christoph; García Silvente, Miguel; Benítez, José Manuel

    2012-09-01

    In order to automate cervical cancer screening tests, one of the most important and longstanding challenges is the segmentation of cell nuclei in the stained specimens. Though the nuclei of isolated cells in high-quality acquisitions are often easy to segment, the problem lies in segmenting large numbers of nuclei with various characteristics under differing acquisition conditions in high-resolution scans of complete microscope slides. We implemented a system that enables processing of full-resolution images, and propose a new algorithm for segmenting the nuclei under adequate control of the expert user. The system can work automatically or interactively guided, to allow for segmentation within the whole range of slide and image characteristics. It facilitates data storage and the interaction of technical and medical experts, especially with its web-based architecture. The proposed algorithm localizes cell nuclei using a voting scheme and prior knowledge before determining the exact shape of the nuclei by means of an elastic segmentation algorithm. After noise removal with mean-shift and median filtering, edges are extracted with a Canny edge detection algorithm. Motivated by the observation that cell nuclei are surrounded by cytoplasm and that their shape is roughly elliptical, edges adjacent to the background are removed. A randomized Hough transform for ellipses finds candidate nuclei, which are then processed by a level set algorithm. The algorithm was tested and compared to other algorithms on a database containing 207 images acquired from two different microscope slides, with promising results.

  15. A fully-automatic locally adaptive thresholding algorithm for blood vessel segmentation in 3D digital subtraction angiography.

    PubMed

    Boegel, Marco; Hoelter, Philip; Redel, Thomas; Maier, Andreas; Hornegger, Joachim; Doerfler, Arnd

    2015-01-01

    Subarachnoid hemorrhage due to a ruptured cerebral aneurysm is still a devastating disease. Planning of endovascular aneurysm therapy is increasingly based on hemodynamic simulations, necessitating reliable vessel segmentation and accurate assessment of vessel diameters. In this work, we propose a fully-automatic, locally adaptive, gradient-based thresholding algorithm. Our approach consists of two steps. First, we estimate the parameters of a global thresholding algorithm using an iterative process. Then, a locally adaptive version of the approach is applied using the estimated parameters. We evaluated both methods on 8 clinical 3D DSA cases. Additionally, we propose a way to select a reference segmentation based on 2D DSA measurements. For large vessels such as the internal carotid artery, our results show very high sensitivity (97.4%), precision (98.7%), and Dice coefficient (98.0%) relative to our reference segmentation. Similar results (sensitivity: 95.7%, precision: 88.9%, Dice coefficient: 90.7%) are achieved for smaller vessels of approximately 1 mm diameter.
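    The two-step scheme (estimate a global threshold iteratively, then apply a locally adaptive pass) can be illustrated generically. The mean-of-class-means update rule, window size, and offset below are our assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy import ndimage

def isodata_threshold(img, tol=0.5):
    """Iterative global threshold (mean of the two class means), a
    generic stand-in for the paper's parameter-estimation step; assumes
    roughly bimodal intensities."""
    t = img.mean()
    while True:
        fg, bg = img[img > t], img[img <= t]
        if fg.size == 0 or bg.size == 0:
            return t
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

def local_threshold(img, window=15, offset=0.0):
    """Locally adaptive pass: compare each voxel against the mean of its
    neighbourhood (window size and offset are our assumptions)."""
    local_mean = ndimage.uniform_filter(img.astype(float), size=window)
    return img > local_mean + offset
```

    Note that a purely local mean comparison is ambiguous inside large uniform regions, which is one reason a globally estimated seed, as in the paper, is useful.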

  16. Sparse appearance model-based algorithm for automatic segmentation and identification of articulated hand bones

    NASA Astrophysics Data System (ADS)

    Reda, Fitsum A.; Peng, Zhigang; Liao, Shu; Shinagawa, Yoshihisa; Zhan, Yiqiang; Hermosillo, Gerardo; Zhou, Xiang Sean

    2014-03-01

    Automatic and precise segmentation of hand bones is important for many medical imaging applications. Although several previous studies address bone segmentation, automatically segmenting articulated hand bones remains a challenging task. The highly articulated nature of hand bones limits the effectiveness of atlas-based segmentation methods. The use of low-level information derived from the image of interest alone is insufficient for detecting bones and distinguishing boundaries of different bones that are in close proximity to each other. In this study, we propose a method that combines an articulated statistical shape model and a local exemplar-based appearance model for automatically segmenting hand bones in CT. Our approach is to perform a hierarchical articulated shape deformation that is driven by a set of local exemplar-based appearance models. Specifically, for each point in the shape model, the local appearance model is described by a set of profiles of low-level image features along the normal of the shape. During segmentation, each point in the shape model is deformed to a new point whose image features are closest to the appearance model. The shape model is also constrained by an articulation model described by a set of pre-determined landmarks on the finger joints. In this way, the deformation is robust to sporadic false bony edges and is able to fit fingers with large articulations. We validated our method on 23 CT scans and achieved a segmentation success rate of ~89.70%. This result indicates that our method is viable for automatic segmentation of articulated hand bones in conventional CT.
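    The per-point matching step, deforming each shape point to the position whose profile best fits the appearance model, can be illustrated in one dimension. The SSD criterion and all names below are our simplification of the paper's exemplar-based appearance model.

```python
import numpy as np

def best_profile_shift(sampled, model, max_shift=5):
    """Hypothetical sketch of the local appearance matching step: slide
    the learned model profile along the intensity profile sampled on the
    shape normal and return the displacement with the smallest sum of
    squared differences (SSD). Assumes
    len(sampled) >= len(model) + 2 * max_shift."""
    best, best_cost = 0, np.inf
    half = len(model) // 2
    center = len(sampled) // 2
    for s in range(-max_shift, max_shift + 1):
        seg = sampled[center + s - half : center + s + half + 1]
        cost = np.sum((seg - model) ** 2)
        if cost < best_cost:
            best, best_cost = s, cost
    return best
```

    In the full method this displacement search runs for every shape point, with the articulation model constraining the aggregate deformation.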

  17. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328

  18. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection.

    PubMed

    Doshi, Jimit; Erus, Guray; Ou, Yangming; Resnick, Susan M; Gur, Ruben C; Gur, Raquel E; Satterthwaite, Theodore D; Furth, Susan; Davatzikos, Christos

    2016-02-15

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images.
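    The locally weighted fusion idea can be caricatured in a few lines. The snippet below is a deliberately simplified similarity-weighted vote of our own construction, not the MUSE implementation: each atlas votes for its label at every voxel, weighted by a per-voxel similarity map.

```python
import numpy as np

def locally_weighted_fusion(atlas_labels, atlas_sims):
    """Similarity-weighted label fusion sketch: each atlas votes for its
    label at every voxel, weighted by a per-voxel similarity map. A
    simplification of MUSE's local ranking + fusion, not its code."""
    labels = np.stack(atlas_labels)   # (n_atlases, ...) integer label maps
    sims = np.stack(atlas_sims)       # (n_atlases, ...) similarity weights
    classes = np.unique(labels)
    # accumulate weighted votes per class, then take the arg-max class
    votes = np.stack([np.where(labels == c, sims, 0.0).sum(axis=0) for c in classes])
    return classes[np.argmax(votes, axis=0)]
```

    MUSE additionally draws its ensemble from multiple warping algorithms and parameter sets and adds a boundary-modulation term, which this sketch omits.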

  19. Anatomy atlases.

    PubMed

    Rosse, C

    1999-01-01

    Anatomy atlases are unlike other knowledge sources in the health sciences in that they communicate knowledge through annotated images without the support of narrative text. An analysis of the knowledge component represented by images and the history of anatomy atlases suggest some distinctions that should be made between atlas and textbook illustrations. Textbook and atlas should synergistically promote the generation of a mental model of anatomy. The objective of such a model is to support anatomical reasoning and thereby replace memorization of anatomical facts. Criteria are suggested for selecting anatomy texts and atlases that complement one another, and the advantages and disadvantages of hard copy and computer-based anatomy atlases are considered.

  20. Multi-color space threshold segmentation and self-learning k-NN algorithm for surge test EUT status identification

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Liu, Gui-xiong

    2016-09-01

    The targets to be identified vary between surge tests, and feature-matching approaches to equipment status identification required training new patterns before every test. We therefore proposed a multi-color space threshold segmentation and self-learning k-nearest neighbor (k-NN) algorithm for identifying the status of equipment under test (EUT). First, the color space used for segmentation (L*a*b*, hue saturation lightness (HSL), or hue saturation value (HSV)) is selected according to the high-luminance and white-luminance point ratios of the image. Second, an unknown sample S_r is classified by the k-NN algorithm against the training set T_z, using a feature vector formed from the number of pixels, eccentricity ratio, compactness ratio, and Euler number. Finally, when the classification confidence coefficient equals k, S_r is added to the pre-training set T_z'; once T_z' is saturated, the training set grows from T_z to T_{z+1}. On nine series of illuminant, indicator light, screen, and disturbance samples (21,600 frames in total), the algorithm achieved 98.65% identification accuracy and autonomously enlarged the training set over five rounds, from T_0 to T_5.
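    The self-learning rule, admit a sample into the training set only when the classification confidence equals k (i.e., all k nearest neighbors agree), can be sketched as follows. Feature extraction is omitted and all names are ours.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-NN vote; returns (label, number of agreeing neighbours)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = y_train[np.argsort(d)[:k]]
    vals, counts = np.unique(nn, return_counts=True)
    i = np.argmax(counts)
    return vals[i], counts[i]

def self_learning_knn(X_train, y_train, X_stream, k=3):
    """Self-learning rule from the abstract (variable names are ours):
    a sample joins the training set only when the confidence equals k,
    i.e. all k nearest neighbours agree on its label."""
    for x in X_stream:
        label, conf = knn_predict(X_train, y_train, x, k)
        if conf == k:                      # unanimous vote -> trusted sample
            X_train = np.vstack([X_train, x])
            y_train = np.append(y_train, label)
    return X_train, y_train
```

    The unanimity condition is what keeps the growing training set from being polluted by ambiguous samples.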

  1. A 2D driven 3D vessel segmentation algorithm for 3D digital subtraction angiography data

    NASA Astrophysics Data System (ADS)

    Spiegel, M.; Redel, T.; Struffert, T.; Hornegger, J.; Doerfler, A.

    2011-10-01

    Cerebrovascular disease is among the leading causes of death in western industrial nations. 3D rotational angiography delivers indispensable information on vessel morphology and pathology. Physicians make use of this to analyze vessel geometry in detail, e.g. vessel diameters and the location and size of aneurysms, to reach a clinical decision. 3D segmentation is a crucial step in this pipeline. Although a lot of different methods are available nowadays, all of them lack a method to validate the results for the individual patient. Therefore, we propose a novel 2D digital subtraction angiography (DSA)-driven 3D vessel segmentation and validation framework. 2D DSA projections are clinically considered the gold standard when it comes to measurements of vessel diameter or the neck size of aneurysms. An ellipsoid vessel model is applied to deliver the initial 3D segmentation. To assess the accuracy of the 3D vessel segmentation, its forward projections are iteratively overlaid with the corresponding 2D DSA projections. Local vessel discrepancies are modeled by a global 2D/3D optimization function to adjust the 3D vessel segmentation toward the 2D vessel contours. Our framework has been evaluated on phantom data as well as on ten patient datasets. Three 2D DSA projections from varying viewing angles have been used for each dataset. The novel 2D driven 3D vessel segmentation approach shows superior results against state-of-the-art segmentations like region growing, i.e. an improvement of 7.2 percentage points in precision and 5.8 percentage points in the Dice coefficient. This method opens up future clinical applications requiring the greatest vessel accuracy, e.g. computational fluid dynamic modeling.
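    The quoted gains are in terms of standard overlap metrics; on boolean masks these reduce to a few lines (assuming non-empty masks):

```python
import numpy as np

def overlap_metrics(seg, ref):
    """Sensitivity, precision, and Dice coefficient for boolean masks,
    as quoted in segmentation evaluations (assumes non-empty masks)."""
    tp = np.logical_and(seg, ref).sum()    # true-positive voxels
    sensitivity = tp / ref.sum()           # fraction of reference recovered
    precision = tp / seg.sum()             # fraction of segmentation correct
    dice = 2.0 * tp / (seg.sum() + ref.sum())
    return sensitivity, precision, dice
```

    The Dice coefficient is the harmonic mean of sensitivity and precision, which is why it is often reported alongside them.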

  2. Comparative evaluation of a novel 3D segmentation algorithm on in-treatment radiotherapy cone beam CT images

    NASA Astrophysics Data System (ADS)

    Price, Gareth; Moore, Chris

    2007-03-01

    Image segmentation and delineation is at the heart of modern radiotherapy, where the aim is to deliver as high a radiation dose as possible to a cancerous target whilst sparing the surrounding healthy tissues. This, of course, requires that a radiation oncologist dictates both where the tumour and any nearby critical organs are located. As well as in treatment planning, delineation is of vital importance in image guided radiotherapy (IGRT): organ motion studies demand that features across image databases are accurately segmented, whilst if on-line adaptive IGRT is to become a reality, speedy and correct target identification is a necessity. Recently, much work has been put into the development of automatic and semi-automatic segmentation tools, often using prior knowledge to constrain some grey level, or derivative thereof, interrogation algorithm. It is hoped that such techniques can be applied to organ at risk and tumour segmentation in radiotherapy. In this work, however, we make the assumption that grey levels do not necessarily determine a tumour's extent, especially in CT where the attenuation coefficient can often vary little between cancerous and normal tissue. In this context we present an algorithm that generates a discontinuity free delineation surface driven by user placed, evidence based support points. In regions of sparse user supplied information, prior knowledge, in the form of a statistical shape model, provides guidance. A small case study is used to illustrate the method. Multiple observers (between 3 and 7) used both the presented tool and a commercial manual contouring package to delineate the bladder on a serially imaged (10 cone beam CT volumes) prostate patient. A previously presented shape analysis technique is used to quantitatively compare the observer variability.

  3. Does the Location of Bruch's Membrane Opening Change Over Time? Longitudinal Analysis Using San Diego Automated Layer Segmentation Algorithm (SALSA).

    PubMed

    Belghith, Akram; Bowd, Christopher; Medeiros, Felipe A; Hammel, Naama; Yang, Zhiyong; Weinreb, Robert N; Zangwill, Linda M

    2016-02-01

    We determined whether the Bruch's membrane opening (BMO) location changes over time in healthy eyes and eyes with progressing glaucoma, and validated an automated segmentation algorithm for identifying the BMO in Cirrus high-definition optical coherence tomography (HD-OCT) images. We followed 95 eyes (35 progressing glaucoma and 60 healthy) for an average of 3.7 ± 1.1 years. A stable group of 50 eyes had repeated tests over a short period. In each B-scan of the stable group, the BMO points were delineated manually and automatically to assess the reproducibility of both segmentation methods. Moreover, the BMO location variation over time was assessed longitudinally on the aligned images in 3D space point by point in x, y, and z directions. Mean visual field mean deviation at baseline of the progressing glaucoma group was -7.7 dB. Mixed-effects models revealed small nonsignificant changes in BMO location over time for all directions in healthy eyes (the smallest P value was 0.39) and in the progressing glaucoma eyes (the smallest P value was 0.30). In the stable group, the overall intervisit-intraclass correlation coefficient (ICC) and coefficient of variation (CV) were 98.4% and 2.1%, respectively, for the manual segmentation and 98.1% and 1.9%, respectively, for the automated algorithm. Bruch's membrane opening location was stable in normal and progressing glaucoma eyes with follow-up between 3 and 4 years, indicating that it can be used as a reference point in monitoring glaucoma progression. The BMO location estimation with Cirrus HD-OCT using manual and automated segmentation showed excellent reproducibility.

  4. Phasing the mirror segments of the Keck telescopes: the broadband phasing algorithm.

    PubMed

    Chanan, G; Troy, M; Dekens, F; Michaels, S; Nelson, J; Mast, T; Kirkman, D

    1998-01-01

    To achieve its full diffraction limit in the infrared, the primary mirror of the Keck telescope (now telescopes) must be properly phased: The steps or piston errors between the individual mirror segments must be reduced to less than 100 nm. We accomplish this with a wave optics variation of the Shack-Hartmann test, in which the signal is not the centroid but rather the degree of coherence of the individual subimages. Using filters with a variety of coherence lengths, we can capture segments with initial piston errors as large as ±30 µm and reduce these to 30 nm--a dynamic range of 3 orders of magnitude. Segment aberrations contribute substantially to the residual errors of approximately 75 nm.

  5. Layer stacking: A novel algorithm for individual forest tree segmentation from LiDAR point clouds

    Treesearch

    Elias Ayrey; Shawn Fraver; John A. Kershaw; Laura S. Kenefic; Daniel Hayes; Aaron R. Weiskittel; Brian E. Roth

    2017-01-01

    As light detection and ranging (LiDAR) technology advances, it has become common for datasets to be acquired at a point density high enough to capture structural information from individual trees. To process these data, an automatic method of isolating individual trees from a LiDAR point cloud is required. Traditional methods for segmenting trees attempt to isolate...

  6. Spatial Patterns of Trees from Airborne LiDAR Using a Simple Tree Segmentation Algorithm

    NASA Astrophysics Data System (ADS)

    Jeronimo, S.; Kane, V. R.; McGaughey, R. J.; Franklin, J. F.

    2015-12-01

    Objectives for management of forest ecosystems on public land incorporate a focus on maintenance and restoration of ecological functions through silvicultural manipulation of forest structure. The spatial pattern of residual trees - the horizontal element of structure - is a key component of ecological restoration prescriptions. We tested the ability of a simple LiDAR individual tree segmentation method - the watershed transform - to generate spatial pattern metrics similar to those obtained by the traditional method - ground-based stem mapping - on forested plots representing the structural diversity of a large wilderness area (Yosemite NP) and a large managed area (Sierra NF) in the Sierra Nevada, Calif. Most understory and intermediate-canopy trees were not detected by the LiDAR segmentation; however, LiDAR- and field-based assessments of spatial pattern in terms of tree clump size distributions largely agreed. This suggests that (1) even when individual tree segmentation is not effective for tree density estimates, it can provide a good measurement of tree spatial pattern, and (2) a simple segmentation method is adequate to measure spatial pattern of large areas with a diversity of structural characteristics. These results lay the groundwork for a LiDAR tool to assess clumping patterns across forest landscapes in support of restoration silviculture. This tool could describe spatial patterns of functionally intact reference ecosystems, measure departure from reference targets in treatment areas, and, with successive acquisitions, monitor treatment efficacy.
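    A minimal crown delineation in the spirit of the watershed transform on a canopy height model (CHM) can be sketched as below. The tallest-first flooding, window size, and `min_height` value are our simplifications, not the authors' exact processing chain.

```python
import numpy as np
from scipy import ndimage

def delineate_crowns(chm, min_height=2.0):
    """Watershed-style crown delineation on a canopy height model (CHM).
    Treetops are local maxima above min_height; the remaining pixels are
    visited tallest-first and inherit the label of their highest
    already-labelled neighbour. A simplification of the watershed
    transform; window size and min_height are our assumptions."""
    peaks = (chm == ndimage.maximum_filter(chm, size=3)) & (chm > min_height)
    labels, _ = ndimage.label(peaks)
    h, w = chm.shape
    for idx in np.argsort(chm, axis=None)[::-1]:       # tallest first
        y, x = divmod(int(idx), w)
        if chm[y, x] <= min_height or labels[y, x]:
            continue
        best, best_h = 0, -np.inf
        for dy in (-1, 0, 1):                          # 8-connected neighbours
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] > 0 and chm[ny, nx] > best_h:
                    best, best_h = labels[ny, nx], chm[ny, nx]
        labels[y, x] = best
    return labels
```

    As the abstract notes, such segmentations miss most understory trees, yet the crown labels they do produce are enough to compute spatial-pattern metrics such as clump size distributions.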

  7. Evaluation of Image Segmentation and Object Recognition Algorithms for Image Parsing

    DTIC Science & Technology

    2013-09-01

    ... results for precision, recall, and F-measure indicate that the best approach to use for image segmentation is Sobel edge detection and to use Canny ... or Sobel for object recognition. The process for this report would not work for a warfighter or analyst. It has poor performance. Additionally ...

  8. Localization and segmentation of optic disc in retinal images using circular Hough transform and grow-cut algorithm.

    PubMed

    Abdullah, Muhammad; Fraz, Muhammad Moazam; Barman, Sarah A

    2016-01-01

    Automated retinal image analysis has been emerging as an important diagnostic tool for early detection of eye-related diseases such as glaucoma and diabetic retinopathy. In this paper, we present a robust methodology for optic disc detection and boundary segmentation, which can be seen as the preliminary step in the development of a computer-assisted diagnostic system for glaucoma in retinal images. The proposed method is based on morphological operations, the circular Hough transform and the grow-cut algorithm. The morphological operators are used to enhance the optic disc and remove the retinal vasculature and other pathologies. The optic disc center is approximated using the circular Hough transform, and the grow-cut algorithm is employed to precisely segment the optic disc boundary. The method is quantitatively evaluated on six publicly available retinal image databases (DRIVE, DIARETDB1, CHASE_DB1, DRIONS-DB, Messidor, and ONHSD) and one local Shifa Hospital database. The method achieves an optic disc detection success rate of 100% for these databases, with the exception of 99.09% for DRIONS-DB and 99.25% for Messidor. The optic disc boundary detection achieved average spatial overlaps of 78.6%, 85.12%, 83.23%, 85.1%, 87.93%, 80.1%, and 86.1%, respectively, for these databases. This unique method has shown significant improvement over existing methods in terms of detection and boundary extraction of the optic disc.
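    The center-approximation step rests on the circular Hough transform; a toy single-radius accumulator (our minimal version, not the authors' implementation, which also searches over radii) looks like:

```python
import numpy as np

def circular_hough_center(edge_mask, radius, n_angles=100):
    """Single-radius circular Hough accumulator: every edge pixel votes
    for all centres lying `radius` away from it; the accumulator peak is
    the most supported disc centre."""
    h, w = edge_mask.shape
    acc = np.zeros((h, w))
    ys, xs = np.nonzero(edge_mask)
    for t in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        cy = np.round(ys - radius * np.sin(t)).astype(int)
        cx = np.round(xs - radius * np.cos(t)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)            # accumulate votes
    return np.unravel_index(np.argmax(acc), acc.shape)
```

    In the full method this estimate only seeds the grow-cut stage, which then recovers the exact, non-circular disc boundary.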

  9. Localization and segmentation of optic disc in retinal images using circular Hough transform and grow-cut algorithm

    PubMed Central

    Abdullah, Muhammad; Barman, Sarah A.

    2016-01-01

    Automated retinal image analysis has been emerging as an important diagnostic tool for early detection of eye-related diseases such as glaucoma and diabetic retinopathy. In this paper, we present a robust methodology for optic disc detection and boundary segmentation, which can be seen as the preliminary step in the development of a computer-assisted diagnostic system for glaucoma in retinal images. The proposed method is based on morphological operations, the circular Hough transform and the grow-cut algorithm. The morphological operators are used to enhance the optic disc and remove the retinal vasculature and other pathologies. The optic disc center is approximated using the circular Hough transform, and the grow-cut algorithm is employed to precisely segment the optic disc boundary. The method is quantitatively evaluated on six publicly available retinal image databases (DRIVE, DIARETDB1, CHASE_DB1, DRIONS-DB, Messidor, and ONHSD) and one local Shifa Hospital database. The method achieves an optic disc detection success rate of 100% for these databases, with the exception of 99.09% for DRIONS-DB and 99.25% for Messidor. The optic disc boundary detection achieved average spatial overlaps of 78.6%, 85.12%, 83.23%, 85.1%, 87.93%, 80.1%, and 86.1%, respectively, for these databases. This unique method has shown significant improvement over existing methods in terms of detection and boundary extraction of the optic disc. PMID:27190713

  10. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets for qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts that occlude and distort the information to be extracted from an image. Robustness, i.e., the quality of an algorithm in relation to the amount of distortion, is often important. However, with available benchmark data sets, evaluating illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package, but can easily be replaced to emphasize other aspects.

  11. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification

    PubMed Central

    Khan, Arif ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets for qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts that occlude and distort the information to be extracted from an image. Robustness, i.e., the quality of an algorithm in relation to the amount of distortion, is often important. However, with available benchmark data sets, evaluating illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package, but can easily be replaced to emphasize other aspects. PMID:26191792

  12. Robust approximation of image illumination direction in a segmentation-based crater detection algorithm for spacecraft navigation

    NASA Astrophysics Data System (ADS)

    Maass, Bolko

    2016-12-01

    This paper describes an efficient and easily implemented algorithmic approach to extracting an approximation to an image's dominant projected illumination direction, based on intermediary results from a segmentation-based crater detection algorithm (CDA), at a computational cost that is negligible in comparison to that of the prior stages of the CDA. Most contemporary CDAs built for spacecraft navigation use this illumination direction as a means of improving performance or even require it to function at all. Deducing the illumination vector from the image alone reduces the reliance on external information such as the accurate knowledge of the spacecraft inertial state, accurate time base and solar system ephemerides. Therefore, a method such as the one described in this paper is a prerequisite for true "Lost in Space" operation of a purely segmentation-based crater detecting and matching method for spacecraft navigation. The proposed method is verified using ray-traced lunar elevation model data, asteroid image data, and in a laboratory setting with a camera in the loop.
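    For illustration only, a crude way to approximate a dominant projected-illumination azimuth from the image alone is to average the intensity gradient. This is a generic heuristic of ours, not the segmentation-based method the paper describes, and it breaks down on scenes without consistent shading.

```python
import numpy as np

def illumination_azimuth(img):
    """Crude projected-illumination estimate: shading makes intensity
    increase toward the light, so the summed image gradient points
    (roughly) at the illumination azimuth. A generic heuristic, not the
    paper's CDA-based approach."""
    gy, gx = np.gradient(img.astype(float))
    return np.arctan2(gy.sum(), gx.sum())              # radians, image frame
```

    The paper's approach instead reuses intermediary results of the crater detector, which is what makes its cost negligible and its output robust on cratered terrain.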

  13. Liver anatomy.

    PubMed

    Abdel-Misih, Sherif R Z; Bloomston, Mark

    2010-08-01

    Understanding the complexities of the liver has been a long-standing challenge to physicians and anatomists. Significant strides in the understanding of hepatic anatomy have facilitated major progress in liver-directed therapies--surgical interventions, such as transplantation, hepatic resection, hepatic artery infusion pumps, and hepatic ablation, and interventional radiologic procedures, such as transarterial chemoembolization, selective internal radiation therapy, and portal vein embolization. Without understanding hepatic anatomy, such progressive interventions would not be feasible. This article reviews the history, general anatomy, and the classification schemes of liver anatomy and their relevance to liver-directed therapies. Copyright 2010 Elsevier Inc. All rights reserved.

  14. Integer anatomy

    SciTech Connect

    Doolittle, R.

    1994-11-15

    The title, integer anatomy, is intended to convey the idea of a systematic method for displaying the prime decomposition of the integers. Just as the biological study of anatomy does not teach us everything about the behavior of species, neither should we expect to learn everything about number theory from a study of its anatomy. But some number-theoretic theorems are illustrated by inspection of integer anatomy, which tends to validate the underlying structure and the form as developed and displayed in this treatise. The first statement to be made in this development is: the way the structure of the natural numbers is displayed depends upon the allowed operations.
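    The "anatomy" being displayed is the prime decomposition; a minimal sketch of computing it by trial division:

```python
def prime_anatomy(n):
    """Prime decomposition of a positive integer by trial division,
    returned as {prime: exponent}, e.g. 360 -> {2: 3, 3: 2, 5: 1}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:               # strip out each factor of d
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                           # remaining n is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors
```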

  15. An automated image segmentation and classification algorithm for immunohistochemically stained tumor cell nuclei

    NASA Astrophysics Data System (ADS)

    Yeo, Hangu; Sheinin, Vadim; Sheinin, Yuri

    2009-02-01

    As medical image data sets are digitized and their number increases exponentially, there is a need for automated image processing and analysis techniques. Most medical imaging methods require human visual inspection and manual measurement, which are labor intensive and often produce inconsistent results. In this paper, we propose an automated image segmentation and classification method that identifies tumor cell nuclei in medical images and classifies them into two categories: stained and unstained tumor cell nuclei. The proposed method segments and labels individual tumor cell nuclei, separates nuclei clusters, and produces stained and unstained tumor cell nuclei counts. The representative fields of view were chosen by a pathologist from a known diagnosis (clear cell renal cell carcinoma), and the automated results are compared with the pathologist's hand counts.
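    A toy version of the label-and-count step might look as follows. The thresholds and the brightness-based staining criterion are our placeholder assumptions; the paper's segmentation and cluster-separation stages are more involved.

```python
import numpy as np
from scipy import ndimage

def count_nuclei(img, fg_thresh, stain_thresh):
    """Toy stained/unstained nucleus counting: label connected blobs
    above fg_thresh, then call a blob 'stained' when its mean intensity
    exceeds stain_thresh. Both thresholds are hypothetical placeholders
    for the paper's segmentation and classification stages."""
    lbl, n = ndimage.label(img > fg_thresh)            # connected blobs
    means = ndimage.mean(img, lbl, range(1, n + 1)) if n else []
    stained = int(sum(m > stain_thresh for m in means))
    return stained, n - stained                        # (stained, unstained)
```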

  16. A sport scene images segmentation method based on edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Biqing

    2011-12-01

    This paper proposes a simple, fast segmentation method for sports scene images. Much prior work has sought ways to reduce the effect of varying shading in smooth regions, and a novel preprocessing method is proposed here to eliminate these shading variations. An internal filling mechanism is used to relabel pixels enclosed by regions of interest as interest pixels. Tests on sports scene images confirm the effectiveness of the method.

  17. Applying the algorithm "assessing quality using image registration circuits" (AQUIRC) to multi-atlas segmentation

    NASA Astrophysics Data System (ADS)

    Datteri, Ryan; Asman, Andrew J.; Landman, Bennett A.; Dawant, Benoit M.

    2014-03-01

    Multi-atlas registration-based segmentation is a popular technique in the medical imaging community, used to transform anatomical and functional information from a set of atlases onto a new patient that lacks this information. The accuracy of the projected information on the target image depends on the quality of the registrations between the atlas images and the target image. Recently, we developed a technique called AQUIRC that aims at estimating the error of a non-rigid registration at the local level and was shown to correlate with error in a simulated case. Herein, we extend this work by applying AQUIRC to atlas selection at the local level across multiple structures in cases in which non-rigid registration is difficult. AQUIRC is applied to 6 structures: the brainstem, optic chiasm, left and right optic nerves, and the left and right eyes. We compare the results of AQUIRC to those of popular techniques, including Majority Vote, STAPLE, Non-Local STAPLE, and Locally-Weighted Vote. We show that AQUIRC can be used as a method to combine multiple segmentations and increase the accuracy of the projected information on a target image, and is comparable to cutting-edge methods in the multi-atlas segmentation field.

  18. An effective immune multi-objective algorithm for SAR imagery segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Dongdong; Jiao, Licheng; Gong, Maoguo; Si, Xiaoyun; Li, Jinji; Feng, Jie

    2009-10-01

    A novel and effective immune multi-objective clustering algorithm (IMCA) is presented in this study. Two conflicting and complementary objectives, the compactness and connectedness of clusters, are employed as optimization targets. The algorithm also features adaptive rank cloning, variable-length chromosome crossover, and a k-nearest-neighbor-list-based diversity preservation strategy. IMCA can automatically discover the right number of clusters with high probability. Seven complicated artificial data sets and two widely used synthetic aperture radar (SAR) images are used to test IMCA. Compared with FCM and VGA, IMCA obtained good and encouraging clustering results on all nine problems, and we believe it is an effective algorithm that deserves further research.

  19. Intracranial Arteries - Anatomy and Collaterals.

    PubMed

    Liebeskind, David S; Caplan, Louis R

    2016-01-01

    Anatomy, physiology, and pathophysiology are inextricably linked in patients with intracranial atherosclerosis. Knowledge of abnormal or pathological conditions such as intracranial atherosclerosis stems from detailed recognition of the normal pattern of vascular anatomy. The vascular anatomy of the intracranial arteries, both at the level of the vessel wall and as a larger structure or conduit, is a reflection of physiology over time, from in utero stages through adult life. The unique characteristics of arteries at the base of the brain may help our understanding of atherosclerotic lesions that tend to afflict specific arterial segments. Although much of the knowledge regarding intracranial arteries originates from pathology and angiography series over several centuries, evolving noninvasive techniques have rapidly expanded our perspective. As each imaging modality provides a depiction that combines anatomy and flow physiology, it is important to interpret each image with a solid understanding of typical arterial anatomy and corresponding collateral routes. Compensatory collateral perfusion and downstream flow status have recently emerged as pivotal variables in the clinical management of patients with atherosclerosis. Ongoing studies that illustrate the anatomy and pathophysiology of these proximal arterial segments across modalities will help refine our knowledge of the interplay between vascular anatomy and cerebral blood flow. Future studies may help elucidate pivotal arterial factors far beyond the degree of stenosis, examining downstream influences on cerebral perfusion, artery-to-artery thromboembolic potential, amenability to endovascular therapies and stent conformation, and the propensity for restenosis due to biophysical factors. © 2016 S. Karger AG, Basel.

  20. Comparative assessment of segmentation algorithms for tumor delineation on a test-retest [(11)C]choline dataset.

    PubMed

    Tomasi, Giampaolo; Shepherd, Tony; Turkheimer, Federico; Visvikis, Dimitris; Aboagye, Eric

    2012-12-01

    Many methods have been proposed for tumor segmentation from positron emission tomography images. Because of the increasingly important role that [(11)C]choline is playing in oncology and because no study has compared segmentation methods on this tracer, the authors assessed several segmentation algorithms on a [(11)C]choline test-retest dataset. Fixed and adaptive threshold-based methods, fuzzy C-means (FCM), Canny's edge detection method, the watershed transform, and the fuzzy locally adaptive Bayesian algorithm (FLAB) were used. Test-retest [(11)C]choline scans of nine patients with breast cancer were considered and the percent test-retest variability %VAR(TEST-RETEST) of tumor volume (TV) was employed to assess the results. The same methods were then applied to two denoised datasets generated by applying either a Gaussian filter or the wavelet transform. The (semi)automated methods FCM, FLAB, and Canny emerged as the best ones in terms of TV reproducibility. For these methods, the percent root mean square error %RMSE of %VAR(TEST-RETEST), defined as %RMSE = √(variance + mean²), was in the range 10%-21.2%, depending on the dataset and algorithm. Threshold-based methods gave TV estimates which were extremely variable, particularly on the unsmoothed data; their performance improved on the denoised datasets, whereas smoothing did not have a remarkable impact on the (semi)automated methods. TV variability was comparable to that of SUV(MAX) and SUV(MEAN) (range 14.7%-21.9% for %RMSE of %VAR(TEST-RETEST), after the exclusion of one outlier, 40%-43% when the outlier was included). The TV variability obtained with the best methods was similar to the one reported for TV in previous [(18)F]FDG and [(18)F]FLT studies and to the one of SUV(MAX)∕SUV(MEAN) on the authors' [(11)C]choline dataset. The good reproducibility of [(11)C]choline TV warrants further studies to test whether TV could predict early response to treatment and survival, as for [(18)F]FDG, to complement
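The reproducibility metric above can be reproduced on toy numbers. The sketch below uses hypothetical tumour volumes and assumes %VAR is the absolute test-retest difference relative to the pairwise mean, together with the %RMSE = √(variance + mean²) definition from the abstract:

```python
import math

def pct_var(test, retest):
    """Percent test-retest variability per patient: absolute difference
    relative to the pairwise mean (assumed definition)."""
    return [100 * abs(t - r) / ((t + r) / 2) for t, r in zip(test, retest)]

def pct_rmse(values):
    """%RMSE of the per-patient %VAR values: sqrt(variance + mean^2)."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(var + mean ** 2)

# hypothetical tumour volumes (mL) for three patients, two scans each
tv_test, tv_retest = [10.0, 22.0, 35.0], [11.0, 20.0, 36.0]
print(round(pct_rmse(pct_var(tv_test, tv_retest)), 2))  # -> 7.94
```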

  1. Region growing segmentation of ultrasound images using gradients and local statistics

    NASA Astrophysics Data System (ADS)

    Mercado-Aguirre, Isabela M.; Patiño-Vanegas, Alberto; Contreras-Ortiz, Sonia H.

    2017-03-01

    This paper describes a region growing segmentation algorithm for medical ultrasound images. The algorithm starts with anisotropic diffusion filtering to reduce speckle noise without blurring the edges. Then, region growing is performed starting from a seed point, using a merging criterion that compares intensity gradients to the noise level inside the region. Finally, the boundaries are smoothed using morphological closing. The algorithm was evaluated with two simulated images and eleven phantom images and converged in 10 of them with accurate region delimitation. Preliminary results show that the proposed method can be used for ultrasound image segmentation and does not require previous knowledge of the anatomy of the structures.
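The growth criterion described above can be sketched as follows; the tolerance factor `k` and the `noise_floor` bootstrap are illustrative assumptions rather than values from the paper, and the anisotropic diffusion and morphological closing stages are omitted:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, k=2.0, noise_floor=3.0):
    """Region-growing sketch: a 4-connected neighbour joins the region when
    the intensity step toward it does not exceed k times the noise level
    (intensity std) measured inside the region so far."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    vals = [float(img[seed])]
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                grad = abs(float(img[ny, nx]) - float(img[y, x]))
                noise = max(float(np.std(vals)), noise_floor)
                if grad <= k * noise:
                    mask[ny, nx] = True
                    vals.append(float(img[ny, nx]))
                    queue.append((ny, nx))
    return mask

img = np.zeros((6, 6))
img[1:4, 1:4] = 100.0          # bright structure on a dark background
grown = region_grow(img, seed=(2, 2))
print(int(grown.sum()))        # -> 9 (only the bright 3x3 block)
```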

  2. The backtracking search optimization algorithm for frequency band and time segment selection in motor imagery-based brain-computer interfaces.

    PubMed

    Wei, Zhonghai; Wei, Qingguo

    2016-09-01

    Common spatial pattern (CSP) is a powerful algorithm for extracting discriminative brain patterns in motor imagery-based brain-computer interfaces (BCIs). However, its performance depends largely on the subject-specific frequency band and time segment, and accurate selection of the most responsive frequency band and time segment remains a crucial problem. A novel evolutionary algorithm, the backtracking search optimization algorithm, is used to find the optimal frequency band and the optimal combination of frequency band and time segment. The former is searched by a frequency window of changing width whose starting and ending points are selected by the backtracking optimization algorithm; the latter is searched by the same frequency window and an additional time window of fixed width. The three parameters, the starting and ending points of the frequency window and the starting point of the time window, are jointly optimized by the backtracking search optimization algorithm. Based on the chosen frequency band and fixed or chosen time segment, feature extraction is conducted by CSP and subsequent classification is carried out by Fisher discriminant analysis. The classification error rate is used as the objective function of the backtracking search optimization algorithm. The two methods, named BSA-F CSP and BSA-FT CSP, were evaluated on a data set from a BCI competition and compared with traditional wideband (8-30 Hz) CSP. The classification results showed that the backtracking search optimization algorithm can find a much more effective frequency band for EEG preprocessing than the traditional broadband, substantially enhancing CSP performance in terms of classification accuracy. Moreover, joint selection of frequency band and time segment can find their optimal combination, further improving classification rates.
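The CSP step that the backtracking search wraps can be sketched via joint diagonalization of the two class covariances. This is a generic CSP implementation on synthetic trials, not the authors' code, and the BSA search loop itself is omitted:

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Generic CSP: spatial filters that maximize the variance ratio
    between two classes of EEG trials shaped (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    evals, evecs = np.linalg.eigh(ca + cb)
    p = evecs @ np.diag(evals ** -0.5) @ evecs.T   # whitening transform
    d, u = np.linalg.eigh(p @ ca @ p.T)            # diagonalize class A
    w = u.T @ p                                    # rows sorted by eigenvalue
    return np.vstack([w[:n_pairs], w[-n_pairs:]])  # most discriminative pair(s)

# hypothetical band-passed trials: class A strong in channel 0, class B in channel 1
rng = np.random.default_rng(0)
trials_a = rng.standard_normal((10, 2, 100)) * np.array([3.0, 1.0]).reshape(1, 2, 1)
trials_b = rng.standard_normal((10, 2, 100)) * np.array([1.0, 3.0]).reshape(1, 2, 1)
w = csp_filters(trials_a, trials_b)
print(w.shape)  # -> (2, 2)
```

In the paper's pipeline, the BSA candidate (frequency window, time window) determines which filtered samples enter `trials_a`/`trials_b`, and the downstream classification error is fed back as the objective.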

  3. Aerosol Plume Detection Algorithm Based on Image Segmentation of Scanning Atmospheric Lidar Data

    DOE PAGES

    Weekley, R. Andrew; Goodrich, R. Kent; Cornman, Larry B.

    2016-04-06

    An image-processing algorithm has been developed to identify aerosol plumes in scanning lidar backscatter data. The images in this case consist of lidar data in a polar coordinate system. Each full lidar scan is taken as a fixed image in time, and sequences of such scans are considered functions of time. The data are analyzed in both the original backscatter polar coordinate system and a lagged coordinate system. The lagged coordinate system is a scatterplot of two datasets, such as subregions taken from the same lidar scan (spatial delay), or two sequential scans in time (time delay). The lagged coordinate system processing allows for finding and classifying clusters of data. The classification step is important in determining which clusters are valid aerosol plumes and which are from artifacts such as noise, hard targets, or background fields. These cluster classification techniques have skill since both local and global properties are used. Furthermore, more information is available since both the original data and the lag data are used. Performance statistics are presented for a limited set of data processed by the algorithm, where results from the algorithm were compared to subjective truth data identified by a human.

  4. Aerosol Plume Detection Algorithm Based on Image Segmentation of Scanning Atmospheric Lidar Data

    SciTech Connect

    Weekley, R. Andrew; Goodrich, R. Kent; Cornman, Larry B.

    2016-04-06

    An image-processing algorithm has been developed to identify aerosol plumes in scanning lidar backscatter data. The images in this case consist of lidar data in a polar coordinate system. Each full lidar scan is taken as a fixed image in time, and sequences of such scans are considered functions of time. The data are analyzed in both the original backscatter polar coordinate system and a lagged coordinate system. The lagged coordinate system is a scatterplot of two datasets, such as subregions taken from the same lidar scan (spatial delay), or two sequential scans in time (time delay). The lagged coordinate system processing allows for finding and classifying clusters of data. The classification step is important in determining which clusters are valid aerosol plumes and which are from artifacts such as noise, hard targets, or background fields. These cluster classification techniques have skill since both local and global properties are used. Furthermore, more information is available since both the original data and the lag data are used. Performance statistics are presented for a limited set of data processed by the algorithm, where results from the algorithm were compared to subjective truth data identified by a human.

  5. A new algorithm for segmentation of cardiac quiescent phases and cardiac time intervals using seismocardiography

    NASA Astrophysics Data System (ADS)

    Jafari Tadi, Mojtaba; Koivisto, Tero; Pänkäälä, Mikko; Paasio, Ari; Knuutila, Timo; Teräs, Mika; Hänninen, Pekka

    2015-03-01

    Systolic time intervals (STI) have significant diagnostic values for a clinical assessment of the left ventricle in adults. This study was conducted to explore the feasibility of using seismocardiography (SCG) to measure the systolic timings of the cardiac cycle accurately. An algorithm was developed for the automatic localization of the cardiac events (e.g. the opening and closing moments of the aortic and mitral valves). Synchronously acquired SCG and electrocardiography (ECG) enabled an accurate beat to beat estimation of the electromechanical systole (QS2), pre-ejection period (PEP) index and left ventricular ejection time (LVET) index. The performance of the algorithm was evaluated on a healthy test group with no evidence of cardiovascular disease (CVD). STI values were corrected based on Weissler's regression method in order to assess the correlation between the heart rate and STIs. One can see from the results that STIs correlate poorly with the heart rate (HR) on this test group. An algorithm was developed to visualize the quiescent phases of the cardiac cycle. A color map displaying the magnitude of SCG accelerations for multiple heartbeats visualizes the average cardiac motions and thereby helps to identify quiescent phases. High correlation between the heart rate and the duration of the cardiac quiescent phases was observed.
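Given detected event times, the interval arithmetic is straightforward; the helper below uses the standard definitions (PEP = Q-wave onset to aortic valve opening, LVET = opening to closing, QS2 = their sum) with hypothetical timings, not values from the study:

```python
def systolic_intervals(q_onset, aortic_open, aortic_close):
    """Beat-wise systolic time intervals (seconds) from event times:
    Q-wave onset from ECG, aortic valve opening/closing from SCG."""
    pep = aortic_open - q_onset        # pre-ejection period
    lvet = aortic_close - aortic_open  # left ventricular ejection time
    qs2 = aortic_close - q_onset       # electromechanical systole = PEP + LVET
    return pep, lvet, qs2

# hypothetical event times within one beat
pep, lvet, qs2 = systolic_intervals(0.00, 0.09, 0.39)
print(round(pep, 2), round(lvet, 2), round(qs2, 2))  # -> 0.09 0.3 0.39
```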

  6. An algorithm to parse segment packing in predicted protein contact maps.

    PubMed

    Taylor, William R

    2016-01-01

    The analysis of correlation in alignments generates a matrix of predicted contacts between positions in the structure, and while these can arise for many reasons, the simplest explanation is that the pair of residues are in contact in a three-dimensional structure and are affecting each other's selection pressure. To analyse these data, a dynamic programming algorithm was developed for parsing secondary structure interactions in predicted contact maps. The non-local nature of the constraints required an iterated approach (using a "frozen approximation"), but with good starting definitions, a single pass was usually sufficient. The method was shown to be effective when applied to the transmembrane class of protein and error tolerant even when the signal becomes degraded. In the globular class of protein, where the extent of interactions is more limited and more complex, the algorithm still behaved well, classifying most of the important interactions correctly in both a small and a large test case. For the larger protein, this involved examples of the algorithm apportioning parts of a single large secondary structure element between two different interactions. It is expected that the method will be useful as a pre-processor to coarse-grained modelling methods, extending the range of protein tertiary structure prediction to larger proteins or to data that is currently too 'noisy' to be used by current residue-based methods.

  7. Aerosol Plume Detection Algorithm Based on Image Segmentation of Scanning Atmospheric Lidar Data

    SciTech Connect

    Weekley, R. Andrew; Goodrich, R. Kent; Cornman, Larry B.

    2016-04-01

    An image-processing algorithm has been developed to identify aerosol plumes in scanning lidar backscatter data. The images in this case consist of lidar data in a polar coordinate system. Each full lidar scan is taken as a fixed image in time, and sequences of such scans are considered functions of time. The data are analyzed in both the original backscatter polar coordinate system and a lagged coordinate system. The lagged coordinate system is a scatterplot of two datasets, such as subregions taken from the same lidar scan (spatial delay), or two sequential scans in time (time delay). The lagged coordinate system processing allows for finding and classifying clusters of data. The classification step is important in determining which clusters are valid aerosol plumes and which are from artifacts such as noise, hard targets, or background fields. These cluster classification techniques have skill since both local and global properties are used. Furthermore, more information is available since both the original data and the lag data are used. Performance statistics are presented for a limited set of data processed by the algorithm, where results from the algorithm were compared to subjective truth data identified by a human.

  8. Reproducibility of SD-OCT–Based Ganglion Cell–Layer Thickness in Glaucoma Using Two Different Segmentation Algorithms

    PubMed Central

    Garvin, Mona K.; Lee, Kyungmoo; Burns, Trudy L.; Abràmoff, Michael D.; Sonka, Milan; Kwon, Young H.

    2013-01-01

    Purpose. To compare the reproducibility of spectral-domain optical coherence tomography (SD-OCT)–based ganglion cell–layer-plus-inner plexiform–layer (GCL+IPL) thickness measurements for glaucoma patients obtained using both a publicly available and a commercially available algorithm. Methods. Macula SD-OCT volumes (200 × 200 × 1024 voxels, 6 × 6 × 2 mm3) were obtained prospectively from both eyes of patients with open-angle glaucoma or with suspected glaucoma on two separate visits within 4 months. The combined GCL+IPL thickness was computed for each SD-OCT volume within an elliptical annulus centered at the fovea, based on two algorithms: (1) a previously published graph-theoretical layer segmentation approach developed at the University of Iowa, and (2) a ganglion cell analysis module of version 6 of Cirrus software. The mean overall thickness of the elliptical annulus was computed as was the thickness within six sectors. For statistical analyses, eyes with an SD-OCT volume with low signal strength (<6), image acquisition errors, or errors in performing the commercial GCL+IPL analysis in at least one of the repeated acquisitions were excluded. Results. Using 104 eyes (from 56 patients) with repeated measurements, we found the intraclass correlation coefficient for the overall elliptical annular GCL+IPL thickness to be 0.98 (95% confidence interval [CI]: 0.97–0.99) with the Iowa algorithm and 0.95 (95% CI: 0.93–0.97) with the Cirrus algorithm; the intervisit SDs were 1.55 μm (Iowa) and 2.45 μm (Cirrus); and the coefficients of variation were 2.2% (Iowa) and 3.5% (Cirrus), P < 0.0001. Conclusions. SD-OCT–based GCL+IPL thickness measurements in patients with early glaucoma are highly reproducible. PMID:24045993
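The intervisit SD and coefficient of variation reported above follow the usual test-retest formulation (RMS within-subject SD over paired visits). A minimal sketch on hypothetical thickness pairs, not the study data:

```python
import math

def intervisit_sd_cv(visit1, visit2):
    """Intervisit (test-retest) within-subject SD via the standard RMS
    formulation for paired visits, plus the coefficient of variation (%)."""
    n = len(visit1)
    sw2 = sum((a - b) ** 2 for a, b in zip(visit1, visit2)) / (2 * n)
    sd = math.sqrt(sw2)
    grand_mean = (sum(visit1) + sum(visit2)) / (2 * n)
    return sd, 100 * sd / grand_mean

# hypothetical GCL+IPL thicknesses (micrometers) for three eyes at two visits
sd, cv = intervisit_sd_cv([80.0, 75.0, 70.0], [82.0, 74.0, 71.0])
print(round(sd, 2), round(cv, 2))  # -> 1.0 1.33
```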

  9. Implementation of a cellular neural network-based segmentation algorithm on the bio-inspired vision system

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Grassi, Giuseppe; Vecchio, Pietro; Arik, Sabri; Yalcin, M. Erhan

    2011-01-01

    Based on the cellular neural network (CNN) paradigm, the bio-inspired (bi-i) cellular vision system is a computing platform consisting of state-of-the-art sensing, cellular sensing-processing and digital signal processing. This paper presents the implementation of a novel CNN-based segmentation algorithm onto the bi-i system. The experimental results, carried out for different benchmark video sequences, highlight the feasibility of the approach, which provides a frame rate of about 26 frame/sec. Comparisons with existing CNN-based methods show that, even though these methods are from two to six times faster than the proposed one, the conceived approach is more accurate and, consequently, represents a satisfying trade-off between real-time requirements and accuracy.

  10. Retinal image graph-cut segmentation algorithm using multiscale Hessian-enhancement-based nonlocal mean filter.

    PubMed

    Zheng, Jian; Lu, Pei-Rong; Xiang, Dehui; Dai, Ya-Kang; Liu, Zhao-Bang; Kuai, Duo-Jie; Xue, Hui; Yang, Yue-Tao

    2013-01-01

    We propose a new method to enhance and extract the retinal vessels. First, we employ a multiscale Hessian-based filter to compute the maximum response of a vessel likeness function for each pixel. By this step, blood vessels of different widths are significantly enhanced. Then, we adopt a nonlocal mean filter to suppress the noise of the enhanced image while maintaining the vessel information. After that, a radial gradient symmetry transformation is adopted to suppress the nonvessel structures. Finally, an accurate graph-cut segmentation step is performed using the result of the previous symmetry transformation as an initialization. We test the proposed approach on the publicly available DRIVE database. The experimental results show that our method is quite effective.
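A single-scale, Frangi-style vesselness response of the kind computed in the first step can be sketched in 2D. Here `beta` and `c` are illustrative sensitivity parameters, and the multiscale maximum over vessel widths is omitted:

```python
import numpy as np

def hessian_vesselness(img, beta=0.5, c=2.0):
    """Frangi-style 2D vesselness at a single scale: a bright tubular
    structure has one large negative Hessian eigenvalue (across the vessel)
    and one near zero (along it)."""
    gy, gx = np.gradient(img)
    hyy, hyx = np.gradient(gy)
    _, hxx = np.gradient(gx)
    # eigenvalues of the symmetric 2x2 Hessian, ordered so |l1| <= |l2|
    tmp = np.sqrt(((hxx - hyy) / 2) ** 2 + hyx ** 2)
    l1 = (hxx + hyy) / 2 + tmp
    l2 = (hxx + hyy) / 2 - tmp
    swap = np.abs(l1) > np.abs(l2)
    l1[swap], l2[swap] = l2[swap], l1[swap]
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)      # blobness ratio
    s = np.sqrt(l1 ** 2 + l2 ** 2)              # structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0                              # bright vessels need l2 < 0
    return v

img = np.zeros((11, 11))
img[5, :] = 10.0                 # a bright horizontal "vessel"
v = hessian_vesselness(img)
print(round(float(v[5, 5]), 2))  # -> 0.96 (high on the vessel, 0 off it)
```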

  11. An infrared polarization image fusion method based on NSCT and fuzzy C-means clustering segmentation algorithms

    NASA Astrophysics Data System (ADS)

    Yu, Xuelian; Chen, Qian; Gu, Guohua; Qian, Weixian; Xu, Mengxi

    2014-11-01

    The integration between polarization and intensity images possessing complementary and discriminative information has emerged as a new and important research area. On the basis of the consideration that the resulting image has different clarity and layering requirement for the target and background, we propose a novel fusion method based on non-subsampled Contourlet transform (NSCT) and fuzzy C-means (FCM) segmentation for IR polarization and light intensity images. First, the polarization characteristic image is derived from fusion of the degree of polarization (DOP) and the angle of polarization (AOP) images using local standard variation and abrupt change degree (ACD) combined criteria. Then, the polarization characteristic image is segmented with FCM algorithm. Meanwhile, the two source images are respectively decomposed by NSCT. The regional energy-weighted and similarity measure are adopted to combine the low-frequency sub-band coefficients of the object. The high-frequency sub-band coefficients of the object boundaries are integrated through the maximum selection rule. In addition, the high-frequency sub-band coefficients of internal objects are integrated by utilizing local variation, matching measure and region feature weighting. The weighted average and maximum rules are employed independently in fusing the low-frequency and high-frequency components of the background. Finally, an inverse NSCT operation is accomplished and the final fused image is obtained. The experimental results illustrate that the proposed IR polarization image fusion algorithm can yield an improved performance in terms of the contrast between artificial target and cluttered background and a more detailed representation of the depicted scene.

  12. Improving reliability of pQCT-derived muscle area and density measures using a watershed algorithm for muscle and fat segmentation.

    PubMed

    Wong, Andy Kin On; Hummel, Kayla; Moore, Cameron; Beattie, Karen A; Shaker, Sami; Craven, B Catharine; Adachi, Jonathan D; Papaioannou, Alexandra; Giangregorio, Lora

    2015-01-01

    In peripheral quantitative computed tomography scans of the calf muscles, segmentation of muscles from subcutaneous fat is challenged by muscle fat infiltration. Threshold-based edge detection segmentation by manufacturer software fails when muscle boundaries are not smooth. This study compared the test-retest precision error for muscle-fat segmentation using the threshold-based edge detection method vs manual segmentation guided by the watershed algorithm. Three clinical populations were investigated: younger adults, older adults, and adults with spinal cord injury (SCI). The watershed segmentation method yielded lower precision error (1.18%-2.01%) and higher (p<0.001) muscle density values (70.2±9.2 mg/cm3) compared with threshold-based edge detection segmentation (1.77%-4.06% error, 67.4±10.3 mg/cm3). This was particularly true for adults with SCI (precision error improved by 1.56% and 2.64% for muscle area and density, respectively). However, both methods still provided acceptable precision with error well under 5%. Bland-Altman analyses showed that the major discrepancies between the segmentation methods were found mostly among participants with SCI where more muscle fat infiltration was present. When examining a population where fatty infiltration into muscle is expected, the watershed algorithm is recommended for muscle density and area measurement to enable the detection of smaller change effect sizes. Copyright © 2015 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
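The Bland-Altman comparison used above reduces to a bias and 95% limits of agreement between paired measurements. A minimal sketch on hypothetical muscle-density values, not the study data:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman sketch: bias and 95% limits of agreement between two
    methods' paired measurements."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical muscle densities (mg/cm3): watershed vs edge-detection method
bias, (low, high) = bland_altman([70.0, 72.0, 68.0, 71.0],
                                 [67.0, 69.0, 66.0, 70.0])
print(round(bias, 2))  # -> 2.25
```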

  13. Segmentation of Multi-Temporal Envisat ASAR and HJ-1B Optical Data Using an Edge-Aware Region Growing and Merging Algorithm

    NASA Astrophysics Data System (ADS)

    Jacob, Alexander; Ban, Yifang

    2013-01-01

    The paper aims to develop image segmentation algorithms for classification of multi-sensor data in urban areas. For this purpose, an algorithm called KTHSEG has been developed using an edge-aware region growing and merging approach. Four-date ENVISAT ASAR C-HH data and one-date HJ-1B data covering the city of Shanghai, acquired during the vegetation season of 2009, were selected for this research. The results show that the segmentation algorithm is effective for urban land cover classification using SAR and optical data. The results also confirm that the fusion of SAR and optical data is beneficial for urban land cover mapping. Further, the study showed that the combination of one SAR and one optical scene is enough to achieve good results and that the addition of multitemporal SAR data from the same beam mode does not improve classification accuracy.

  14. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network

    PubMed Central

    2016-01-01

    Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the “ground truth.” Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively. PMID:27597960

  15. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network.

    PubMed

    Le, Trong-Ngoc; Bao, Pham The; Huynh, Hieu Trung

    2016-01-01

    Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the "ground truth." Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively.
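A noniterative SLFN of this kind is commonly trained in the extreme-learning-machine style: random fixed hidden weights, then output weights by a single least-squares solve. The sketch below shows that generic scheme (the paper's exact variant may differ) on a hypothetical two-class voxel-feature problem:

```python
import numpy as np

def train_slfn(x, t, n_hidden=50, seed=0):
    """Noniterative SLFN training, ELM-style: random hidden-layer weights,
    output weights via linear least squares (no gradient iterations)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((x.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    h = np.tanh(x @ w + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(h, t, rcond=None)  # closed-form output layer
    return w, b, beta

def predict(x, w, b, beta):
    return np.tanh(x @ w + b) @ beta

# hypothetical voxel features: two well-separated classes, labels -1/+1
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
t = np.array([-1.0] * 20 + [1.0] * 20)
w, b, beta = train_slfn(x, t)
acc = float((np.sign(predict(x, w, b, beta)) == t).mean())
print(acc)  # -> 1.0
```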

  16. Validation of clinical acceptability of an atlas-based segmentation algorithm for the delineation of organs at risk in head and neck cancer

    SciTech Connect

    Hoang Duc, Albert K.; McClelland, Jamie; Modat, Marc; Cardoso, M. Jorge; Mendelson, Alex F.; Eminowicz, Gemma; Mendes, Ruheena; Wong, Swee-Ling; D’Souza, Derek; Veiga, Catarina; Kadir, Timor; Ourselin, Sebastien

    2015-09-15

    Purpose: The aim of this study was to assess whether clinically acceptable segmentations of organs at risk (OARs) in head and neck cancer can be obtained automatically and efficiently using the novel “similarity and truth estimation for propagated segmentations” (STEPS) compared to the traditional “simultaneous truth and performance level estimation” (STAPLE) algorithm. Methods: First, 6 OARs were contoured by 2 radiation oncologists in a dataset of 100 patients with head and neck cancer on planning computed tomography images. Each image in the dataset was then automatically segmented with STAPLE and STEPS using those manual contours. Dice similarity coefficient (DSC) was then used to compare the accuracy of these automatic methods. Second, in a blind experiment, three separate and distinct trained physicians graded manual and automatic segmentations into one of the following three grades: clinically acceptable as determined by universal delineation guidelines (grade A), reasonably acceptable for clinical practice upon manual editing (grade B), and not acceptable (grade C). Finally, STEPS segmentations graded B were selected and one of the physicians manually edited them to grade A. Editing time was recorded. Results: Significant improvements in DSC can be seen when using the STEPS algorithm on large structures such as the brainstem, spinal canal, and left/right parotid compared to the STAPLE algorithm (all p < 0.001). In addition, across all three trained physicians, manual and STEPS segmentation grades were not significantly different for the brainstem, spinal canal, parotid (right/left), and optic chiasm (all p > 0.100). In contrast, STEPS segmentation grades were lower for the eyes (p < 0.001). Across all OARs and all physicians, STEPS produced segmentations graded as well as manual contouring at a rate of 83%, giving a lower bound on this rate of 80% with 95% confidence. Reduction in manual interaction time was on average 61% and 93% when automatic
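The Dice similarity coefficient used for the accuracy comparison above is simply twice the overlap divided by the total size of the two masks; a minimal sketch on toy voxel sets:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks given as sets
    of foreground voxel indices: 2|A∩B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

print(round(dice({1, 2, 3, 4}, {3, 4, 5}), 3))  # -> 0.571
```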

  17. Facial anatomy.

    PubMed

    Marur, Tania; Tuna, Yakup; Demirci, Selman

    2014-01-01

    Dermatologic problems of the face affect both function and aesthetics, which are based on complex anatomical features. Treating dermatologic problems while preserving the aesthetics and functions of the face requires knowledge of normal anatomy. When performing successfully invasive procedures of the face, it is essential to understand its underlying topographic anatomy. This chapter presents the anatomy of the facial musculature and neurovascular structures in a systematic way with some clinically important aspects. We describe the attachments of the mimetic and masticatory muscles and emphasize their functions and nerve supply. We highlight clinically relevant facial topographic anatomy by explaining the course and location of the sensory and motor nerves of the face and facial vasculature with their relations. Additionally, this chapter reviews the recent nomenclature of the branching pattern of the facial artery. © 2013 Elsevier Inc. All rights reserved.

  18. High-resolution CISS MR imaging with and without contrast for evaluation of the upper cranial nerves: segmental anatomy and selected pathologic conditions of the cisternal through extraforaminal segments.

    PubMed

    Blitz, Ari M; Macedo, Leonardo L; Chonka, Zachary D; Ilica, Ahmet T; Choudhri, Asim F; Gallia, Gary L; Aygun, Nafi

    2014-02-01

    The authors review the course and appearance of the major segments of the upper cranial nerves from their apparent origin at the brainstem through the proximal extraforaminal region, focusing on the imaging and anatomic features of particular relevance to high-resolution magnetic resonance imaging evaluation. Selected pathologic entities are included in the discussion of the corresponding cranial nerve segments for illustrative purposes. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. New algorithm for semiautomatic segmentation of nasal cavity and pharyngeal airway in comparison with manual segmentation using cone-beam computed tomography.

    PubMed

    Alsufyani, Noura A; Hess, Andy; Noga, Michelle; Ray, Nilanjan; Al-Saleh, Mohammed A Q; Lagravère, Manuel O; Major, Paul W

    2016-10-01

    Our objectives were to assess reliability, validity, and time efficiency of semiautomatic segmentation using Segura software of the nasal and pharyngeal airways, against manual segmentation with point-based analysis with color mapping. Pharyngeal and nasal airways from 10 cone-beam computed tomography image sets were segmented manually and semiautomatically using Segura (University of Alberta, Edmonton, Alberta, Canada). To test intraexaminer and interexaminer reliabilities, semiautomatic segmentation was repeated 3 times by 1 examiner and then by 3 examiners. In addition to volume and surface area, point-based analysis was completed to assess the reconstructed 3-dimensional models from Segura against manual segmentation. The times of both methods of segmentation were also recorded to assess time efficiency. The reliability and validity of Segura were excellent (intraclass correlation coefficient, >0.9 for volume and surface area). Part analysis showed small differences between the Segura and manually segmented 3-dimensional models (greatest difference did not exceed 4.3 mm). Time of segmentation using Segura was significantly shorter than that for manual segmentation, 49 ± 11.0 vs 109 ± 9.4 minutes (P <0.001). Semiautomatic segmentation of the pharyngeal and nasal airways using Segura was found to be reliable, valid, and time efficient. Part analysis with color mapping was the key to explaining differences in upper airway volume and provides meaningful and clinically relevant analysis of 3-dimensional changes. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  20. Development and evaluation of an algorithm for the computer-assisted segmentation of the human hypothalamus on 7-Tesla magnetic resonance images.

    PubMed

    Schindler, Stephanie; Schönknecht, Peter; Schmidt, Laura; Anwander, Alfred; Strauß, Maria; Trampel, Robert; Bazin, Pierre-Louis; Möller, Harald E; Hegerl, Ulrich; Turner, Robert; Geyer, Stefan

    2013-01-01

    Post mortem studies have shown volume changes of the hypothalamus in psychiatric patients. With 7T magnetic resonance imaging this effect can now be investigated in vivo in detail. Benefiting from the sub-millimeter resolution requires an improved segmentation procedure. The traditional anatomical landmarks of the hypothalamus were refined using 7T T1-weighted magnetic resonance images. A detailed segmentation algorithm (unilateral hypothalamus) was developed for colour-coded, histogram-matched images, and evaluated in a sample of 10 subjects. Test-retest and inter-rater reliabilities were estimated in terms of intraclass-correlation coefficients (ICC) and Dice's coefficient (DC). The computer-assisted segmentation algorithm ensured test-retest reliabilities of ICC≥.97 (DC≥96.8) and inter-rater reliabilities of ICC≥.94 (DC = 95.2). There were no significant volume differences between the segmentation runs, raters, and hemispheres. The estimated volumes of the hypothalamus lie within the range of previous histological and neuroimaging results. We present a computer-assisted algorithm for the manual segmentation of the human hypothalamus using T1-weighted 7T magnetic resonance imaging. Providing very high test-retest and inter-rater reliabilities, it outperforms former procedures established on 1.5T and 3T magnetic resonance images and thus can serve as a gold standard for future automated procedures.

  1. Development and Evaluation of an Algorithm for the Computer-Assisted Segmentation of the Human Hypothalamus on 7-Tesla Magnetic Resonance Images

    PubMed Central

    Schmidt, Laura; Anwander, Alfred; Strauß, Maria; Trampel, Robert; Bazin, Pierre-Louis; Möller, Harald E.; Hegerl, Ulrich; Turner, Robert; Geyer, Stefan

    2013-01-01

    Post mortem studies have shown volume changes of the hypothalamus in psychiatric patients. With 7T magnetic resonance imaging this effect can now be investigated in vivo in detail. Benefiting from the sub-millimeter resolution requires an improved segmentation procedure. The traditional anatomical landmarks of the hypothalamus were refined using 7T T1-weighted magnetic resonance images. A detailed segmentation algorithm (unilateral hypothalamus) was developed for colour-coded, histogram-matched images, and evaluated in a sample of 10 subjects. Test-retest and inter-rater reliabilities were estimated in terms of intraclass-correlation coefficients (ICC) and Dice's coefficient (DC). The computer-assisted segmentation algorithm ensured test-retest reliabilities of ICC≥.97 (DC≥96.8) and inter-rater reliabilities of ICC≥.94 (DC = 95.2). There were no significant volume differences between the segmentation runs, raters, and hemispheres. The estimated volumes of the hypothalamus lie within the range of previous histological and neuroimaging results. We present a computer-assisted algorithm for the manual segmentation of the human hypothalamus using T1-weighted 7T magnetic resonance imaging. Providing very high test-retest and inter-rater reliabilities, it outperforms former procedures established on 1.5T and 3T magnetic resonance images and thus can serve as a gold standard for future automated procedures. PMID:23935821

  2. Validation and Development of a New Automatic Algorithm for Time-Resolved Segmentation of the Left Ventricle in Magnetic Resonance Imaging.

    PubMed

    Tufvesson, Jane; Hedström, Erik; Steding-Ehrenborg, Katarina; Carlsson, Marcus; Arheden, Håkan; Heiberg, Einar

    2015-01-01

    Manual delineation of the left ventricle is the clinical standard for quantification of cardiovascular magnetic resonance images despite being time-consuming and observer dependent. Previous automatic methods generally do not account for one major contributor to stroke volume, the long-axis motion. Therefore, the aim of this study was to develop and validate an automatic algorithm for time-resolved segmentation covering the whole left ventricle, including basal slices affected by long-axis motion. Ninety subjects imaged with a cine balanced steady state free precession sequence were included in the study (training set n = 40, test set n = 50). Manual delineation was the reference standard, and second-observer analysis was performed in a subset (n = 25). The automatic algorithm uses a deformable model with expectation-maximization, followed by automatic removal of papillary muscles and detection of the outflow tract. The mean differences between automatic segmentation and manual delineation were EDV -11 mL, ESV 1 mL, EF -3%, and LVM 4 g in the test set. The automatic LV segmentation algorithm reached accuracy comparable to interobserver variability for manual delineation, thereby bringing automatic segmentation one step closer to clinical routine. The algorithm and all images with manual delineations are available for benchmarking.
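    The reported biases (EDV, ESV, EF) are linked by the standard definition of ejection fraction, EF = (EDV − ESV) / EDV × 100. A small sketch with illustrative volumes (not values from the paper):

```python
def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    stroke_volume_ml = edv_ml - esv_ml        # blood volume ejected per beat
    return 100.0 * stroke_volume_ml / edv_ml

# Illustrative: EDV 120 mL, ESV 50 mL -> EF 58.3%
print(round(ejection_fraction(120.0, 50.0), 1))  # 58.3
```

    This dependence explains why an EDV underestimate of about 11 mL with a near-zero ESV bias translates into the roughly 3% EF bias the study reports.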

  3. Segmentation and image navigation in digitized spine x rays

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Thoma, George R.

    2000-06-01

    The National Library of Medicine has archived a collection of 17,000 digitized x-rays of the cervical and lumbar spines. Extensive health information has been collected on the subjects of these x-rays, but no information has been derived from the image contents themselves. We are researching algorithms to segment anatomy in these images and to derive from the segmented data measurements useful for indexing this image set for characteristics important to researchers in rheumatology, bone morphometry, and related areas. Active Shape Modeling is currently being investigated for use in location and boundary definition for the vertebrae in these images.

  4. Segmentation algorithm for non-stationary compound Poisson processes. With an application to inventory time series of market members in a financial market

    NASA Astrophysics Data System (ADS)

    Tóth, B.; Lillo, F.; Farmer, J. D.

    2010-11-01

    We introduce an algorithm for the segmentation of a class of regime switching processes. The segmentation algorithm is a non-parametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length. In each patch the process is described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime switching models of compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange and we observe that our method finds almost three times more patches than the original one.
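    The Bernaola-Galván-style recursive splitting that this work generalizes can be sketched as: find the cut point maximizing a pooled t-statistic between the left and right means, split there if it exceeds a threshold, and recurse on both halves. The sketch below substitutes a fixed t threshold for the paper's significance test, and all parameter values are illustrative:

```python
import numpy as np

def t_stat(left, right):
    """Pooled t-statistic between the means of two subseries."""
    n1, n2 = len(left), len(right)
    s1, s2 = left.std(ddof=1), right.std(ddof=1)
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    if sp == 0:
        return 0.0
    return abs(left.mean() - right.mean()) / (sp * np.sqrt(1.0 / n1 + 1.0 / n2))

def segment(x, t_min=4.0, min_len=16):
    """Recursively split x where the t-statistic peaks; return patch lengths."""
    x = np.asarray(x, dtype=float)
    if len(x) < 2 * min_len + 1:
        return [len(x)]
    ts = [t_stat(x[:i], x[i:]) for i in range(min_len, len(x) - min_len)]
    i_best = int(np.argmax(ts)) + min_len
    if ts[i_best - min_len] < t_min:          # no significant regime change
        return [len(x)]
    return segment(x[:i_best], t_min, min_len) + segment(x[i_best:], t_min, min_len)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])
print(segment(x))  # patch lengths; a boundary should fall near index 200
```

    The compound Poisson extension in the paper replaces this simple mean-shift statistic with one adapted to fluctuating count sizes.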

  5. A modified Seeded Region Growing algorithm for vessel segmentation in breast MRI images for investigating the nature of potential lesions

    NASA Astrophysics Data System (ADS)

    Glotsos, D.; Vassiou, K.; Kostopoulos, S.; Lavdas, El; Kalatzis, I.; Asvestas, P.; Arvanitis, D. L.; Fezoulidis, I. V.; Cavouras, D.

    2014-03-01

    The role of Magnetic Resonance Imaging (MRI) as an alternative protocol for screening of breast cancer has been intensively investigated during the past decade. Preliminary research results have indicated that gadolinium-enhanced MRI scans may reveal the nature of breast lesions by analyzing the contrast agent's uptake time. In this study, we attempt to reach the same conclusion from a different perspective by investigating, using image processing, the vascular network of the breast at two different time intervals following the administration of gadolinium. Twenty cases obtained from a 3.0-T MRI system (SIGNA HDx; GE Healthcare) were included in the study. A new modification of the Seeded Region Growing (SRG) algorithm was used to segment vessels from the surrounding background. Delineated vessels were investigated by means of their topology, morphology and texture. Results have shown that it is possible to estimate the nature of the lesions with approximately 94.4% accuracy; thus, it may be claimed that the breast vascular network does encode useful, patterned information, which can be used for characterizing breast lesions.
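    A seeded region growing pass of the kind modified here keeps a running region mean and absorbs connected neighbours whose intensity stays within a tolerance of it. A minimal 2-D sketch (the running-mean tolerance rule is a generic SRG criterion, not the paper's specific modification; image and values illustrative):

```python
import numpy as np
from collections import deque

def seeded_region_growing(img, seed, tol=10.0):
    """Grow a region from `seed`, accepting 4-connected neighbours whose
    intensity stays within `tol` of the running region mean."""
    img = np.asarray(img, dtype=float)
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    total, count = img[seed], 1                 # running sum and size of the region
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < img.shape[0] and 0 <= nc < img.shape[1] and not mask[nr, nc]:
                if abs(img[nr, nc] - total / count) <= tol:
                    mask[nr, nc] = True
                    total += img[nr, nc]
                    count += 1
                    queue.append((nr, nc))
    return mask

img = np.full((5, 5), 100.0)
img[:, 3:] = 200.0                              # bright "vessel" on the right
grown = seeded_region_growing(img, (0, 4), tol=10.0)
print(grown.sum())  # 10: only the two bright columns are absorbed
```

    Starting from a seed placed inside a vessel, the region stops at the intensity drop to surrounding tissue, which is what makes SRG attractive for contrast-enhanced vasculature.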

  6. Identification of linear features at geothermal field based on Segment Tracing Algorithm (STA) of the ALOS PALSAR data

    NASA Astrophysics Data System (ADS)

    Haeruddin; Saepuloh, A.; Heriawan, M. N.; Kubo, T.

    2016-09-01

    Indonesia has about 40% of the world's geothermal energy resources. One area with potential geothermal energy in Indonesia is Wayang Windu, located in West Java Province. A comprehensive understanding of the geothermal system in this area is indispensable for continuing its development. A geothermal system is generally associated with joints or fractures that serve as paths for geothermal fluid migrating to the surface. The fluid paths are identified by the existence of surface manifestations such as fumaroles, solfatara and the presence of alteration minerals. Therefore, analyses relating linear features to geological structures are crucial for identifying geothermal potential. Fractures or joints in the form of geological structures are associated with linear features in satellite images. The Segment Tracing Algorithm (STA) was used as the basis for determining the linear features. In this study, we used satellite images of ALOS PALSAR in Ascending and Descending orbit modes. The linear features obtained from the satellite images could be validated by field observations. Based on the application of STA to the ALOS PALSAR data, the general directions of the extracted linear features were WNW-ESE, NNE-SSW and NNW-SSE. These directions are consistent with the general direction of the fault system in the field. The linear features extracted from ALOS PALSAR data based on STA were very useful for identifying fractured zones at the geothermal field.

  7. Active Segmentation

    PubMed Central

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour - a connected set of boundary edge fragments in the edge map of the scene - around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches. While existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach. PMID:20686671

  8. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation.

    PubMed

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-21

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV  = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME(exp) = 0 ± 3 mm; ΔME(clin) 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume delineation, position tracking and its robustness on highly irregular target movements

  9. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation

    NASA Astrophysics Data System (ADS)

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-01

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME_exp = 0 ± 3 mm; ΔME_clin = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume

  10. Effect of different segmentation algorithms on metabolic tumor volume measured on 18F-FDG PET/CT of cervical primary squamous cell carcinoma

    PubMed Central

    Xu, Weina; Yu, Shupeng; Ma, Ying; Liu, Changping

    2017-01-01

    Background and purpose It is known that fluorine-18 fluorodeoxyglucose PET/computed tomography (CT) segmentation algorithms have an impact on the metabolic tumor volume (MTV). This leads to some uncertainties in PET/CT guidance of tumor radiotherapy. The aim of this study was to investigate the effect of segmentation algorithms on the PET/CT-based MTV and their correlations with the gross tumor volumes (GTVs) of cervical primary squamous cell carcinoma. Materials and methods Fifty-five patients with International Federation of Gynecology and Obstetrics stage Ia∼IIb and histologically proven cervical squamous cell carcinoma were enrolled. A fluorine-18 fluorodeoxyglucose PET/CT scan was performed before definitive surgery. GTV was measured on surgical specimens. MTVs were estimated on PET/CT scans using different segmentation algorithms, including a fixed percentage of the maximum standardized uptake value (20∼60% SUVmax) threshold and an iterative adaptive algorithm. We divided all patients into four different groups according to the SUVmax within the target volume. The comparisons of absolute values and percentage differences between MTVs by segmentation and GTV were performed in the different SUVmax subgroups. The optimal threshold percentage was determined from MTV20%∼MTV60%, and was correlated with SUVmax. The correlation of the iterative adaptive MTV with GTV was also investigated. Results MTV50% and MTV60% were similar to GTV in the SUVmax up to 5 group (P>0.05), and MTV30%∼MTV60% were similar to GTV (P>0.05) in the 5<SUVmax≤10 group, the 10<SUVmax≤15 group, and the SUVmax of at least 15 group. The iterative adaptive MTV was similar to GTV in both the total group and the different SUVmax groups (P>0.05). Significant differences were observed among the fixed percentage methods, and the optimal threshold percentage was inversely correlated with SUVmax. The iterative adaptive segmentation algorithm led
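    A fixed-percentage SUVmax threshold of the kind compared above reduces to masking voxels at or above a chosen fraction of the maximum uptake and multiplying the voxel count by the voxel volume. A toy sketch (the SUV grid and voxel size are illustrative):

```python
import numpy as np

def mtv_fixed_threshold(suv, frac, voxel_volume_ml=0.064):
    """Metabolic tumor volume: voxels at or above frac * SUVmax, in mL."""
    suv = np.asarray(suv, dtype=float)
    mask = suv >= frac * suv.max()
    return mask.sum() * voxel_volume_ml

suv = np.array([[1.0, 2.0, 8.0],
                [9.0, 10.0, 4.0],
                [0.5, 6.0, 3.0]])
# 40% of SUVmax = 4.0 -> 5 voxels pass -> 5 * 0.064 mL
print(round(mtv_fixed_threshold(suv, 0.40), 3))  # 0.32
```

    The inverse correlation the study reports follows from this construction: the same percentage cut sits at a higher absolute SUV when SUVmax is large, so a single fixed percentage cannot fit all lesions.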

  11. Heart Anatomy

    MedlinePlus

    The Human Heart: Anatomy; Blood; The Conduction System; The Coronary Arteries; The Heart Valves; The Heartbeat; Vasculature of the Arm, Head, Leg, and Torso.

  12. White matter lesion segmentation using machine learning and weakly labeled MR images

    NASA Astrophysics Data System (ADS)

    Xie, Yuchen; Tao, Xiaodong

    2011-03-01

    We propose a fast, learning-based algorithm for segmenting white matter (WM) lesions in magnetic resonance (MR) brain images. The inputs to the algorithm are T1, T2, and FLAIR images. Unlike most previously reported learning-based algorithms, which treat an expert-labeled lesion map as ground truth in the training step, the proposed algorithm only requires the user to provide a few regions of interest (ROIs) containing lesions. An unsupervised clustering algorithm is applied to segment these ROIs into areas. Based on the assumption that lesion voxels have higher intensity on the FLAIR image, areas corresponding to lesions are identified and their probability distributions in the T1, T2, and FLAIR images are computed. The lesion segmentation in 3D is done by using the probability distributions to generate a confidence map of lesion and applying a graph-based segmentation algorithm to label lesion voxels. The initial lesion label is used to further refine the probability distribution estimation for the final lesion segmentation. The advantages of the proposed algorithm are: 1. By using the weak labels, we reduced the dependency of the segmentation performance on the expert discrimination of lesion voxels in the training samples; 2. The training can be done using labels generated by users with only general knowledge of brain anatomy and image characteristics of WM lesions, instead of those carefully labeled by experienced radiologists; 3. The algorithm is fast enough to make interactive segmentation possible. We test the algorithm on nine ACCORD-MIND MRI datasets. Experimental results show that our algorithm agrees well with expert labels and outperforms a support vector machine based WM lesion segmentation algorithm.
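    The probability-distribution step can be sketched as fitting one intensity model per class to the weakly labelled samples and converting each voxel to a posterior lesion probability. The sketch below uses 1-D Gaussians and equal priors as simplifying assumptions (the paper estimates distributions over T1/T2/FLAIR jointly; all values illustrative):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """1-D Gaussian density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def lesion_confidence(flair, lesion_samples, normal_samples):
    """Posterior P(lesion | intensity) under one Gaussian model per class,
    fitted to weakly labelled intensity samples (equal priors assumed)."""
    mu_l, sd_l = np.mean(lesion_samples), np.std(lesion_samples)
    mu_n, sd_n = np.mean(normal_samples), np.std(normal_samples)
    p_l = gaussian_pdf(flair, mu_l, sd_l)
    p_n = gaussian_pdf(flair, mu_n, sd_n)
    return p_l / (p_l + p_n)

# Illustrative intensities from a user-drawn lesion ROI and a normal-tissue ROI
lesion_roi = [180.0, 190.0, 200.0, 185.0]
normal_roi = [90.0, 100.0, 110.0, 95.0]
flair = np.array([92.0, 150.0, 195.0])
conf = lesion_confidence(flair, lesion_roi, normal_roi)
print(np.round(conf, 3))  # low confidence for 92, high for 195
```

    The resulting confidence map is then what a graph-based labeller would regularize spatially, which is the step this sketch omits.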

  13. Segmentation of White Blood Cells From Microscopic Images Using a Novel Combination of K-Means Clustering and Modified Watershed Algorithm

    PubMed Central

    Ghane, Narjes; Vard, Alireza; Talebi, Ardeshir; Nematollahy, Pardis

    2017-01-01

    Recognition of white blood cells (WBCs) is the first step to diagnose some particular diseases such as acquired immune deficiency syndrome, leukemia, and other blood-related diseases that are usually done by pathologists using an optical microscope. This process is time-consuming, extremely tedious, and expensive and needs experienced experts in this field. Thus, a computer-aided diagnosis system that assists pathologists in the diagnostic process can be so effective. Segmentation of WBCs is usually a first step in developing a computer-aided diagnosis system. The main purpose of this paper is to segment WBCs from microscopic images. For this purpose, we present a novel combination of thresholding, k-means clustering, and modified watershed algorithms in three stages including (1) segmentation of WBCs from a microscopic image, (2) extraction of nuclei from cell’s image, and (3) separation of overlapping cells and nuclei. The evaluation results of the proposed method show that similarity measures, precision, and sensitivity respectively were 92.07, 96.07, and 94.30% for nucleus segmentation and 92.93, 97.41, and 93.78% for cell segmentation. In addition, statistical analysis presents high similarity between manual segmentation and the results obtained by the proposed method. PMID:28553582

  14. Segmentation of White Blood Cells From Microscopic Images Using a Novel Combination of K-Means Clustering and Modified Watershed Algorithm.

    PubMed

    Ghane, Narjes; Vard, Alireza; Talebi, Ardeshir; Nematollahy, Pardis

    2017-01-01

    Recognition of white blood cells (WBCs) is the first step to diagnose some particular diseases such as acquired immune deficiency syndrome, leukemia, and other blood-related diseases that are usually done by pathologists using an optical microscope. This process is time-consuming, extremely tedious, and expensive and needs experienced experts in this field. Thus, a computer-aided diagnosis system that assists pathologists in the diagnostic process can be so effective. Segmentation of WBCs is usually a first step in developing a computer-aided diagnosis system. The main purpose of this paper is to segment WBCs from microscopic images. For this purpose, we present a novel combination of thresholding, k-means clustering, and modified watershed algorithms in three stages including (1) segmentation of WBCs from a microscopic image, (2) extraction of nuclei from cell's image, and (3) separation of overlapping cells and nuclei. The evaluation results of the proposed method show that similarity measures, precision, and sensitivity respectively were 92.07, 96.07, and 94.30% for nucleus segmentation and 92.93, 97.41, and 93.78% for cell segmentation. In addition, statistical analysis presents high similarity between manual segmentation and the results obtained by the proposed method.
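    The k-means stage of the three-stage pipeline above can be sketched on scalar pixel intensities (pure NumPy, quantile initialisation for determinism; the thresholding and watershed stages are omitted, and the toy intensities are illustrative):

```python
import numpy as np

def kmeans_1d(values, k, iters=50):
    """Lloyd's k-means on scalar pixel intensities, quantile-initialised."""
    values = np.asarray(values, dtype=float)
    centroids = np.quantile(values, np.linspace(0.0, 1.0, k))  # spread initial means
    for _ in range(iters):
        # assign each pixel to its nearest centroid, then recompute the means
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):            # keep old centroid if cluster empties
                centroids[j] = values[labels == j].mean()
    return centroids, labels

# Toy intensities: dark nuclei (~30), cytoplasm (~120), background (~220)
pixels = np.concatenate([np.full(50, 30.0), np.full(50, 120.0), np.full(50, 220.0)])
centroids, labels = kmeans_1d(pixels, k=3)
print(centroids)  # close to [30, 120, 220]
```

    On real smear images the darkest cluster approximates nuclei, which is why clustering is a natural precursor to the watershed step that separates touching cells.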

  15. The Anatomy of Learning Anatomy

    ERIC Educational Resources Information Center

    Wilhelmsson, Niklas; Dahlgren, Lars Owe; Hult, Hakan; Scheja, Max; Lonka, Kirsti; Josephson, Anna

    2010-01-01

    The experience of clinical teachers as well as research results about senior medical students' understanding of basic science concepts has much been debated. To gain a better understanding about how this knowledge-transformation is managed by medical students, this work aims at investigating their ways of setting about learning anatomy.…

  17. SU-C-BRA-01: Interactive Auto-Segmentation for Bowel in Online Adaptive MRI-Guided Radiation Therapy by Using a Multi-Region Labeling Algorithm

    SciTech Connect

    Lu, Y; Chen, I; Kashani, R; Wan, H; Maughan, N; Muccigrosso, D; Parikh, P

    2016-06-15

    Purpose: In MRI-guided online adaptive radiation therapy, re-contouring of bowel is time-consuming and can impact the overall time of patients on table. The study aims to auto-segment bowel on volumetric MR images by using an interactive multi-region labeling algorithm. Methods: 5 Patients with locally advanced pancreatic cancer underwent fractionated radiotherapy (18–25 fractions each, total 118 fractions) on an MRI-guided radiation therapy system with a 0.35 Tesla magnet and three Co-60 sources. At each fraction, a volumetric MR image of the patient was acquired when the patient was in the treatment position. An interactive two-dimensional multi-region labeling technique based on a graph cut solver was applied on several typical MRI images to segment the large bowel and small bowel, followed by a shape-based contour interpolation for generating entire bowel contours along all image slices. The resulting contours were compared with the physician’s manual contouring by using metrics of Dice coefficient and Hausdorff distance. Results: Image data sets from the first 5 fractions of each patient were selected (total of 25 image data sets) for the segmentation test. The algorithm segmented the large and small bowel effectively and efficiently. All bowel segments were successfully identified, auto-contoured and matched with manual contours. The time taken by the algorithm for each image slice was within 30 seconds. For large bowel, the calculated Dice coefficients and Hausdorff distances (mean±std) were 0.77±0.07 and 13.13±5.01 mm, respectively; for small bowel, the corresponding metrics were 0.73±0.08 and 14.15±4.72 mm, respectively. Conclusion: The preliminary results demonstrated the potential of the proposed algorithm in auto-segmenting large and small bowel on low field MRI images in MRI-guided adaptive radiation therapy. Further work will be focused on improving its segmentation accuracy and lessening human interaction.
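    The Hausdorff distance reported alongside the Dice coefficient is the largest of all nearest-neighbour distances between two contours, so it penalizes the single worst local disagreement. A direct O(N·M) sketch on small point sets (illustrative coordinates):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (N x 2 arrays)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    # worst nearest-neighbour distance, taken in both directions
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [4.0, 0.0]])
print(hausdorff(a, b))  # 3.0: point (4,0) is 3 away from its nearest point in a
```

    A contour pair can have a high Dice score yet a large Hausdorff distance if one small protrusion is missed, which is why both metrics are reported together.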

  18. Evaluation of current algorithms for segmentation of scar tissue from late Gadolinium enhancement cardiovascular magnetic resonance of the left atrium: an open-access grand challenge

    PubMed Central

    2013-01-01

    Background Late Gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging can be used to visualise regions of fibrosis and scarring in the left atrium (LA) myocardium. This can be important for treatment stratification of patients with atrial fibrillation (AF) and for assessment of treatment after radio frequency catheter ablation (RFCA). In this paper we present a standardised evaluation benchmarking framework for algorithms segmenting fibrosis and scar from LGE CMR images. The algorithms reported are the response to an open challenge that was put to the medical imaging community through an ISBI (IEEE International Symposium on Biomedical Imaging) workshop. Methods The image database consisted of 60 multicenter, multivendor LGE CMR image datasets from patients with AF, with 30 images taken before and 30 after RFCA for the treatment of AF. A reference standard for scar and fibrosis was established by merging manual segmentations from three observers. Furthermore, scar was also quantified using 2, 3 and 4 standard deviations (SD) and full-width-at-half-maximum (FWHM) methods. Seven institutions responded to the challenge: Imperial College (IC), Mevis Fraunhofer (MV), Sunnybrook Health Sciences (SY), Harvard/Boston University (HB), Yale School of Medicine (YL), King’s College London (KCL) and Utah CARMA (UTA, UTB). There were 8 different algorithms evaluated in this study. Results Some algorithms were able to perform significantly better than SD and FWHM methods in both pre- and post-ablation imaging. Segmentation in pre-ablation images was challenging and good correlation with the reference standard was found in post-ablation images. Overlap scores (out of 100) with the reference standard were as follows: Pre: IC = 37, MV = 22, SY = 17, YL = 48, KCL = 30, UTA = 42, UTB = 45; Post: IC = 76, MV = 85, SY = 73, HB = 76, YL = 84, KCL = 78, UTA = 78, UTB = 72. Conclusions The study concludes that currently no algorithm is deemed clearly better than

  19. Evaluation of current algorithms for segmentation of scar tissue from late gadolinium enhancement cardiovascular magnetic resonance of the left atrium: an open-access grand challenge.

    PubMed

    Karim, Rashed; Housden, R James; Balasubramaniam, Mayuragoban; Chen, Zhong; Perry, Daniel; Uddin, Ayesha; Al-Beyatti, Yosra; Palkhi, Ebrahim; Acheampong, Prince; Obom, Samantha; Hennemuth, Anja; Lu, Yingli; Bai, Wenjia; Shi, Wenzhe; Gao, Yi; Peitgen, Heinz-Otto; Radau, Perry; Razavi, Reza; Tannenbaum, Allen; Rueckert, Daniel; Cates, Josh; Schaeffter, Tobias; Peters, Dana; MacLeod, Rob; Rhode, Kawal

    2013-12-20

    Late Gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging can be used to visualise regions of fibrosis and scarring in the left atrium (LA) myocardium. This can be important for treatment stratification of patients with atrial fibrillation (AF) and for assessment of treatment after radio frequency catheter ablation (RFCA). In this paper we present a standardised evaluation benchmarking framework for algorithms segmenting fibrosis and scar from LGE CMR images. The algorithms reported are the response to an open challenge that was put to the medical imaging community through an ISBI (IEEE International Symposium on Biomedical Imaging) workshop. The image database consisted of 60 multicenter, multivendor LGE CMR image datasets from patients with AF, with 30 images taken before and 30 after RFCA for the treatment of AF. A reference standard for scar and fibrosis was established by merging manual segmentations from three observers. Furthermore, scar was also quantified using 2, 3 and 4 standard deviations (SD) and full-width-at-half-maximum (FWHM) methods. Seven institutions responded to the challenge: Imperial College (IC), Mevis Fraunhofer (MV), Sunnybrook Health Sciences (SY), Harvard/Boston University (HB), Yale School of Medicine (YL), King's College London (KCL) and Utah CARMA (UTA, UTB). There were 8 different algorithms evaluated in this study. Some algorithms were able to perform significantly better than SD and FWHM methods in both pre- and post-ablation imaging. Segmentation in pre-ablation images was challenging and good correlation with the reference standard was found in post-ablation images. Overlap scores (out of 100) with the reference standard were as follows: Pre: IC = 37, MV = 22, SY = 17, YL = 48, KCL = 30, UTA = 42, UTB = 45; Post: IC = 76, MV = 85, SY = 73, HB = 76, YL = 84, KCL = 78, UTA = 78, UTB = 72. The study concludes that currently no algorithm is deemed clearly better than others. There is scope for further
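The 2–4 SD and FWHM baselines against which the challenge algorithms were compared are simple intensity thresholds. A minimal sketch (assuming, as is conventional, that the SD method thresholds relative to a sample of normal myocardium; the challenge's exact protocol may differ):

```python
def sd_threshold(intensities, normal_region, n_sd=3):
    """Scar = voxels brighter than mean + n_sd * SD of normal myocardium."""
    mean = sum(normal_region) / len(normal_region)
    sd = (sum((v - mean) ** 2 for v in normal_region) / len(normal_region)) ** 0.5
    thr = mean + n_sd * sd
    return [v > thr for v in intensities]

def fwhm_threshold(intensities):
    """Scar = voxels brighter than half the maximum enhanced intensity."""
    thr = max(intensities) / 2.0
    return [v > thr for v in intensities]
```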

  20. Repeatability and Reproducibility of Eight Macular Intra-Retinal Layer Thicknesses Determined by an Automated Segmentation Algorithm Using Two SD-OCT Instruments

    PubMed Central

    Huang, Shenghai; Leng, Lin; Zhu, Dexi; Lu, Fan

    2014-01-01

    Purpose To evaluate the repeatability, reproducibility, and agreement of thickness profile measurements of eight intra-retinal layers determined by an automated algorithm applied to optical coherence tomography (OCT) images from two different instruments. Methods Twenty normal subjects (12 males, 8 females; 24 to 32 years old) were enrolled. Imaging was performed with a custom built ultra-high resolution OCT instrument (UHR-OCT, ∼3 µm resolution) and a commercial RTVue100 OCT (∼5 µm resolution) instrument. An automated algorithm was developed to segment the macular retina into eight layers and quantitate the thickness of each layer. The right eye of each subject was imaged two times by the first examiner using each instrument to assess intra-observer repeatability and once by the second examiner to assess inter-observer reproducibility. The intraclass correlation coefficient (ICC) and coefficients of repeatability and reproducibility (COR) were analyzed to evaluate the reliability. Results The ICCs for the intra-observer repeatability and inter-observer reproducibility of both SD-OCT instruments were greater than 0.945 for the total retina and all intra-retinal layers, except the photoreceptor inner segments, which ranged from 0.051 to 0.643, and the outer segments, which ranged from 0.709 to 0.959. The CORs were less than 6.73% for the total retina and all intra-retinal layers. The total retinal thickness measured by the UHR-OCT was significantly less than that measured by the RTVue100. However, the ICCs for agreement of the thickness profiles between UHR-OCT and RTVue OCT were greater than 0.80, except for the inner segment and outer segment layers. Conclusions Thickness measurements of the intra-retinal layers determined by the automated algorithm are reliable when applied to images acquired by the UHR-OCT and RTVue100 instruments. PMID:24505345
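A coefficient of repeatability from paired repeat scans can be computed Bland–Altman style. A sketch assuming the conventional 1.96 × SD-of-differences definition (the paper expresses its CORs as percentages, so its exact formula may differ):

```python
def coefficient_of_repeatability(first, second):
    """COR = 1.96 x SD of paired differences (Bland-Altman convention)."""
    diffs = [a - b for a, b in zip(first, second)]
    mean_d = sum(diffs) / len(diffs)
    # Sample standard deviation of the repeat-measurement differences
    sd_d = (sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1)) ** 0.5
    return 1.96 * sd_d
```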

  1. Discuss on the two algorithms of line-segments and dot-array for region judgement of the sub-satellite purview

    NASA Astrophysics Data System (ADS)

    Nie, Hao; Yang, Mingming; Zhu, Yajie; Zhang, Peng

    2015-04-01

    When a satellite is flying in orbit for a special task such as solar flare observation, it must be known whether the sub-satellite purview lies in an ocean area. The relative position between the sub-satellite point and the coastline is constantly varying, so the observation condition needs to be judged in real time according to the current orbital elements. The problem is to determine the relative position of the rectangular purview with respect to the multiply connected regions formed by the base coastline data. Usually the Cohen-Sutherland algorithm is adopted to obtain this status. It divides the earth map into 9 sections by the four lines extending the rectangle sides, and then determines in which section each boundary point of the connected regions lies. That method traverses all the boundary points for each judgement. In this paper, two algorithms are presented: one based on line segments and one based on a dot array. The data preprocessing and judging procedures of the two methods are described, and their characteristics are analyzed. The line-segment method treats the connected regions as a set of serial line segments; the terminal coordinates of the rectangular purview are compared with the line segments at the same latitude. The dot-array method translates the whole map into a binary image, which is equivalent to a dot array; the values of the pixels inside the rectangular purview are then tested. Both algorithms consume few software resources and require far fewer comparisons, because neither needs to traverse all the boundary points. The analysis indicates that the real-time performance and resource consumption of the two algorithms are similar for a simple coastline, but the dot-array method is the better choice when the coastline is complicated.
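The dot-array method can be sketched as rasterizing the coastline regions once into a binary grid and then testing only the purview pixels. A minimal sketch with a hypothetical land/ocean grid (names are illustrative):

```python
def rasterize(land_pixels, width, height):
    """Preprocess once: build a binary dot array, 1 = land, 0 = ocean."""
    grid = [[0] * width for _ in range(height)]
    for x, y in land_pixels:
        grid[y][x] = 1
    return grid

def purview_all_ocean(grid, x0, y0, x1, y1):
    """True iff every pixel of the rectangular purview lies in ocean."""
    return all(grid[y][x] == 0
               for y in range(y0, y1 + 1)
               for x in range(x0, x1 + 1))
```

Because the map is rasterized once during preprocessing, each real-time judgement costs only one lookup per purview pixel rather than a traversal of all boundary points.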

  2. The validation index: a new metric for validation of segmentation algorithms using two or more expert outlines with application to radiotherapy planning.

    PubMed

    Juneja, Prabhjot; Evans, Philip M; Harris, Emma J

    2013-08-01

    Validation is required to ensure that automated segmentation algorithms are suitable for radiotherapy target definition. In the absence of true segmentation, algorithmic segmentation is validated against expert outlining of the region of interest. Multiple experts are used to overcome inter-expert variability. Several approaches have been studied in the literature, but the most appropriate way to combine the information from multiple expert outlines into a single validation metric is unclear. None considers a metric that can be tailored to case-specific requirements in radiotherapy planning. The validation index (VI), a new validation metric that uses the experts' level of agreement, was developed. A control parameter was introduced for the validation of segmentations required in different radiotherapy scenarios: for targets close to organs-at-risk and for difficult-to-discern targets, where large variation between experts is expected. VI was evaluated using two simulated idealized cases and data from two clinical studies. VI was compared with the commonly used pairwise Dice similarity coefficient (DSCpair-wise) and found to be more sensitive than DSCpair-wise to changes in agreement between experts. VI was shown to be adaptable to specific radiotherapy planning scenarios.
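The pairwise Dice baseline averages the Dice similarity coefficient over all expert pairs. A minimal sketch with outlines represented as sets of pixel indices:

```python
from itertools import combinations

def dice(a, b):
    """Dice similarity coefficient between two binary masks (pixel sets)."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

def mean_pairwise_dice(outlines):
    """Average Dice over all expert pairs (the DSC pair-wise baseline)."""
    pairs = list(combinations(outlines, 2))
    return sum(dice(a, b) for a, b in pairs) / len(pairs)
```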

  3. Computerized Segmentation and Characterization of Breast Lesions in Dynamic Contrast-Enhanced MR Images Using Fuzzy c-Means Clustering and Snake Algorithm

    PubMed Central

    Pang, Yachun; Li, Li; Hu, Wenyong; Peng, Yanxia; Liu, Lizhi; Shao, Yuanzhi

    2012-01-01

    This paper presents a novel two-step approach that incorporates fuzzy c-means (FCM) clustering and a gradient vector flow (GVF) snake algorithm for lesion contour segmentation in breast magnetic resonance imaging (BMRI). Manual delineation of the lesions by expert MR radiologists was taken as the reference standard in evaluating the computerized segmentation approach. The proposed algorithm was also compared with the FCM-clustering-based method. With a database of 60 mass-like lesions (22 benign and 38 malignant cases), the proposed method demonstrated sufficiently good segmentation performance. Morphological and texture features were extracted and used to classify the benign and malignant lesions based on the proposed computerized segmentation contour and the radiologists' delineation, respectively. Features extracted by the computerized characterization method differentiated the lesions with an area under the receiver-operating characteristic curve (AUC) of 0.968, in comparison with an AUC of 0.914 based on the features extracted from the radiologists' delineation. The proposed method can assist radiologists in delineating and characterizing BMRI lesions, for example by quantifying morphological and texture features, and can improve the objectivity and efficiency of BMRI interpretation. PMID:22952558
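The first step, fuzzy c-means clustering, assigns each pixel a graded membership in each cluster rather than a hard label. A minimal 1-D sketch of the standard FCM updates (the GVF snake refinement step is omitted; this is not the authors' implementation):

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=50):
    """1-D fuzzy c-means: returns cluster centres and the membership matrix."""
    centres = [min(points), max(points)] if c == 2 else points[:c]
    u = []
    for _ in range(iters):
        # Membership u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in points:
            d = [abs(x - v) + 1e-12 for v in centres]
            u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(c))
                      for i in range(c)])
        # Centres are fuzzy-membership-weighted means of the points
        centres = [sum(u[k][i] ** m * points[k] for k in range(len(points))) /
                   sum(u[k][i] ** m for k in range(len(points)))
                   for i in range(c)]
    return centres, u
```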

  4. Landmark-guided diffeomorphic demons algorithm and its application to automatic segmentation of the whole spine and pelvis in CT images.

    PubMed

    Hanaoka, Shouhei; Masutani, Yoshitaka; Nemoto, Mitsutaka; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu; Hayashi, Naoto; Ohtomo, Kuni; Shimizu, Akinobu

    2017-03-01

    A fully automatic multiatlas-based method for segmentation of the spine and pelvis in a torso CT volume is proposed. A novel landmark-guided diffeomorphic demons algorithm is used to register a given CT image to multiple atlas volumes. This algorithm can utilize both grayscale image information and given landmark coordinate information optimally. The segmentation has four steps. Firstly, 170 bony landmarks are detected in the given volume. Using these landmark positions, an atlas selection procedure is performed to reduce the computational cost of the following registration. Then the chosen atlas volumes are registered to the given CT image. Finally, voxelwise label voting is performed to determine the final segmentation result. The proposed method was evaluated using 50 torso CT datasets as well as the public SpineWeb dataset. As a result, a mean distance error of [Formula: see text] and a mean Dice coefficient of [Formula: see text] were achieved for the whole spine and the pelvic bones, which are competitive with other state-of-the-art methods. From the experimental results, the usefulness of the proposed segmentation method was validated.
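The final voxelwise label voting step can be sketched as a per-voxel majority vote over the registered atlas label maps (a minimal illustration, not the paper's implementation):

```python
from collections import Counter

def label_vote(atlas_labels):
    """Fuse registered atlas label maps by per-voxel majority vote."""
    n_voxels = len(atlas_labels[0])
    fused = []
    for v in range(n_voxels):
        votes = Counter(labels[v] for labels in atlas_labels)
        fused.append(votes.most_common(1)[0][0])  # most frequent label wins
    return fused
```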

  5. The Effects of Changing Water Content, Relaxation Times, and Tissue Contrast on Tissue Segmentation and Measures of Cortical Anatomy in MR Images

    PubMed Central

    Bansal, Ravi; Hao, Xuejun; Liu, Feng; Xu, Dongrong; Liu, Jun; Peterson, Bradley S.

    2013-01-01

    Water content is the dominant chemical compound in the brain and it is the primary determinant of tissue contrast in magnetic resonance (MR) images. Water content varies greatly between individuals, and it changes dramatically over time from birth through senescence of the human life span. We hypothesize that the effects that individual- and age-related variations in water content have on contrast of the brain in MR images also has important, systematic effects on in vivo, MRI-based measures of regional brain volumes. We also hypothesize that changes in water content and tissue contrast across time may account for age-related changes in regional volumes, and that differences in water content or tissue contrast across differing neuropsychiatric diagnoses may account for differences in regional volumes across diagnostic groups. We demonstrate in several complementary ways that subtle variations in water content across age and tissue compartments alter tissue contrast, and that changing tissue contrast in turn alters measures of the thickness and volume of the cortical mantle: (1) We derive analytic relations describing how age-related changes in tissue relaxation times produce age-related changes in tissue gray-scale intensity values and tissue contrast; (2) We vary tissue contrast in computer-generated images to assess its effects on tissue segmentation and volumes of gray matter and white matter; and (3) We use real-world imaging data from adults with either Schizophrenia or Bipolar Disorder and age- and sex-matched healthy adults to assess the ways in which variations in tissue contrast across diagnoses affects group differences in tissue segmentation and associated volumes. 
We conclude that in vivo MRI-based morphological measures of the brain, including regional volumes and measures of cortical thickness, are a product of, or at least are confounded by, differences in tissue contrast across individuals, ages, and diagnostic groups, and that differences in tissue
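The analytic relations in point (1) rest on the standard dependence of MR signal on relaxation times, which themselves depend on water content. A sketch using the textbook spin-echo signal equation (illustrative only; the paper derives its own relations):

```python
import math

def spin_echo_signal(pd, t1_ms, t2_ms, tr_ms, te_ms):
    """Textbook spin-echo model: S = PD * (1 - e^(-TR/T1)) * e^(-TE/T2)."""
    return pd * (1 - math.exp(-tr_ms / t1_ms)) * math.exp(-te_ms / t2_ms)

def tissue_contrast(s_a, s_b):
    """Michelson-style contrast between two tissue signal intensities."""
    return (s_a - s_b) / (s_a + s_b)
```

Because increasing water content lengthens T1, it lowers the recovered signal at a fixed TR, which is the mechanism by which water-content changes shift tissue contrast and hence segmentation boundaries.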

  6. A novel algorithm based on visual saliency attention for localization and segmentation in rapidly-stained leukocyte images.

    PubMed

    Zheng, Xin; Wang, Yong; Wang, Guoyou; Chen, Zhong

    2014-01-01

    In this paper, we propose a fast hierarchical framework of leukocyte localization and segmentation in rapidly-stained leukocyte images (RSLI) with complex backgrounds and varying illumination. The proposed framework contains two main steps. First, a nucleus saliency model based on average absolute difference is built, which locates each leukocyte precisely while effectively removes dyeing impurities and erythrocyte fragments. Secondly, two different schemes are presented for segmenting the nuclei and cytoplasm respectively. As for nuclei segmentation, to solve the overlap problem between leukocytes, we extract the nucleus lobes first and further group them. The lobes extraction is realized by the histogram-based contrast map and watershed segmentation, taking into account the saliency and similarity of nucleus color. Meanwhile, as for cytoplasm segmentation, to extract the blurry contour of the cytoplasm under instable illumination, we propose a cytoplasm enhancement based on tri-modal histogram specification, which specifically improves the contrast of cytoplasm while maintaining others. Then, the contour of cytoplasm is quickly obtained by extraction based on parameter-controlled adaptive attention window. Furthermore, the contour is corrected by concave points matching in order to solve the overlap between leukocytes and impurities. The experiments show the effectiveness of the proposed nucleus saliency model, which achieves average localization accuracy with F1-measure greater than 95%. In addition, the comparison of single leukocyte segmentation accuracy and running time has demonstrated that the proposed segmentation scheme outperforms the former approaches in RSLI. Copyright © 2013 Elsevier Ltd. All rights reserved.
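The tri-modal histogram specification step builds on classical histogram matching, which maps source grey levels so their cumulative distribution matches a target. A sketch of plain histogram specification (the paper's tri-modal variant specifies a three-peaked target histogram instead of using a reference image):

```python
def histogram_specify(source, reference, levels=256):
    """Map source grey levels so their CDF matches the reference CDF."""
    def cdf(img):
        hist = [0] * levels
        for v in img:
            hist[v] += 1
        total, acc, out = len(img), 0, []
        for h in hist:
            acc += h
            out.append(acc / total)
        return out

    cs, cr = cdf(source), cdf(reference)
    # For each source level, pick the reference level with the closest CDF value
    mapping = [min(range(levels), key=lambda r: abs(cr[r] - cs[s]))
               for s in range(levels)]
    return [mapping[v] for v in source]
```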

  7. Thymus Gland Anatomy

    MedlinePlus

    Title: Thymus Gland, Adult, Anatomy. Description: Anatomy of the thymus gland; drawing shows ...

  8. Normal Pancreas Anatomy

    MedlinePlus

    Title: Pancreas Anatomy. Description: Anatomy of the pancreas; drawing shows the ...

  9. Improving Cerebellar Segmentation with Statistical Fusion.

    PubMed

    Plassard, Andrew J; Yang, Zhen; Prince, Jerry L; Claassen, Daniel O; Landman, Bennett A

    2016-02-27

    The cerebellum is a somatotopically organized central component of the central nervous system well known to be involved with motor coordination and increasingly recognized roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.
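The SIMPLE idea that Non-Local SIMPLE extends can be sketched as alternating between fusing the selected atlases and discarding those that agree poorly with the current estimate. A binary-mask toy version (the paper's patch-based performance model is considerably more elaborate):

```python
def simple_fusion(atlas_masks, dice_cutoff=0.7, iters=5):
    """SIMPLE-style fusion: alternately majority-vote, then drop atlases
    whose Dice overlap with the current estimate is below a cutoff."""
    def vote(masks):
        n = len(masks)
        return [1 if 2 * sum(m[v] for m in masks) > n else 0
                for v in range(len(masks[0]))]

    def dice(a, b):
        inter = sum(1 for x, y in zip(a, b) if x and y)
        denom = sum(a) + sum(b)
        return 2 * inter / denom if denom else 0.0

    selected = list(atlas_masks)
    for _ in range(iters):
        estimate = vote(selected)
        kept = [m for m in selected if dice(m, estimate) >= dice_cutoff]
        if not kept or len(kept) == len(selected):
            break  # converged: no atlas removed (or none would remain)
        selected = kept
    return vote(selected)
```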

  10. Improving cerebellar segmentation with statistical fusion

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system well known to be involved with motor coordination and increasingly recognized roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.

  11. Improving Cerebellar Segmentation with Statistical Fusion

    PubMed Central

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-01-01

    The cerebellum is a somatotopically organized central component of the central nervous system well known to be involved with motor coordination and increasingly recognized roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution. PMID:27127334

  12. Homeomorphic Brain Image Segmentation with Topological and Statistical Atlases

    PubMed Central

    Bazin, Pierre-Louis; Pham, Dzung L.

    2008-01-01

    Atlas-based segmentation techniques are often employed to encode anatomical information for the delineation of multiple structures in magnetic resonance images of the brain. One of the primary challenges of these approaches is to efficiently model qualitative and quantitative anatomical knowledge without introducing a strong bias toward certain anatomical preferences when segmenting new images. This paper explores the use of topological information as a prior and proposes a segmentation framework based on both topological and statistical atlases of brain anatomy. Topology can be used to describe continuity of structures, as well as the relationships between structures, and is often a critical component in cortical surface reconstruction and deformation-based morphometry. Our method guarantees strict topological equivalence between the segmented image and the atlas, and relies only weakly on a statistical atlas of shape. Tissue classification and fast marching methods are used to provide a powerful and flexible framework to handle multiple image contrasts, high levels of noise, gain field inhomogeneities, and variable anatomies. The segmentation algorithm has been validated on simulated and real brain image data and made freely available to researchers. Our experiments demonstrate the accuracy and robustness of the method and the limited influence of the statistical atlas. PMID:18640069

  13. Quantification of left ventricular function and mass in cardiac Dual-Source CT (DSCT) exams: comparison of manual and semiautomatic segmentation algorithms.

    PubMed

    Bastarrika, Gorka; Arraiza, María; Pueyo, Jesús C; Herraiz, María J; Zudaire, Beatriz; Villanueva, Alberto

    2008-05-01

    The purpose of our study was to evaluate the reliability of left ventricular (LV) function and mass quantification in cardiac DSCT exams, comparing manual contour tracing with a region-growing-based semiautomatic segmentation analysis software. Thirty-three consecutive patients who underwent cardiac DSCT exams were included. Axial 1-mm slices were used for the semiautomated technique, and short-axis 8-mm slice thickness multiphase image reconstructions were the basis for manual contour tracing. Left ventricular volumes, ejection fraction and myocardial mass were assessed by both segmentation methods. The length of time needed for both techniques was also recorded. Left ventricular functional parameters derived from the semiautomatic contour detection algorithm were not statistically different from manual tracing and showed an excellent correlation (p<0.001). The semiautomatic contour detection algorithm overestimated LV mass (180.30+/-44.74 g) compared with manual contour tracing (156.07+/-46.29 g) (p<0.001). This software allowed a significant reduction of the time needed for global LV assessment (mean 174.16+/-71.53 s, p<0.001). Objective quantification of LV function using the evaluated region-growing-based semiautomatic segmentation analysis software is feasible, accurate, reliable and time-effective. However, further improvements are needed to equal the results achieved by manual contour tracing, especially with regard to LV mass quantification.
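The functional parameters compared by both segmentation methods follow from the segmented volumes by standard formulas. A sketch (the 1.05 g/ml myocardial density is a commonly assumed constant, not stated in this abstract):

```python
def lv_metrics(edv_ml, esv_ml, myocardial_volume_ml, density_g_per_ml=1.05):
    """Stroke volume, ejection fraction (%) and LV mass from segmented volumes."""
    sv = edv_ml - esv_ml                         # stroke volume
    ef = 100.0 * sv / edv_ml                     # ejection fraction
    mass_g = myocardial_volume_ml * density_g_per_ml
    return sv, ef, mass_g
```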

  14. Algorithme d'optimisation du profil vertical pour un segment de vol en croisière avec une contrainte d'heure d'arrivée requise

    NASA Astrophysics Data System (ADS)

    Dancila, Radu Ioan

    This thesis presents the development of an algorithm that determines the optimal vertical navigation (VNAV) profile for an aircraft flying a cruise segment, along a given lateral navigation (LNAV) profile, with a required time of arrival (RTA) constraint. The algorithm is intended for implementation into a Flight Management System (FMS) as a new feature that gives advisory information regarding the optimal VNAV profile. The optimization objective is to minimize the total cost associated with flying the cruise segment while arriving at the end of the segment within an imposed time window. For the vertical navigation profiles yielding a time of arrival within the imposed limits, the degree of fulfillment of the RTA constraint is quantified by a cost proportional with the absolute value of the difference between the actual time of arrival and the RTA. The VNAV profiles evaluated in this thesis are characterized by identical altitudes at the beginning and at the end of the profile, they have no more than one step altitude and are flown at constant speed. The acceleration and deceleration segments are not taken into account. The altitude and speed ranges to be used for the VNAV profiles are specified as input parameters for the algorithm. The algorithm described in this thesis is developed in MATLAB. At each altitude, in the range of altitudes considered for the VNAV profiles, a binary search is performed in order to identify the speed interval that yields a time of arrival compatible with the RTA constraint and the profile that produces a minimum total cost is retained. The performance parameters that determine the total cost for flying a particular VNAV profile, the fuel burn and the flight time, are calculated based on the aircraft's specific performance data and configuration, climb/descent profile, the altitude at the beginning of the VNAV profile, the VNAV and LNAV profiles and the atmospheric conditions. 
These calculations were validated using data generated by a
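The binary search over speed described above can be sketched as a bisection on still-air flight time. A simplification: wind, fuel-dependent cost and the cost index are ignored, and all names are illustrative:

```python
def find_speed_for_rta(distance_nm, rta_s, v_lo, v_hi, tol_s=1.0):
    """Bisect on constant cruise speed (knots) until the still-air flight
    time over the segment matches the required time of arrival (seconds)."""
    v = 0.5 * (v_lo + v_hi)
    for _ in range(60):
        v = 0.5 * (v_lo + v_hi)
        t_s = distance_nm / v * 3600.0  # hours -> seconds
        if abs(t_s - rta_s) <= tol_s:
            break
        if t_s > rta_s:   # arriving late: need a faster speed
            v_lo = v
        else:             # arriving early: need a slower speed
            v_hi = v
    return v
```

Flight time decreases monotonically with speed, which is what makes the bisection valid; the full algorithm repeats this search at each candidate altitude and keeps the minimum-cost profile.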

  15. Left atrium segmentation for atrial fibrillation ablation

    NASA Astrophysics Data System (ADS)

    Karim, R.; Mohiaddin, R.; Rueckert, D.

    2008-03-01

    Segmentation of the left atrium is vital for pre-operative assessment of its anatomy in radio-frequency catheter ablation (RFCA) surgery. RFCA is commonly used for treating atrial fibrillation. In this paper we present a semi-automatic approach for segmenting the left atrium and the pulmonary veins from MR angiography (MRA) data sets. We also present an automatic approach for further subdividing the segmented atrium into the atrium body and the pulmonary veins. The segmentation algorithm is based on the notion that in MRA the atrium becomes connected to surrounding structures via partial-volume-affected voxels and narrow vessels; the atrium can be separated if these regions are characterized and identified. The blood pool, obtained by subtracting the pre- and post-contrast scans, is first segmented using a region-growing approach. The segmented blood pool is then subdivided into disjoint subdivisions based on its Euclidean distance transform. These subdivisions are then merged automatically, starting from a seed point and stopping at points where the atrium leaks into a neighbouring structure. The resulting merged subdivisions produce the segmented atrium. Measuring the size of the pulmonary vein ostium is vital for selecting the optimal Lasso catheter diameter. We present a second technique for automatically identifying the atrium body from segmented left atrium images. The separating surface between the atrium body and the pulmonary veins gives the ostia locations and can play an important role in measuring their diameters. The technique relies on evolving interfaces modelled using level sets. Results are presented on 20 patient MRA datasets.
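The initial blood-pool extraction uses region growing, which can be sketched as a breadth-first flood from a seed, constrained to an intensity window (a generic sketch, not the authors' code):

```python
from collections import deque

def region_grow(image, seed, low, high):
    """Grow a region from seed, accepting 4-connected pixels in [low, high]."""
    h, w = len(image), len(image[0])
    region, queue = set(), deque([seed])
    while queue:
        y, x = queue.popleft()
        if (y, x) in region or not (0 <= y < h and 0 <= x < w):
            continue
        if low <= image[y][x] <= high:
            region.add((y, x))
            queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return region
```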

  16. Performance of a simple chromatin-rich segmentation algorithm in quantifying basal cell carcinoma from histology images

    PubMed Central

    2012-01-01

    Background The use of digital imaging and algorithm-assisted identification of regions of interest is revolutionizing the practice of anatomic pathology. Currently automated methods for extracting the tumour regions in basal cell carcinomas are lacking. In this manuscript a colour-deconvolution based tumour extraction algorithm is presented. Findings Haematoxylin and eosin stained basal cell carcinoma histology slides were digitized and analyzed using the open source image analysis program ImageJ. The pixels belonging to tumours were identified by the algorithm, and the performance of the algorithm was evaluated by comparing the pixels identified as malignant with a manually determined dataset. The algorithm achieved superior results with the nodular tumour subtype. Pre-processing using colour deconvolution resulted in a slight decrease in sensitivity, but a significant increase in specificity. The overall sensitivity and specificity of the algorithm was 91.0% and 86.4% respectively, resulting in a positive predictive value of 63.3% and a negative predictive value of 94.2%. Conclusions The proposed image analysis algorithm demonstrates the feasibility of automatically extracting tumour regions from digitized basal cell carcinoma histology slides. The proposed algorithm may be adaptable to other stain combinations and tumour types. PMID:22251818
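The reported sensitivity, specificity, PPV and NPV all derive from per-pixel confusion counts against the manual reference. A minimal sketch:

```python
def confusion_metrics(predicted, truth):
    """Sensitivity, specificity, PPV and NPV from binary pixel masks."""
    tp = sum(1 for p, t in zip(predicted, truth) if p and t)
    tn = sum(1 for p, t in zip(predicted, truth) if not p and not t)
    fp = sum(1 for p, t in zip(predicted, truth) if p and not t)
    fn = sum(1 for p, t in zip(predicted, truth) if not p and t)
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}
```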

  17. Fast adaptive algorithms for low-level scene analysis: applications of polar exponential grid (PEG) representation to high-speed, scale-and-rotation invariant target segmentation

    SciTech Connect

    Schenker, P.S.; Wong, K.M.; Cande, E.G.

    1981-01-01

    Presents results of experimental studies in image understanding. Two experiments are discussed, one on image correlation and another on target boundary estimation. The experiments are demonstrative of polar exponential grid (PEG) representation, an approach to sensory data coding which the authors believe will facilitate problems in 3-D machine perception. The discussion of the image correlation experiment is largely an exposition of the PEG-representation concept and approaches to its computer implementation. A robust, stochastic, parallel-computation segmentation algorithm, the PEG parallel hierarchical ripple filter (PEG-PHRF), is presented. 18 references.
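The PEG representation is essentially a log-polar resampling, under which scaling and rotation about the fixation point become simple translations. A coordinate-mapping sketch of that property:

```python
import math

def to_log_polar(x, y, cx, cy):
    """Map an image point to PEG-style coordinates (log radius, angle).
    Scaling about (cx, cy) shifts the first coordinate; rotation shifts
    the second -- the basis of scale-and-rotation invariant matching."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    return math.log(r), math.atan2(dy, dx)
```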

  18. [Forbidden anatomy].

    PubMed

    Holck, Per

    2004-12-16

    For centuries, anatomists have resorted to every available course of action to obtain material for dissection while avoiding prosecution for grave robbery, which was at times the only way to get hold of cadavers. Stealing the newly dead from churchyards and offering them for sale to anatomical institutions was not uncommon in the 19th century. "Resurrectionists"--as these thieves were called, because they made the dead "alive" again--were seen as necessary for the teaching of anatomy in Victorian Britain. In the 1820s a scandal was revealed in Scotland, when it was discovered that some people even committed murder to make money from supplying anatomists with human cadavers. Two men, William Burke and William Hare, became particularly notorious because of their "business" with the celebrated anatomist Robert Knox in Edinburgh.

  19. Regulatory Anatomy

    PubMed Central

    2015-01-01

    This article proposes the term “safety logics” to understand attempts within the European Union (EU) to harmonize member state legislation to ensure a safe and stable supply of human biological material for transplants and transfusions. With safety logics, I refer to assemblages of discourses, legal documents, technological devices, organizational structures, and work practices aimed at minimizing risk. I use this term to reorient the analytical attention with respect to safety regulation. Instead of evaluating whether safety is achieved, the point is to explore the types of “safety” produced through these logics as well as to consider the sometimes unintended consequences of such safety work. In fact, the EU rules have been giving rise to complaints from practitioners finding the directives problematic and inadequate. In this article, I explore the problems practitioners face and why they arise. In short, I expose the regulatory anatomy of the policy landscape. PMID:26139952

  20. Automated multidetector row CT dataset segmentation with an interactive watershed transform (IWT) algorithm: Part 1. Understanding the IWT technique.

    PubMed

    Heath, David G; Hahn, Horst K; Johnson, Pamela T; Fishman, Elliot K

    2008-12-01

    Segmentation of volumetric computed tomography (CT) datasets facilitates evaluation of 3D CT angiography renderings, particularly with maximum intensity projection displays. This manuscript describes a novel automated bone editing program that uses an interactive watershed transform (IWT) technique to rapidly extract the skeletal structures from the volume. Advantages of this tool include efficient segmentation of large datasets with minimal need for correction. In the first of this two-part series, the principles of the IWT technique are reviewed, followed by a discussion of clinical utility based on our experience.

  1. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula.

    PubMed

    Mera, David; Cotos, José M; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-10-01

    Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time.
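
    Locally adaptive thresholding of the kind described, where the cutoff tracks local backscatter statistics instead of a single global value, can be sketched as follows (a generic local-mean scheme with assumed window and offset parameters; the published algorithm additionally folds in wind-field estimates):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_dark_spot_mask(img, window=15, offset=0.2):
    """Flag pixels darker than their local mean by `offset`.

    Oil slicks damp capillary waves and appear as dark patches in SAR
    backscatter, so candidate pixels are those well below the mean of
    their `window` x `window` neighbourhood.
    """
    local_mean = uniform_filter(img.astype(float), size=window)
    return img < local_mean - offset

# Toy scene: bright sea clutter with one dark rectangular "slick".
sea = np.ones((64, 64))
sea[20:30, 20:40] = 0.1
mask = adaptive_dark_spot_mask(sea)
```

    Because the threshold follows the local mean, the same rule tolerates the large-scale brightness gradients that wind variation imposes on SAR scenes, which is where a single global threshold fails.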

  2. Quick Dissection of the Segmental Bronchi

    ERIC Educational Resources Information Center

    Nakajima, Yuji

    2010-01-01

    Knowledge of the three-dimensional anatomy of the bronchopulmonary segments is essential for respiratory medicine. This report describes a quick guide for dissecting the segmental bronchi in formaldehyde-fixed human material. All segmental bronchi are easy to dissect, and thus, this exercise will help medical students to better understand the…

  4. Automated lung segmentation of low resolution CT scans of rats

    NASA Astrophysics Data System (ADS)

    Rizzo, Benjamin M.; Haworth, Steven T.; Clough, Anne V.

    2014-03-01

    Dual modality micro-CT and SPECT imaging can play an important role in preclinical studies designed to investigate mechanisms, progression, and therapies for acute lung injury in rats. SPECT imaging involves examining the uptake of radiopharmaceuticals within the lung, with the hypothesis that uptake is sensitive to the health or disease status of the lung tissue. Methods of quantifying lung uptake and comparison of right and left lung uptake generally begin with identifying and segmenting the lung region within the 3D reconstructed SPECT volume. However, identification of the lung boundaries and the fissure between the left and right lung is not always possible from the SPECT images directly since the radiopharmaceutical may be taken up by other surrounding tissues. Thus, our SPECT protocol begins with a fast CT scan, the lung boundaries are identified from the CT volume, and the CT region is coregistered with the SPECT volume to obtain the SPECT lung region. Segmenting rat lungs within the CT volume is particularly challenging due to the relatively low resolution of the images and the rat's unique anatomy. To address this, we developed an automated segmentation algorithm for low resolution micro-CT scans that utilizes depth maps to detect fissures on the surface of the lung volume. The fissure's surface location is in turn used to interpolate the fissure throughout the lung volume. Results indicate that the method yields left and right lung regions consistent with rat lung anatomy.
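
    The depth-map idea used for fissure detection can be illustrated simply: for each ray cast through the volume, record the depth of the first lung voxel, so surface grooves such as fissures appear as local maxima in depth. A minimal sketch on a hypothetical binary volume:

```python
import numpy as np

def depth_map(volume):
    """Depth of the first foreground voxel along axis 0 for each (y, x) ray.

    Rays that never hit foreground are assigned the full depth, i.e.
    treated as background.
    """
    hit = volume.any(axis=0)
    first = np.argmax(volume, axis=0)    # index of first True per ray
    first[~hit] = volume.shape[0]        # ray missed the lung entirely
    return first

# Synthetic "lung" slab whose surface sits at depth 2, with a
# one-voxel-wide groove (fissure-like) running along x = 4 at depth 4.
vol = np.zeros((10, 8, 8), dtype=bool)
vol[2:, :, :] = True
vol[2:4, :, 4] = False
d = depth_map(vol)
```

    Ridges or valleys in `d` mark surface indentations; the paper's method then interpolates the detected fissure line through the interior of the volume.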

  5. Algorithm for localized adaptive diffuse optical tomography and its application in bioluminescence tomography

    NASA Astrophysics Data System (ADS)

    Naser, Mohamed A.; Patterson, Michael S.; Wong, John W.

    2014-04-01

    A reconstruction algorithm for diffuse optical tomography based on diffusion theory and the finite element method is described. The algorithm reconstructs the optical properties in a permissible domain or region-of-interest to reduce the number of unknowns. The algorithm can be used to reconstruct optical properties for a segmented object (where a CT-scan or MRI is available) or a non-segmented object. For the latter, an adaptive segmentation algorithm merges contiguous regions with similar optical properties, thereby reducing the number of unknowns. In calculating the Jacobian matrix the algorithm uses an efficient direct method, so the required time is comparable to that needed for a single forward calculation. The reconstructed optical properties using segmented, non-segmented, and adaptively segmented 3D mouse anatomy (MOBY) are used to perform bioluminescence tomography (BLT) for two simulated internal sources. The BLT results suggest that the accuracy of reconstruction of total source power obtained without the segmentation provided by an auxiliary imaging method such as x-ray CT is comparable to that obtained when using perfect segmentation.

  6. Automatic segmentation of the optic nerves and chiasm in CT and MR using the atlas-navigated optimal medial axis and deformable-model algorithm

    NASA Astrophysics Data System (ADS)

    Noble, Jack H.; Dawant, Benoit M.

    2009-02-01

    In recent years, radiation therapy has become the preferred treatment for many types of head and neck tumors. To minimize side effects, radiation beams are planned pre-operatively to avoid over-radiation of vital structures, such as the optic nerves and chiasm, which are essential to the visual process. To plan the procedure, these structures must be identified using CT/MR imagery. Currently, a radiation oncologist must manually segment the structures, which is both inefficient and ineffective. Clearly, an automated approach could be beneficial to the planning process. The problem is difficult due to the shape variability and low image contrast of the structures, and several attempts at automatic localization have been reported with marginal results. In this work we present a novel method for localizing the optic nerves and chiasm in CT/MR volumes using the atlas-navigated optimal medial axis and deformable-model algorithm (NOMAD). NOMAD uses a statistical model and image registration to provide a priori local intensity and shape information to both a medial axis extraction procedure and a deformable model, which deforms the medial axis and completes the segmentation process. This approach achieves mean Dice coefficients greater than 0.8 for both the optic nerves and the chiasm when compared to manual segmentations over ten test cases. Comparison of quantitative results with existing techniques shows that this method produces more accurate results.
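
    The Dice coefficient used above to score agreement between automatic and manual segmentations is a simple overlap ratio; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice similarity 2|A ∩ B| / (|A| + |B|) for two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0   # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two 6-pixel masks sharing 3 pixels: Dice = 2*3 / (6+6) = 0.5.
auto = np.zeros((4, 6), dtype=bool)
auto[1, :] = True
manual = np.zeros((4, 6), dtype=bool)
manual[1, 3:] = True
manual[2, :3] = True
```

    A Dice score of 1.0 means perfect overlap and 0.0 means none; values above 0.8, as reported here, are generally considered good agreement for thin structures like the optic nerves.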

  7. Nail anatomy.

    PubMed

    de Berker, David</