Sample records for image segmentation analysis

  1. Monitoring Change Through Hierarchical Segmentation of Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Lawrence, William T.

    2005-01-01

    NASA's Goddard Space Flight Center has developed a fast and effective method for generating image segmentation hierarchies. These segmentation hierarchies organize image data in a manner that makes their information content more accessible for analysis. Image segmentation enables analysis through the examination of image regions rather than individual image pixels. In addition, the segmentation hierarchy provides additional analysis clues through the tracing of the behavior of image region characteristics at several levels of segmentation detail. The potential for extracting the information content from imagery data based on segmentation hierarchies has not been fully explored for the benefit of the Earth and space science communities. This paper explores the potential of exploiting these segmentation hierarchies for the analysis of multi-date data sets, and for the particular application of change monitoring.

  2. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
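
    For illustration, here is a minimal sketch of the "globally best merge first" idea on a tiny grayscale array. It is a plain sequential Python sketch, not the paper's Massively Parallel Processor implementation; the toy image, 4-connectivity, and the mean-difference merge criterion are assumptions.

    ```python
    # Sequential sketch of best-merge region growing: start with one region per
    # pixel and repeatedly merge the globally most similar adjacent pair.
    import numpy as np

    def best_merge_segmentation(img, n_regions):
        h, w = img.shape
        labels = np.arange(h * w).reshape(h, w)          # one region per pixel
        sums = {i: float(v) for i, v in enumerate(img.ravel())}
        counts = {i: 1 for i in range(h * w)}

        def mean(r):
            return sums[r] / counts[r]

        def adjacent_pairs():
            pairs = set()
            for y in range(h):
                for x in range(w):
                    for dy, dx in ((0, 1), (1, 0)):      # 4-connectivity
                        yy, xx = y + dy, x + dx
                        if yy < h and xx < w and labels[y, x] != labels[yy, xx]:
                            pairs.add(tuple(sorted((labels[y, x], labels[yy, xx]))))
            return pairs

        while len(sums) > n_regions:
            # globally best merge: adjacent pair with the smallest mean difference
            a, b = min(adjacent_pairs(), key=lambda p: abs(mean(p[0]) - mean(p[1])))
            labels[labels == b] = a                      # merge region b into a
            sums[a] += sums.pop(b)
            counts[a] += counts.pop(b)
        return labels

    demo = np.array([[10, 11, 50, 52],
                     [12, 10, 51, 53],
                     [12, 13, 49, 50]], dtype=float)
    print(best_merge_segmentation(demo, n_regions=2))
    ```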

  3. An image segmentation method for apple sorting and grading using support vector machine and Otsu's method

    USDA-ARS?s Scientific Manuscript database

    Segmentation is the first step in image analysis to subdivide an image into meaningful regions. The segmentation result directly affects the subsequent image analysis. The objective of the research was to develop an automatic adjustable algorithm for segmentation of color images, using linear suppor...
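
    The record's pipeline pairs a support vector machine with Otsu's method; the following minimal sketch shows only the Otsu thresholding step, using scikit-image on a synthetic bimodal image (an assumption, not the paper's data or full algorithm).

    ```python
    # Minimal Otsu global thresholding sketch with scikit-image.
    import numpy as np
    from skimage.filters import threshold_otsu

    rng = np.random.default_rng(0)
    img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(170, 15, 5000)])
    img = img.reshape(100, 100)                      # synthetic two-class image

    t = threshold_otsu(img)                          # maximizes between-class variance
    mask = img > t                                   # foreground / background split
    print(f"Otsu threshold: {t:.1f}, foreground fraction: {mask.mean():.2f}")
    ```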

  4. Colony image acquisition and genetic segmentation algorithm and colony analyses

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2012-01-01

    Colony analysis is used in many areas of engineering such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, and sterility testing. In order to reduce labor and increase analysis accuracy, many researchers and developers have worked on image analysis systems. The main problems in these systems are image acquisition, image segmentation and image analysis. In this paper, to acquire colony images with good quality, an illumination box was constructed in which the distances between the lights and the dish, the camera lens and the lights, and the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows the segmentation problem to be treated as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All of the above visual colony parameters can be selected and combined to form new engineering parameters, and the colony analysis can be applied to different applications.

  5. Identification of uncommon objects in containers

    DOEpatents

    Bremer, Peer-Timo; Kim, Hyojin; Thiagarajan, Jayaraman J.

    2017-09-12

    A system for identifying in an image an object that is commonly found in a collection of images and for identifying a portion of an image that represents an object based on a consensus analysis of segmentations of the image. The system collects images of containers that contain objects for generating a collection of common objects within the containers. To process the images, the system generates a segmentation of each image. The image analysis system may also generate multiple segmentations for each image by introducing variations in the selection of voxels to be merged into a segment. The system then generates clusters of the segments based on similarity among the segments. Each cluster represents a common object found in the containers. Once the clustering is complete, the system may be used to identify common objects in images of new containers based on similarity between segments of images and the clusters.

  6. Utilizing Hierarchical Segmentation to Generate Water and Snow Masks to Facilitate Monitoring Change with Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Lawrence, William T.; Plaza, Antonio J.

    2006-01-01

    The hierarchical segmentation (HSEG) algorithm is a hybrid of hierarchical step-wise optimization and constrained spectral clustering that produces a hierarchical set of image segmentations. This segmentation hierarchy organizes image data in a manner that makes the image's information content more accessible for analysis by enabling region-based analysis. This paper discusses data analysis with HSEG and describes several measures of region characteristics that may be useful in analyzing segmentation hierarchies for various applications. Segmentation hierarchy analysis for generating land/water and snow/ice masks from MODIS (Moderate Resolution Imaging Spectroradiometer) data was demonstrated and compared with the corresponding MODIS standard products. The masks based on HSEG segmentation hierarchies compare very favorably to the MODIS standard products. Further, the HSEG-based land/water mask was specifically tailored to the MODIS data, and the HSEG snow/ice mask did not require the setting of a critical threshold as required in the production of the corresponding MODIS standard product.

  7. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cooke, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.

  8. Study on the application of MRF and the D-S theory to image segmentation of the human brain and quantitative analysis of the brain tissue

    NASA Astrophysics Data System (ADS)

    Guan, Yihong; Luo, Yatao; Yang, Tao; Qiu, Lei; Li, Junchang

    2012-01-01

    The spatial information captured by a Markov random field (MRF) model was used in image segmentation; it can effectively remove noise and produce more accurate segmentation results. Based on the fuzziness and clustering of pixel grayscale information, we find the clustering centers of the different tissues and the background in the medical image using the fuzzy c-means clustering method. We then find each threshold point for multi-threshold segmentation using a two-dimensional histogram method and segment the image accordingly. Multivariate information is fused on the basis of the Dempster-Shafer evidence theory to obtain the image fusion and segmentation. This paper adopts the above three theories to propose a new human brain image segmentation method. Experimental results show that the segmentation result is more in line with human vision and is of vital significance for accurate analysis and application of brain tissues.
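
    As a rough illustration of the clustering-center step described above, here is a small fuzzy c-means implementation on 1-D grayscale values; the synthetic data, number of clusters, and fuzzifier m are assumptions, and this is not the authors' full MRF/D-S pipeline.

    ```python
    # Fuzzy c-means on grayscale values: alternate membership and center updates.
    import numpy as np

    def fuzzy_cmeans_1d(x, c=3, m=2.0, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = rng.choice(x, c)
        for _ in range(iters):
            d = np.abs(x[:, None] - centers[None, :]) + 1e-9    # distances to centers
            u = 1.0 / (d ** (2 / (m - 1)))                      # memberships (unnormalized)
            u /= u.sum(axis=1, keepdims=True)
            centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
        return np.sort(centers)

    rng = np.random.default_rng(2)
    gray = np.concatenate([rng.normal(30, 5, 3000),    # background-like cluster
                           rng.normal(110, 8, 3000),   # gray-matter-like cluster
                           rng.normal(200, 6, 3000)])  # white-matter-like cluster
    print(fuzzy_cmeans_1d(gray))                       # estimated clustering centers
    ```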

  9. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    PubMed

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker of the diagnosis of muscular dystrophy. In nuclei segmentation one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background by using a local Otsu threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
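    A rough sketch of two key steps of such a pipeline, local Otsu thresholding followed by a distance-transform watershed to split touching nuclei, using scikit-image; the synthetic image is an assumption, and the Bayesian-network classification step is omitted.

    ```python
    # Local Otsu threshold, then watershed on the distance transform.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters.rank import otsu as local_otsu
    from skimage.morphology import disk
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    # synthetic 8-bit image: two touching bright "nuclei" on a dark background
    img = np.zeros((80, 80), dtype=np.uint8)
    yy, xx = np.mgrid[:80, :80]
    img[(yy - 40) ** 2 + (xx - 30) ** 2 < 15 ** 2] = 180
    img[(yy - 40) ** 2 + (xx - 52) ** 2 < 15 ** 2] = 180
    img += np.random.default_rng(3).integers(0, 20, img.shape).astype(np.uint8)

    mask = img > local_otsu(img, disk(15))                # local Otsu threshold
    distance = ndi.distance_transform_edt(mask)
    coords = peak_local_max(distance, min_distance=10, labels=mask)
    markers = np.zeros_like(distance, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=mask)     # split the touching nuclei
    print("nuclei found:", labels.max())
    ```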

  10. Design and validation of Segment--freely available software for cardiovascular image analysis.

    PubMed

    Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan

    2010-01-11

    Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.

  11. MRI Segmentation of the Human Brain: Challenges, Methods, and Applications

    PubMed Central

    Despotović, Ivana

    2015-01-01

    Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation. PMID:25945121

  12. Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Bales, Ben; Pollock, Tresa; Petzold, Linda

    2017-06-01

    Segmentation-based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and subsequent analysis process. The downside is that computing micrograph segmentations with data from morphologically complex microstructures gathered with error-prone detectors is challenging and, if no special care is taken, the artifacts of the segmentation will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two-phase nickel-base superalloy microstructure as a model system, a new methodology for analysis of precipitate shapes using a segmentation-free approach based on the histogram of oriented gradients feature descriptor, a classic tool in image analysis. The benefits of this methodology for analysis of microstructure in two and three dimensions are demonstrated.
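    For reference, a minimal sketch of the histogram-of-oriented-gradients descriptor this approach builds on, computed with scikit-image on a toy patch; the patch and the HOG parameters are assumptions, not the paper's settings.

    ```python
    # HOG feature vector for a toy micrograph patch.
    import numpy as np
    from skimage.feature import hog

    patch = np.zeros((64, 64))
    patch[16:48, 16:48] = 1.0                 # a square "precipitate" on a flat matrix

    features = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)
    print("HOG feature vector length:", features.shape[0])
    ```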

  13. 3D Texture Features Mining for MRI Brain Tumor Identification

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Saba, Tanzila; Nayer, Fatima; Syed, Afraz Zahra

    2014-03-01

    Medical image segmentation is a process to extract regions of interest and to divide an image into individually meaningful, homogeneous components. These components have a strong relationship with the objects of interest in an image. For computer-aided diagnosis and therapy, medical image segmentation is a mandatory initial step. It is a sophisticated and challenging task because of the complex nature of medical images; indeed, successful medical image analysis depends heavily on segmentation accuracy. Texture is one of the major features used to identify regions of interest in an image or to classify an object, and 2D texture features yield poor classification results. Hence, this paper presents 3D feature extraction using texture analysis with SVM as the segmentation technique in the testing methodology.

  14. A region-based segmentation of tumour from brain CT images using nonlinear support vector machine classifier.

    PubMed

    Nanthagopal, A Padma; Rajamony, R Sukanesh

    2012-07-01

    The proposed system provides new textural information for segmenting tumours efficiently and accurately, with less computational time, from benign and malignant tumour images, especially for smaller tumour regions in computed tomography (CT) images. Region-based segmentation of tumour from brain CT image data is an important but time-consuming task performed manually by medical experts. The objective of this work is to segment brain tumour from CT images using combined grey and texture features with new edge features and a nonlinear support vector machine (SVM) classifier. The selected optimal features are used to model and train the nonlinear SVM classifier to segment the tumour from computed tomography images, and the segmentation accuracies are evaluated for each slice of the tumour image. The method is applied to real data of 80 benign and malignant tumour images. The results are compared with the radiologist-labelled ground truth. Quantitative analysis between the ground truth and the segmented tumour is presented in terms of segmentation accuracy and the overlap similarity measure, the Dice metric. From the analysis and performance measures such as segmentation accuracy and Dice metric, it is inferred that better segmentation accuracy and a higher Dice metric are achieved with the normalized cut segmentation method than with the fuzzy c-means clustering method.

  15. Rough-Fuzzy Clustering and Unsupervised Feature Selection for Wavelet Based MR Image Segmentation

    PubMed Central

    Maji, Pradipta; Roy, Shaswati

    2015-01-01

    Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, integrating judiciously the merits of rough-fuzzy computing and multiresolution image analysis techniques. The proposed method assumes that the major brain tissues in the MR images, namely gray matter, white matter, and cerebrospinal fluid, have different textural properties. The dyadic wavelet analysis is used to extract the scale-space feature vector for each pixel, while the rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method, based on the maximum relevance-maximum significance criterion, is introduced to select relevant and significant textural features for the segmentation problem, while a mathematical-morphology-based skull-stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices. PMID:25848961

  16. Applications of magnetic resonance image segmentation in neurology

    NASA Astrophysics Data System (ADS)

    Heinonen, Tomi; Lahtinen, Antti J.; Dastidar, Prasun; Ryymin, Pertti; Laarne, Paeivi; Malmivuo, Jaakko; Laasonen, Erkki; Frey, Harry; Eskola, Hannu

    1999-05-01

    After the introduction of digital imaging devices in medicine, computerized tissue recognition and classification have become important in research and clinical applications. Segmented data can be applied in numerous research fields, including volumetric analysis of particular tissues and structures, construction of anatomical models, 3D visualization, and multimodal visualization, hence making segmentation essential in modern image analysis. In this research project, several PC-based software packages were developed in order to segment medical images, to visualize raw and segmented images in 3D, and to produce EEG brain maps in which MR images and EEG signals are integrated. The software package was tested and validated in numerous clinical research projects in a hospital environment.

  17. Three-dimensional murine airway segmentation in micro-CT images

    NASA Astrophysics Data System (ADS)

    Shi, Lijun; Thiesse, Jacqueline; McLennan, Geoffrey; Hoffman, Eric A.; Reinhardt, Joseph M.

    2007-03-01

    Thoracic imaging for small animals has emerged as an important tool for monitoring pulmonary disease progression and therapy response in genetically engineered animals. Micro-CT is becoming the standard thoracic imaging modality in small animal imaging because it can produce high-resolution images of the lung parenchyma, vasculature, and airways. Segmentation, measurement, and visualization of the airway tree is an important step in pulmonary image analysis. However, manual analysis of the airway tree in micro-CT images can be extremely time-consuming since a typical dataset is usually on the order of several gigabytes in size. Automated and semi-automated tools for micro-CT airway analysis are desirable. In this paper, we propose an automatic airway segmentation method for in vivo micro-CT images of the murine lung and validate our method by comparing the automatic results to manual tracing. Our method is based primarily on grayscale morphology. The results show good visual matches between manually segmented and automatically segmented trees. The average true positive volume fraction compared to manual analysis is 91.61%. The overall runtime for the automatic method is on the order of 30 minutes per volume compared to several hours to a few days for manual analysis.

  18. General Staining and Segmentation Procedures for High Content Imaging and Analysis.

    PubMed

    Chambers, Kevin M; Mandavilli, Bhaskar S; Dolman, Nick J; Janes, Michael S

    2018-01-01

    Automated quantitative fluorescence microscopy, also known as high content imaging (HCI), is a rapidly growing analytical approach in cell biology. Because automated image analysis relies heavily on robust demarcation of cells and subcellular regions, reliable methods for labeling cells are a critical component of the HCI workflow. Labeling of cells for image segmentation is typically performed with fluorescent probes that bind DNA for nuclear-based cell demarcation or with those which react with proteins for image analysis based on whole cell staining. These reagents, along with instrument and software settings, play an important role in the successful segmentation of cells in a population for automated and quantitative image analysis. In this chapter, we describe standard procedures for labeling and image segmentation in both live and fixed cell samples. The chapter will also provide troubleshooting guidelines for some of the common problems associated with these aspects of HCI.

  19. Automatic Cell Segmentation in Fluorescence Images of Confluent Cell Monolayers Using Multi-object Geometric Deformable Model.

    PubMed

    Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L

    2013-03-13

    With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.

  20. Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis

    PubMed Central

    Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina

    2015-01-01

    AIM To investigate and quantify changes in the branching patterns of the retina vascular network in diabetes using the fractal analysis method. METHODS This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina were examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. RESULTS It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using the fractal geometry. The average of fractal dimensions D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values of mild non-proliferative DR (NPDR) images (segmented and skeletonized versions). The average of fractal dimensions D for the normal images (segmented and skeletonized versions) is higher than the corresponding values of moderate NPDR images (segmented and skeletonized versions). The lowest values were found for the corresponding values of severe NPDR images (segmented and skeletonized versions). CONCLUSION The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantitatively characterized by means of the fractal dimension. Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from healthy individuals. PMID:26309878
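    The study computed fractal dimensions with ImageJ; as a rough illustration of the underlying idea, here is a box-counting estimate of D on a synthetic binary pattern (the pattern and box sizes are assumptions).

    ```python
    # Box-counting estimate of the fractal dimension of a binary vessel-like mask.
    import numpy as np

    def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
        counts = []
        for s in sizes:
            h, w = mask.shape
            trimmed = mask[:h - h % s, :w - w % s]
            blocks = trimmed.reshape(h // s, s, w // s, s)
            counts.append(blocks.any(axis=(1, 3)).sum())     # boxes containing vessel pixels
        # D is the negative slope of log(count) versus log(box size)
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope

    # synthetic "vessel" pattern: a diagonal line with short offset branches
    mask = np.zeros((256, 256), dtype=bool)
    idx = np.arange(256)
    mask[idx, idx] = True
    mask[idx[::8], (idx[::8] + 20) % 256] = True
    print(f"estimated fractal dimension D = {box_counting_dimension(mask):.2f}")
    ```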

  1. Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis.

    PubMed

    Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina

    2015-01-01

    To investigate and quantify changes in the branching patterns of the retina vascular network in diabetes using the fractal analysis method. This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina were examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using the fractal geometry. The average of fractal dimensions D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values of mild non-proliferative DR (NPDR) images (segmented and skeletonized versions). The average of fractal dimensions D for the normal images (segmented and skeletonized versions) is higher than the corresponding values of moderate NPDR images (segmented and skeletonized versions). The lowest values were found for the corresponding values of severe NPDR images (segmented and skeletonized versions). The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantitatively characterized by means of the fractal dimension. Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from healthy individuals.

  2. Automatic segmentation of time-lapse microscopy images depicting a live Dharma embryo.

    PubMed

    Zacharia, Eleni; Bondesson, Maria; Riu, Anne; Ducharme, Nicole A; Gustafsson, Jan-Åke; Kakadiaris, Ioannis A

    2011-01-01

    Biological inferences about the toxicity of chemicals reached during experiments on the zebrafish Dharma embryo can be greatly affected by the analysis of the time-lapse microscopy images depicting the embryo. Among the stages of image analysis, automatic and accurate segmentation of the Dharma embryo is the most crucial and challenging. In this paper, an accurate and automatic segmentation approach for the segmentation of the Dharma embryo data obtained by fluorescent time-lapse microscopy is proposed. Experiments performed in four stacks of 3D images over time have shown promising results.

  3. Image segmentation evaluation for very-large datasets

    NASA Astrophysics Data System (ADS)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  4. Automated condition-invariable neurite segmentation and synapse classification using textural analysis-based machine-learning algorithms

    PubMed Central

    Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly

    2013-01-01

    High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized for varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare the performance of our algorithm to manual segmentation and show that it achieves 90% accuracy, with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance levels under a wide range of image acquisition conditions, indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. This textural analysis-based machine-learning approach thus offers a high-performance, condition-invariable tool for automated neurite segmentation. PMID:23261652
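    A generic sketch of texture-based pixel classification (local-statistics features plus a random forest) is shown below; it is not the authors' algorithm, and the toy image, features, and labels are assumptions.

    ```python
    # Per-pixel texture features (local mean and standard deviation) feeding a
    # random forest pixel classifier on a toy image.
    import numpy as np
    from scipy import ndimage as ndi
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(7)
    img = rng.normal(0, 1, (64, 64))
    img[:, 30:34] += 3.0                                   # a bright "neurite"-like stripe

    local_mean = ndi.uniform_filter(img, size=5)
    local_sq = ndi.uniform_filter(img ** 2, size=5)
    local_std = np.sqrt(np.clip(local_sq - local_mean ** 2, 0, None))
    features = np.stack([local_mean.ravel(), local_std.ravel()], axis=1)

    labels = np.zeros_like(img, dtype=int)
    labels[:, 30:34] = 1                                   # toy ground truth for training
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(features, labels.ravel())
    pred = clf.predict(features).reshape(img.shape)
    print("predicted neurite pixels:", int(pred.sum()))
    ```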

  5. An interactive method based on the live wire for segmentation of the breast in mammography images.

    PubMed

    Zewei, Zhang; Tianyue, Wang; Li, Guo; Tingting, Wang; Lu, Xu

    2014-01-01

    In order to improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. Gabor filters and the FCM clustering algorithm are introduced into the definition of the Live Wire cost function. Using the FCM analysis of the image for edge enhancement, the interference of weak edges is eliminated, and clear segmentation results for breast lumps are obtained by applying the improved Live Wire to two cases of breast segmentation data. Compared with traditional image segmentation methods, experimental results show that the method achieves more accurate segmentation of breast lumps and provides a more accurate objective basis for quantitative and qualitative analysis of breast lumps.

  6. Automated vessel segmentation using cross-correlation and pooled covariance matrix analysis.

    PubMed

    Du, Jiang; Karimi, Afshin; Wu, Yijing; Korosec, Frank R; Grist, Thomas M; Mistretta, Charles A

    2011-04-01

    Time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA) provides contrast dynamics in the vasculature and allows vessel segmentation based on temporal correlation analysis. Here we present an automated vessel segmentation algorithm including automated generation of regions of interest (ROIs), cross-correlation and pooled sample covariance matrix analysis. The dynamic images are divided into multiple equal-sized regions. In each region, ROIs for artery, vein and background are generated using an iterative thresholding algorithm based on the contrast arrival time map and contrast enhancement map. Region-specific multi-feature cross-correlation analysis and pooled covariance matrix analysis are performed to calculate the Mahalanobis distances (MDs), which are used to automatically separate arteries from veins. This segmentation algorithm is applied to a dual-phase dynamic imaging acquisition scheme where low-resolution time-resolved images are acquired during the dynamic phase followed by high-frequency data acquisition at the steady-state phase. The segmented low-resolution arterial and venous images are then combined with the high-frequency data in k-space and inverse Fourier transformed to form the final segmented arterial and venous images. Results from volunteer and patient studies demonstrate the advantages of this automated vessel segmentation and dual phase data acquisition technique. Copyright © 2011 Elsevier Inc. All rights reserved.
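    As an illustration of the Mahalanobis-distance step, the sketch below compares a voxel's temporal features against artery and vein reference ROIs using a pooled covariance matrix; the feature values and ROI sizes are assumptions.

    ```python
    # Mahalanobis-distance classification of a voxel against artery/vein ROIs
    # using a pooled sample covariance matrix.
    import numpy as np

    rng = np.random.default_rng(4)
    artery_roi = rng.normal([8.0, 0.9], 0.3, size=(50, 2))   # (arrival time, peak correlation)
    vein_roi = rng.normal([14.0, 0.7], 0.3, size=(50, 2))
    voxel = np.array([13.5, 0.72])                            # candidate voxel features

    # pooled sample covariance of the two reference classes
    pooled_cov = (np.cov(artery_roi.T) * (len(artery_roi) - 1) +
                  np.cov(vein_roi.T) * (len(vein_roi) - 1)) / (len(artery_roi) + len(vein_roi) - 2)
    inv_cov = np.linalg.inv(pooled_cov)

    def mahalanobis(x, mean):
        d = x - mean
        return float(np.sqrt(d @ inv_cov @ d))

    d_art = mahalanobis(voxel, artery_roi.mean(axis=0))
    d_vein = mahalanobis(voxel, vein_roi.mean(axis=0))
    print("classified as", "artery" if d_art < d_vein else "vein")
    ```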

  7. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  8. The effect of input data transformations on object-based image analysis

    PubMed Central

    LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.

    2011-01-01

    The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829

  9. Metric Learning to Enhance Hyperspectral Image Segmentation

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Castano, Rebecca; Bue, Brian; Gilmore, Martha S.

    2013-01-01

    Unsupervised hyperspectral image segmentation can reveal spatial trends that show the physical structure of the scene to an analyst. They highlight borders and reveal areas of homogeneity and change. Segmentations are independently helpful for object recognition, and assist with automated production of symbolic maps. Additionally, a good segmentation can dramatically reduce the number of effective spectra in an image, enabling analyses that would otherwise be computationally prohibitive. Specifically, using an over-segmentation of the image instead of individual pixels can reduce noise and potentially improve the results of statistical post-analysis. In this innovation, a metric learning approach is presented to improve the performance of unsupervised hyperspectral image segmentation. The prototype demonstrations attempt a superpixel segmentation in which the image is conservatively over-segmented; that is, single surface features may be split into multiple segments, but each individual segment, or superpixel, is ensured to have homogeneous mineralogy.
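    A minimal sketch of conservative over-segmentation into superpixels, here using SLIC from recent scikit-image on a toy multi-band cube; the metric-learning component of the innovation is not shown, and the data and parameters are assumptions.

    ```python
    # SLIC over-segmentation of a toy multi-band cube, then per-superpixel mean
    # spectra to reduce the number of effective spectra.
    import numpy as np
    from skimage.segmentation import slic

    cube = np.random.default_rng(5).random((60, 60, 10))     # toy 10-band image
    superpixels = slic(cube, n_segments=150, compactness=0.1, channel_axis=-1)
    print("superpixels produced:", superpixels.max())

    mean_spectra = np.array([cube[superpixels == s].mean(axis=0)
                             for s in np.unique(superpixels)])
    print("effective spectra:", mean_spectra.shape[0], "instead of", 60 * 60)
    ```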

  10. Impact of CT perfusion imaging on the assessment of peripheral chronic pulmonary thromboembolism: clinical experience in 62 patients.

    PubMed

    Le Faivre, Julien; Duhamel, Alain; Khung, Suonita; Faivre, Jean-Baptiste; Lamblin, Nicolas; Remy, Jacques; Remy-Jardin, Martine

    2016-11-01

    To evaluate the impact of CT perfusion imaging on the detection of peripheral chronic pulmonary embolism (CPE), 62 patients underwent a dual-energy chest CT angiographic examination with (a) reconstruction of diagnostic and perfusion images and (b) depiction of vascular features of peripheral CPE on diagnostic images and of perfusion defects (20 segments/patient; total: 1240 segments examined). The interpretation of diagnostic images was of two types: (a) standard (i.e., based on cross-sectional images alone) or (b) detailed (i.e., based on cross-sectional images and MIPs). The segment-based analysis showed (a) 1179 segments analyzable on both imaging modalities and 61 segments rated as nonanalyzable on perfusion images; (b) the percentage of diseased segments was increased by 7.2% when perfusion imaging was compared to the detailed reading of diagnostic images, and by 26.6% when compared to the standard reading of images. At a patient level, the extent of peripheral CPE was higher on perfusion imaging, with a greater impact when compared to the standard reading of diagnostic images (number of patients with a greater number of diseased segments: n = 45; 72.6% of the study population). Perfusion imaging allows recognition of a greater extent of peripheral CPE compared to diagnostic imaging. • Dual-energy computed tomography generates standard diagnostic imaging and lung perfusion analysis. • Depiction of CPE on central arteries relies on standard diagnostic imaging. • Detection of peripheral CPE is improved by perfusion imaging.

  11. Validation tools for image segmentation

    NASA Astrophysics Data System (ADS)

    Padfield, Dirk; Ross, James

    2009-02-01

    A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics comparing the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiment framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied are able to outperform with statistical significance the statistical segmentation algorithm although they perform reasonably well considering their simplicity.
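    One similarity metric commonly used for such automatic-versus-manual comparisons is the Dice coefficient; a minimal sketch on toy binary masks follows (the masks are assumptions).

    ```python
    # Dice similarity coefficient between two binary segmentation masks.
    import numpy as np

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

    auto = np.zeros((64, 64), dtype=bool); auto[10:40, 10:40] = True
    manual = np.zeros((64, 64), dtype=bool); manual[12:42, 12:42] = True
    print(f"Dice = {dice(auto, manual):.3f}")
    ```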

  12. Thermal image analysis using the serpentine method

    NASA Astrophysics Data System (ADS)

    Koprowski, Robert; Wilczyński, Sławomir

    2018-03-01

    Thermal imaging is an increasingly widespread alternative to other imaging methods. As a supplementary method in diagnostics, it can be used both statically and with dynamic temperature changes. The paper proposes a new image analysis method that allows for the acquisition of new diagnostic information as well as object segmentation. The proposed serpentine analysis uses known and new methods of image analysis and processing proposed by the authors. Affine transformations of an image and subsequent Fourier analysis provide a new diagnostic quality. The method is fully repeatable, automatic, and independent of inter-individual variability in patients. The segmentation results are 10% better than those obtained from the watershed method and the hybrid segmentation method based on the Canny detector. The first and second harmonics of the serpentine analysis enable determination of the type of temperature changes in the region of interest (gradient, number of heat sources, etc.). The presented serpentine method provides new quantitative information on thermal imaging and more. Since it allows for image segmentation and designation of contact points of two or more heat sources (local minima), it can be used to support medical diagnostics in many areas of medicine.

  13. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms are playing an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence: the fish swarm algorithm, artificial bee colony, the bacteria foraging algorithm and particle swarm optimization. Then several image benchmarks are tested in order to show the differences in segmentation accuracy, time consumption, convergence and robustness to Salt & Pepper noise and Gaussian noise among these four algorithms. Through these comparisons, this paper gives qualitative analyses of the performance variance of the four algorithms. The conclusions in this paper should provide a useful guide for practical image segmentation.
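    As a loose illustration of swarm-intelligence thresholding, the sketch below uses a tiny particle swarm to maximize Otsu's between-class variance over a single threshold; the swarm size, coefficients, and synthetic histogram are assumptions, and this is not any of the four surveyed algorithms as published.

    ```python
    # Particle swarm optimization of a single threshold that maximizes Otsu's
    # between-class variance on synthetic pixel data.
    import numpy as np

    rng = np.random.default_rng(6)
    pixels = np.concatenate([rng.normal(70, 10, 4000), rng.normal(180, 15, 4000)])

    def fitness(t):
        fg, bg = pixels[pixels > t], pixels[pixels <= t]
        if fg.size == 0 or bg.size == 0:
            return 0.0
        return (fg.size / pixels.size) * (bg.size / pixels.size) * (fg.mean() - bg.mean()) ** 2

    pos = rng.uniform(pixels.min(), pixels.max(), 15)       # particle positions (thresholds)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    gbest = max(pos, key=fitness)
    for _ in range(50):
        r1, r2 = rng.random(15), rng.random(15)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        improved = np.array([fitness(p) for p in pos]) > np.array([fitness(p) for p in pbest])
        pbest[improved] = pos[improved]
        gbest = max([gbest, *pbest], key=fitness)
    print(f"PSO-selected threshold: {gbest:.1f}")
    ```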

  14. Aberration correction in wide-field fluorescence microscopy by segmented-pupil image interferometry.

    PubMed

    Scrimgeour, Jan; Curtis, Jennifer E

    2012-06-18

    We present a new technique for the correction of optical aberrations in wide-field fluorescence microscopy. Segmented-Pupil Image Interferometry (SPII) uses a liquid crystal spatial light modulator placed in the microscope's pupil plane to split the wavefront originating from a fluorescent object into an array of individual beams. Distortion of the wavefront arising from either system or sample aberrations results in displacement of the images formed from the individual pupil segments. Analysis of image registration allows for the local tilt in the wavefront at each segment to be corrected with respect to a central reference. A second correction step optimizes the image intensity by adjusting the relative phase of each pupil segment through image interferometry. This ensures that constructive interference between all segments is achieved at the image plane. Improvements in image quality are observed when Segmented-Pupil Image Interferometry is applied to correct aberrations arising from the microscope's optical path.

  15. A Real Time System for Multi-Sensor Image Analysis through Pyramidal Segmentation

    DTIC Science & Technology

    1992-01-30

    Rudin, L.; Osher, S.; Koepfler, G.; Morel, J.-M. A real-time system for multi-sensor image analysis through pyramidal segmentation. Experiments with reconnaissance photography, multi-sensor satellite imagery, and medical CT and MRI multi-band data have shown great practical potential.

  16. Constraint factor graph cut-based active contour method for automated cellular image segmentation in RNAi screening.

    PubMed

    Chen, C; Li, H; Zhou, X; Wong, S T C

    2008-05-01

    Image-based, high throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variations in image quality, which makes manual analysis unreasonably time-consuming. Therefore, effective techniques for automatic image analysis are urgently needed, in which segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei and cytoplasm segmentation. Nuclei are extracted and labelled to initialize cytoplasm segmentation. Since the quality of RNAi images is rather poor, a novel scale-adaptive steerable filter is designed to enhance the images in order to extract long and thin protrusions on the spiky cells. Then, the constraint factor GCBAC method and morphological algorithms are combined into an integrated method to segment tightly clustered cells. Compared with the results obtained by using seeded watershed and the ground truth, that is, manual labelling results by experts in RNAi screening data, our method achieves higher accuracy. Compared with active contour methods, our method consumes much less time. The positive results indicate that the proposed method can be applied in automatic image analysis of multi-channel image screening data.

  17. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-06

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.

  18. Segment and Fit Thresholding: A New Method for Image Analysis Applied to Microarray and Immunofluorescence Data

    PubMed Central

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.

    2016-01-01

    Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978

  19. A Marker-Based Approach for the Automated Selection of a Single Segmentation from a Hierarchical Set of Image Segmentations

    NASA Technical Reports Server (NTRS)

    Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.

    2012-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performances for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracies are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.

  20. AutoCellSeg: robust automatic colony forming unit (CFU)/cell analysis using adaptive image segmentation and easy-to-use post-editing techniques.

    PubMed

    Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert

    2018-05-08

    In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems occurring due to drifting image acquisition conditions, background noise and high variation in colony features in experiments demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm, taking segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for the supervised image segmentation method. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.

  1. Segmentation of radiologic images with self-organizing maps: the segmentation problem transformed into a classification task

    NASA Astrophysics Data System (ADS)

    Pelikan, Erich; Vogelsang, Frank; Tolxdorff, Thomas

    1996-04-01

    The texture-based segmentation of x-ray images of focal bone lesions using topological maps is introduced. Texture characteristics are described by image-point correlation of feature images to feature vectors. For the segmentation, the topological map is labeled using an improved labeling strategy. Results of the technique are demonstrated on original and synthetic x-ray images and quantified with the aid of quality measures. In addition, a classifier-specific contribution analysis is applied for assessing the feature space.

  2. Microscopy image segmentation tool: Robust image data analysis

    NASA Astrophysics Data System (ADS)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  3. A Review on Segmentation of Positron Emission Tomography Images

    PubMed Central

    Foster, Brent; Bagci, Ulas; Mansoor, Awais; Xu, Ziyue; Mollura, Daniel J.

    2014-01-01

    Positron Emission Tomography (PET), a non-invasive functional imaging method at the molecular level, images the distribution of biologically targeted radiotracers with high sensitivity. PET imaging provides detailed quantitative information about many diseases and is often used to evaluate inflammation, infection, and cancer by detecting emitted photons from a radiotracer localized to abnormal cells. In order to differentiate abnormal tissue from surrounding areas in PET images, image segmentation methods play a vital role; therefore, accurate image segmentation is often necessary for proper disease detection, diagnosis, treatment planning, and follow-ups. In this review paper, we present state-of-the-art PET image segmentation methods, as well as the recent advances in image segmentation techniques. In order to make this manuscript self-contained, we also briefly explain the fundamentals of PET imaging, the challenges of diagnostic PET image analysis, and the effects of these challenges on the segmentation results. PMID:24845019

  4. Elimination of RF inhomogeneity effects in segmentation.

    PubMed

    Agus, Onur; Ozkan, Mehmed; Aydin, Kubilay

    2007-01-01

    There are various methods proposed for the segmentation and analysis of MR images. However, the efficiency of these techniques is affected by various artifacts that occur in the imaging system. One of the most frequently encountered problems is the intensity variation across an image. To overcome this problem different methods are used. In this paper we propose a method for the elimination of intensity artifacts in the segmentation of MR images. Inter-imager variations are also minimized to produce the same tissue segmentation for the same patient. A well-known multivariate classification algorithm, maximum likelihood, is employed to illustrate the enhancement in segmentation.

  5. Hand-Based Biometric Analysis

    NASA Technical Reports Server (NTRS)

    Bebis, George (Inventor); Amayeh, Gholamreza (Inventor)

    2015-01-01

    Hand-based biometric analysis systems and techniques are described which provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis utilizes re-use of commonly-seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.

  6. Hand-Based Biometric Analysis

    NASA Technical Reports Server (NTRS)

    Bebis, George

    2013-01-01

    Hand-based biometric analysis systems and techniques provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis uses re-use of commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
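
    As both hand-biometric records above rely on Zernike moment descriptors, a minimal sketch of computing and matching such descriptors may help. It uses the mahotas library with arbitrary degree, radius and threshold choices, and is not the patented system's actual computation.

```python
# Illustrative sketch: Zernike-moment descriptors for a binary hand-part mask,
# compared to an enrolled template by Euclidean distance. Parameter choices
# (degree, radius, threshold) are arbitrary here, not those of the described system.
import numpy as np
import mahotas

def zernike_descriptor(mask, degree=8):
    """Compute rotation-invariant Zernike magnitudes for one binary segment."""
    ys, xs = np.nonzero(mask)
    radius = max(ys.ptp(), xs.ptp()) / 2.0 or 1.0   # enclosing radius of the segment
    return mahotas.features.zernike_moments(mask.astype(np.uint8), radius, degree=degree)

def match(probe_segments, template_descriptors, threshold=0.5):
    """Fuse per-segment descriptors and accept if the distance is small enough."""
    probe = np.concatenate([zernike_descriptor(m) for m in probe_segments])
    return np.linalg.norm(probe - template_descriptors) < threshold
```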

  7. Threshold-based segmentation of fluorescent and chromogenic images of microglia, astrocytes and oligodendrocytes in FIJI.

    PubMed

    Healy, Sinead; McMahon, Jill; Owens, Peter; Dockery, Peter; FitzGerald, Una

    2018-02-01

    Image segmentation is often imperfect, particularly in complex image sets such as z-stack micrographs of slice cultures, and there is a need for sufficient details of the parameters used in quantitative image analysis to allow independent repeatability and appraisal. For the first time, we have critically evaluated, quantified and validated the performance of different segmentation methodologies using z-stack images of ex vivo glial cells. The BioVoxxel toolbox plugin, available in FIJI, was used to measure the relative quality, accuracy, specificity and sensitivity of 16 global and 9 local automatic thresholding algorithms. Automatic thresholding yields improved binary representation of glial cells compared with the conventional user-chosen single-threshold approach for confocal z-stacks acquired from ex vivo slice cultures. The performance of threshold algorithms varies considerably in quality, specificity, accuracy and sensitivity, with entropy-based thresholds scoring highest for fluorescent staining. We have used the BioVoxxel toolbox to correctly and consistently select the best automated threshold algorithm to segment z-projected images of ex vivo glial cells for downstream digital image analysis and to define segmentation quality. The automated OLIG2 cell count was validated using stereology. As image segmentation and feature extraction can quite critically affect the performance of successive steps in the image analysis workflow, it is becoming increasingly necessary to consider the quality of digital segmentation methodologies. Here, we have applied, validated and extended an existing performance-check methodology in the BioVoxxel toolbox to z-projected images of ex vivo glial cells. Copyright © 2017 Elsevier B.V. All rights reserved.
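
    The BioVoxxel toolbox is a FIJI plugin, so the following is only a loose Python analogue of the comparison it performs: several global auto-threshold algorithms are scored against a manually drawn reference mask with the Dice coefficient. File names and the algorithm subset are assumptions.

```python
# Not the BioVoxxel toolbox (a FIJI plugin): a small Python analogue that scores
# several global auto-threshold algorithms against a manually drawn mask using the
# Dice coefficient. File names are placeholders.
import numpy as np
from skimage import io, filters

img = io.imread("glia_zprojection.tif", as_gray=True)
manual = io.imread("manual_mask.tif") > 0          # reference segmentation

algorithms = {
    "otsu": filters.threshold_otsu,
    "li": filters.threshold_li,
    "yen": filters.threshold_yen,          # entropy-based, often strong on fluorescence
    "triangle": filters.threshold_triangle,
}

def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

for name, fn in algorithms.items():
    auto = img > fn(img)
    print(f"{name:>8s}: Dice = {dice(auto, manual):.3f}")
```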

  8. BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation

    NASA Astrophysics Data System (ADS)

    Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana

    2006-01-01

    Extracting features is the first and one of the most crucial steps in the recent image retrieval process. While the color features and the texture features of digital images can be extracted rather easily, the shape features and the layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactical basis. That is, what an unsupervised segmentation algorithm can segment is only regions, but not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, namely BlobContours, gradually updates it by recalculating every blob, based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing transparency of the algorithm by applying user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements which are discussed in detail.

  9. Segmentation of epidermal tissue with histopathological damage in images of haematoxylin and eosin stained human skin

    PubMed Central

    2014-01-01

    Background Digital image analysis has the potential to address issues surrounding traditional histological techniques including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest. A widely applied methodology is that of segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation, hence there is a need for a new robust method with the capability to handle the additional challenges materialising from histopathological damage. Methods A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm’s robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through the comparison of the proposed methodology against manual methods. Results Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage. Conclusions Epidermal segmentation is a crucial first step in a range of applications including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues. PMID:24521154
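
    A very rough sketch of the shape of such a pipeline (omitting the colour normalisation and the object-classification rules of the paper) is given below; the channel weights and morphological parameters are illustrative assumptions, not the published values.

```python
# Rough sketch of the pipeline's shape only (colour normalisation and the object
# classification rules are omitted): combine L*a*b* chromaticity with intensity,
# threshold, and clean up morphologically. Weights and sizes are illustrative.
import numpy as np
from skimage import io, color, filters, morphology

rgb = io.imread("he_skin_biopsy.png")              # hypothetical H&E image
lab = color.rgb2lab(rgb)

# Assumption for illustration: eosin-stained epidermis separates reasonably on the
# a* (green-red) channel; blend it with lightness to form one optimised grey image.
a_star = (lab[..., 1] - lab[..., 1].min()) / np.ptp(lab[..., 1])
lightness = lab[..., 0] / 100.0
optimised = 0.7 * a_star + 0.3 * (1.0 - lightness)

mask = optimised > filters.threshold_otsu(optimised)
mask = morphology.remove_small_objects(mask, min_size=500)
mask = morphology.binary_closing(mask, morphology.disk(5))
```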

  10. Axial segmentation of lungs CT scan images using canny method and morphological operation

    NASA Astrophysics Data System (ADS)

    Noviana, Rina; Febriani, Rasal, Isram; Lubis, Eva Utari Cintamurni

    2017-08-01

    Segmentation is an important topic in digital image processing and appears in many fields of image analysis, particularly medical imaging. Axial segmentation of lung CT scans is useful for diagnosing abnormalities and planning surgery, as it allows every section of the lungs to be examined; the segmentation results can then be used to detect the presence of nodules. The methods used in this work are image cropping, image binarization, Canny edge detection and morphological operations. Image cropping isolates the lung area, which is the region of interest (ROI). Binarization converts the ROI into a binary image with two grey levels, black and white, separating it from the rest of the CT slice. The Canny method is used for edge detection, and morphological operations smooth the lung edges. The segmentation method shows good results, producing very smooth edges; moreover, the image background is removed so that the lungs remain the main focus.
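
    For readers unfamiliar with the sequence described above, a minimal scikit-image sketch of crop, binarisation, Canny edge detection and morphological smoothing follows; the crop window, thresholds and structuring-element size are placeholders rather than the authors' settings.

```python
# Minimal sketch of the described sequence (crop -> binarise -> Canny -> morphology)
# with scikit-image; thresholds and structuring-element sizes are illustrative.
import numpy as np
from skimage import io, filters, feature, morphology

slice_img = io.imread("lung_ct_axial.png", as_gray=True)
roi = slice_img[50:450, 30:480]                    # hypothetical crop around the lungs

binary = roi < filters.threshold_otsu(roi)         # air-filled lungs are dark on CT
binary = morphology.remove_small_objects(binary, min_size=1000)

edges = feature.canny(roi, sigma=2.0)              # lung boundary edges
smooth = morphology.binary_closing(binary, morphology.disk(7))  # smooth the lung edge
lungs_only = np.where(smooth, roi, 0.0)            # suppress the background
```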

  11. a New Improved Threshold Segmentation Method for Scanning Images of Reservoir Rocks Considering Pore Fractal Characteristics

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Li, Xizhe; Yang, Zhengming; Lin, Lijun; Xiong, Shengchun; Wang, Zhiyuan; Wang, Xiangyang; Xiao, Qianhua

    Based on the basic principle of the porosity method in image segmentation, and considering the relationship between the porosity of the rocks and the fractal characteristics of the pore structures, a new improved image segmentation method was proposed, which uses the calculated porosity of the core images as a constraint to obtain the best threshold. The results of comparative analysis show that the porosity method can, in theory, segment images best, but its actual segmentation effect deviates from the real situation. Due to the heterogeneity and isolated pores of cores, the porosity method that takes the experimental porosity of the whole core as the criterion cannot achieve the desired segmentation effect. On the contrary, the new improved method overcomes the shortcomings of the porosity method and makes a more reasonable binary segmentation of the core grayscale images, segmenting each image based on its own calculated porosity. Moreover, an image segmentation method based on the calculated porosity rather than the measured porosity also greatly saves manpower and material resources, especially for tight rocks.
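
    The core idea, choosing for each core image the grey-level threshold whose segmented pore fraction matches that image's calculated porosity, can be sketched in a few lines of Python; the dark-phase-equals-pore assumption and the 256-level search are illustrative simplifications.

```python
# Hedged sketch of the core idea: pick, for each core image, the grey-level threshold
# whose segmented pore fraction best matches that image's calculated porosity
# (passed in here as `target_porosity`). Pores are assumed to be the dark phase.
import numpy as np

def porosity_constrained_threshold(gray, target_porosity):
    """Return the threshold whose pore fraction is closest to the target porosity."""
    best_t, best_err = None, np.inf
    for t in np.linspace(gray.min(), gray.max(), 256):
        pore_fraction = np.mean(gray < t)          # fraction of pixels labelled as pore
        err = abs(pore_fraction - target_porosity)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# usage: mask = gray < porosity_constrained_threshold(gray, target_porosity=0.12)
```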

  12. A New Method for Automated Identification and Morphometry of Myelinated Fibers Through Light Microscopy Image Analysis.

    PubMed

    Novas, Romulo Bourget; Fazan, Valeria Paula Sassoli; Felipe, Joaquim Cezar

    2016-02-01

    Nerve morphometry is known to produce relevant information for the evaluation of several phenomena, such as nerve repair, regeneration, implant, transplant, aging, and different human neuropathies. Manual morphometry is laborious, tedious, time consuming, and subject to many sources of error. Therefore, in this paper, we propose a new method for the automated morphometry of myelinated fibers in cross-section light microscopy images. Images from the recurrent laryngeal nerve of adult rats and the vestibulocochlear nerve of adult guinea pigs were used herein. The proposed pipeline for fiber segmentation is based on the techniques of competitive clustering and concavity analysis. The evaluation of the proposed method for segmentation of images was done by comparing the automatic segmentation with the manual segmentation. To further evaluate the proposed method considering morphometric features extracted from the segmented images, the distributions of these features were tested for statistically significant differences. The method achieved a high overall sensitivity and very low false-positive rates per image. We detected no statistical difference between the distributions of the features extracted from the manual and the pipeline segmentations. The method presented a good overall performance, showing widespread potential in experimental and clinical settings, allowing large-scale image analysis and, thus, leading to more reliable results.

  13. Segmentation of medical images using explicit anatomical knowledge

    NASA Astrophysics Data System (ADS)

    Wilson, Laurie S.; Brown, Stephen; Brown, Matthew S.; Young, Jeanne; Li, Rongxin; Luo, Suhuai; Brandt, Lee

    1999-07-01

    Knowledge-based image segmentation is defined in terms of the separation of image analysis procedures and representation of knowledge. Such architecture is particularly suitable for medical image segmentation, because of the large amount of structured domain knowledge. A general methodology for the application of knowledge-based methods to medical image segmentation is described. This includes frames for knowledge representation, fuzzy logic for anatomical variations, and a strategy for determining the order of segmentation from the modal specification. This method has been applied to three separate problems, 3D thoracic CT, chest X-rays and CT angiography. The application of the same methodology to such a range of applications suggests a major role in medical imaging for segmentation methods incorporating representation of anatomical knowledge.

  14. Gap-free segmentation of vascular networks with automatic image processing pipeline.

    PubMed

    Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas

    2017-03-01

    Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and discontinuities of intensity that hinder segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity without gaps, loops or dangling segments. Proper tree connectivity is also important for high-quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for vessel enhancement of tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets. The automatic analysis would enable rigorous statistical comparison of biometrics in subject-specific vascular trees. The robust and accurate image segmentation using a validated filter pipeline would also eliminate the operator dependency that has been observed in manual segmentation. Moreover, manual segmentation is time prohibitive given that vascular trees have thousands of segments and bifurcations, so that interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. A patient-specific segmentation framework for longitudinal MR images of traumatic brain injury

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Chambers, Micah C.; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido

    2012-02-01

    Traumatic brain injury (TBI) is a major cause of death and disability worldwide. Robust, reproducible segmentations of MR images with TBI are crucial for quantitative analysis of recovery and treatment efficacy. However, this is a significant challenge due to severe anatomy changes caused by edema (swelling), bleeding, tissue deformation, skull fracture, and other effects related to head injury. In this paper, we introduce a multi-modal image segmentation framework for longitudinal TBI images. The framework is initialized through manual input of primary lesion sites at each time point, which are then refined by a joint approach composed of Bayesian segmentation and construction of a personalized atlas. The personalized atlas construction estimates the average of the posteriors of the Bayesian segmentation at each time point and warps the average back to each time point to provide the updated priors for Bayesian segmentation. The difference between our approach and segmenting longitudinal images independently is that we use the information from all time points to improve the segmentations. Given a manual initialization, our framework automatically segments healthy structures (white matter, grey matter, cerebrospinal fluid) as well as different lesions such as hemorrhagic lesions and edema. Our framework can handle different sets of modalities at each time point, which provides flexibility in analyzing clinical scans. We show results on three subjects with acute baseline scans and chronic follow-up scans. The results demonstrate that joint analysis of all the time points yields improved segmentation compared to independent analysis of the two time points.

  16. Multivariate statistical model for 3D image segmentation with application to medical images.

    PubMed

    John, Nigel M; Kabuka, Mansur R; Ibrahim, Mohamed O

    2003-12-01

    In this article we describe a statistical model that was developed to segment brain magnetic resonance images. The statistical segmentation algorithm was applied after a pre-processing stage involving the use of a 3D anisotropic filter along with histogram equalization techniques. The segmentation algorithm makes use of prior knowledge and a probability-based multivariate model designed to semi-automate the process of segmentation. The algorithm was applied to images obtained from the Center for Morphometric Analysis at Massachusetts General Hospital as part of the Internet Brain Segmentation Repository (IBSR). The developed algorithm showed improved accuracy over the k-means, adaptive Maximum Apriori Probability (MAP), biased MAP, and other algorithms. Experimental results showing the segmentation and the results of comparisons with other algorithms are provided. Results are based on an overlap criterion against expertly segmented images from the IBSR. The algorithm produced average results of approximately 80% overlap with the expertly segmented images (compared with 85% for manual segmentation and 55% for other algorithms).

  17. Multi-scale image segmentation method with visual saliency constraints and its application

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Yu, Jie; Sun, Kaimin

    2018-03-01

    Object-based image analysis methods have many advantages over pixel-based methods, so they are one of the current research hotspots. It is very important to obtain image objects by multi-scale image segmentation in order to carry out object-based image analysis. The current popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro statistical characteristics of the image areas are difficult to take into account, and fragmented segmentation (or over-segmentation) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important, i.e., some specific targets or target groups with particular features are worth more attention than the others. To avoid the problem of over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, due to the constraint of the visual saliency model, the constraining ability over local-macroscopic characteristics can be well controlled during the segmentation process for different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and enables priority control of the saliency objects of interest. This method has been used in image quality evaluation, scattered residential area extraction, sparse forest extraction and other applications to verify its validity. All applications showed good results.

  18. Segmentation-based retrospective shading correction in fluorescence microscopy E. coli images for quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.

    2009-10-01

    Due to the inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on the segmentation result. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes: the background and the cells, where the intensity variation within each class is close to zero if there is no shading. Therefore, we make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm to minimize the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection, but also for numerical evaluation. Our proposed method should be useful for further quantitative analysis, especially for protein expression value comparison.
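
    A hedged sketch of the coupled segmentation/shading-correction loop is given below: Otsu segmentation alternates with estimation of a smooth multiplicative shading field, which is then divided out. The paper's denoising step and fast minimisation algorithm are not reproduced; the Gaussian smoothing scale is an arbitrary stand-in.

```python
# Illustrative sketch of the coupled idea only: alternate Otsu segmentation with
# estimation of a smooth multiplicative shading field, then divide it out. The
# published method's denoising step and fast estimator are not reproduced.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import filters

def correct_shading(img, iterations=5, sigma=50):
    corrected = img.astype(float)
    for _ in range(iterations):
        mask = corrected > filters.threshold_otsu(corrected)      # cells vs background
        # Piecewise-constant "ideal" image: each pixel replaced by its class mean.
        flat = np.where(mask, corrected[mask].mean(), corrected[~mask].mean())
        # Smooth ratio image approximates the multiplicative shading field.
        bias = gaussian_filter(corrected / (flat + 1e-9), sigma)
        corrected = corrected / (bias + 1e-9)
    return corrected
```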

  19. WE-G-207-05: Relationship Between CT Image Quality, Segmentation Performance, and Quantitative Image Feature Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, J; Nishikawa, R; Reiser, I

    Purpose: Segmentation quality can affect quantitative image feature analysis. The objective of this study is to examine the relationship between computed tomography (CT) image quality, segmentation performance, and quantitative image feature analysis. Methods: A total of 90 pathology-proven breast lesions in 87 dedicated breast CT images were considered. An iterative image reconstruction (IIR) algorithm was used to obtain CT images with different quality. With different combinations of 4 variables in the algorithm, this study obtained a total of 28 different qualities of CT images. Two imaging tasks/objectives were considered: 1) segmentation and 2) classification of the lesion as benign or malignant. Twenty-three image features were extracted after segmentation using a semi-automated algorithm and 5 of them were selected via a feature selection technique. Logistic regression was trained and tested using leave-one-out cross-validation and its area under the ROC curve (AUC) was recorded. The standard deviation of a homogeneous portion and the gradient of a parenchymal portion of an example breast were used as estimates of image noise and sharpness. The DICE coefficient was computed using a radiologist’s drawing on the lesion. Mean DICE and AUC were used as performance metrics for each of the 28 reconstructions. The relationships between segmentation and classification performance under different reconstructions were compared. Distributions (median, 95% confidence interval) of DICE and AUC for each reconstruction were also compared. Results: Moderate correlation (Pearson’s rho = 0.43, p-value = 0.02) between DICE and AUC values was found. However, the variation between DICE and AUC values for each reconstruction increased as the image sharpness increased. There was a combination of IIR parameters that resulted in the best segmentation with the worst classification performance. Conclusion: There are certain images that yield better segmentation or classification performance. The best segmentation result does not necessarily lead to the best classification result. This work has been supported in part by grants from the NIH R21-EB015053. R. Nishikawa receives royalties from Hologic, Inc.

  20. Knowledge-based low-level image analysis for computer vision systems

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.

    1988-01-01

    Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.

  1. Segmentation of radiographic images under topological constraints: application to the femur.

    PubMed

    Gamage, Pavan; Xie, Sheng Quan; Delmas, Patrice; Xu, Wei Liang

    2010-09-01

    A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manual segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions.

  2. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.

  3. Algorithm for Automatic Segmentation of Nuclear Boundaries in Cancer Cells in Three-Channel Luminescent Images

    NASA Astrophysics Data System (ADS)

    Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.

    2015-09-01

    We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.

  4. White blood cell counting analysis of blood smear images using various segmentation strategies

    NASA Astrophysics Data System (ADS)

    Safuan, Syadia Nabilah Mohd; Tomari, Razali; Zakaria, Wan Nurshazwani Wan; Othman, Nurmiza

    2017-09-01

    In white blood cell (WBC) diagnosis, the most crucial measurement parameter is the WBC count. Such information is widely used to evaluate the effectiveness of cancer therapy and to diagnose several hidden infections within the human body. The current practice of manual WBC counting is laborious and a very subjective assessment, which has led to the invention of computer-aided systems (CAS) with rigorous image processing solutions. In CAS counting work, segmentation is the crucial step to ensure the accuracy of the counted cells. An optimal segmentation strategy that can work under various blood smear image acquisition conditions remains a great challenge. In this paper, a comparison between different segmentation methods based on color space analysis is elaborated to find the best counting outcome. Initially, color space correction is applied to the original blood smear image to standardize the image color intensity level. Next, white blood cell segmentation is performed by using a combination of several color-space subtractions (RGB, CMYK and HSV) and Otsu thresholding. Noise and unwanted regions that remain after the segmentation process are eliminated by applying a combination of morphological and Connected Component Labelling (CCL) filters. Eventually, the Circle Hough Transform (CHT) method is applied to the segmented image to estimate the number of WBCs, including those under clumped regions. From the experiment, it is found that G-S yields the best performance.
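
    A simplified sketch of one color-analysis route follows (saturation channel of HSV, Otsu thresholding, morphological clean-up and connected-component counting); the CMYK subtraction and the Circle Hough Transform step of the paper are omitted, and all parameters are placeholders.

```python
# Simplified sketch of one color-analysis route (S channel of HSV + Otsu +
# morphological clean-up + connected-component counting); the paper's CMYK
# subtraction and Circle Hough Transform step are not reproduced here.
import numpy as np
from skimage import io, color, filters, morphology, measure

rgb = io.imread("blood_smear.png")                 # hypothetical smear image
saturation = color.rgb2hsv(rgb)[..., 1]            # stained WBC nuclei are strongly saturated

mask = saturation > filters.threshold_otsu(saturation)
mask = morphology.remove_small_objects(mask, min_size=200)
mask = morphology.binary_opening(mask, morphology.disk(3))

labels = measure.label(mask)                       # connected-component labelling
print("estimated WBC count:", labels.max())
```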

  5. Rapid Phenotyping of Root Systems of Brachypodium Plants Using X-ray Computed Tomography: a Comparative Study of Soil Types and Segmentation Tools

    NASA Astrophysics Data System (ADS)

    Varga, T.; McKinney, A. L.; Bingham, E.; Handakumbura, P. P.; Jansson, C.

    2017-12-01

    Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere, as well as in processes with important implications to farming and thus human food supply. X-ray computed tomography (XCT) has been proven to be an effective tool for non-invasive root imaging and analysis. Selected Brachypodium distachyon phenotypes were grown in both natural and artificial soil mixes. The specimens were imaged by XCT, and the root architectures were extracted from the data using three different software-based methods; RooTrak, ImageJ-based WEKA segmentation, and the segmentation feature in VG Studio MAX. The 3D root image was successfully segmented at 30 µm resolution by all three methods. In this presentation, ease of segmentation and the accuracy of the extracted quantitative information (root volume and surface area) will be compared between soil types and segmentation methods. The best route to easy and accurate segmentation and root analysis will be highlighted.

  6. A segmentation algorithm based on image projection for complex text layout

    NASA Astrophysics Data System (ADS)

    Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang

    2017-10-01

    Segmentation is an important part of layout analysis. Considering the efficiency advantage of the top-down approach and the particular characteristics of the target documents, a projection-based layout segmentation algorithm is proposed. The algorithm first partitions the text image into several columns, and then applies a scanning projection to each column, so that the text image is divided into several sub-regions through multiple projections. The experimental results show that the method retains the rapid calculation speed of projection itself, avoids the influence of arc-shaped image content on page segmentation, and can accurately segment text images with complex layouts.
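
    The projection idea itself is easy to illustrate: a vertical projection profile splits the binarised page into columns, and a horizontal profile then splits each column into blocks. The sketch below shows only this basic mechanism, not the paper's refinements.

```python
# Minimal sketch of projection-based layout segmentation: a vertical projection
# profile splits the binarised page into columns, and a horizontal profile then
# splits each column into blocks. Real pages would also need gap-size thresholds
# and noise handling, which are omitted here.
import numpy as np

def split_on_gaps(profile):
    """Return (start, end) index pairs of runs where the profile is non-zero."""
    runs, start = [], None
    for i, filled in enumerate(profile > 0):
        if filled and start is None:
            start = i
        elif not filled and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(profile)))
    return runs

def segment_page(binary_page):
    """binary_page: 2D numpy array with text pixels == 1."""
    blocks = []
    for x0, x1 in split_on_gaps(binary_page.sum(axis=0)):       # column split
        column = binary_page[:, x0:x1]
        for y0, y1 in split_on_gaps(column.sum(axis=1)):        # block split
            blocks.append((y0, x0, y1, x1))
    return blocks
```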

  7. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    PubMed Central

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144

  8. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    PubMed

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  9. An improved K-means clustering algorithm in agricultural image segmentation

    NASA Astrophysics Data System (ADS)

    Cheng, Huifeng; Peng, Hui; Liu, Shanmei

    Image segmentation is the first important step in image analysis and image processing. In this paper, according to the characteristics of color crop images, we first transform the color space of the image from RGB to HSI, and then select proper initial clustering centers and the number of clusters by applying a mean-variance approach and rough set theory, followed by the clustering calculation, in such a way as to automatically and rapidly segment the color components and accurately extract target objects from the background, which provides a reliable basis for the identification, analysis, follow-up calculation and processing of crop images. Experimental results demonstrate that the improved k-means clustering algorithm is able to reduce the amount of computation and enhance the precision and accuracy of clustering.
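
    A hedged sketch of the general approach (color-space transform followed by k-means clustering) is shown below; scikit-image exposes HSV rather than HSI, so HSV stands in here, and k-means++ initialisation replaces the paper's mean-variance/rough-set selection of initial centers.

```python
# Hedged sketch of the general approach (color-space transform + k-means);
# HSV is used as a stand-in for HSI, and k-means++ initialisation replaces the
# paper's mean-variance / rough-set procedure for choosing initial centers.
import numpy as np
from skimage import io, color
from sklearn.cluster import KMeans

rgb = io.imread("crop_field.png")                  # hypothetical crop image
hsv = color.rgb2hsv(rgb)

pixels = hsv.reshape(-1, 3)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(hsv.shape[:2])     # per-pixel cluster map

# Heuristic assumption for illustration: the cluster with the highest mean
# saturation is taken as the crop (target object) cluster.
target = np.argmax(kmeans.cluster_centers_[:, 1])
crop_mask = labels == target
```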

  10. Towards Automatic Image Segmentation Using Optimised Region Growing Technique

    NASA Astrophysics Data System (ADS)

    Alazab, Mamoun; Islam, Mofakharul; Venkatraman, Sitalakshmi

    Image analysis is being adopted extensively in many applications such as digital forensics, medical treatment, industrial inspection, etc., primarily for diagnostic purposes. Hence, there is a growing interest among researchers in developing new segmentation techniques to aid the diagnosis process. Manual segmentation of images is labour intensive, extremely time consuming and prone to human error, and hence an automated real-time technique is warranted in such applications. There is no universally applicable automated segmentation technique that will work for all images, as image segmentation is quite complex and unique to each application domain. Hence, to fill the gap, this paper presents an efficient segmentation algorithm that can segment a digital image of interest into a more meaningful arrangement of regions and objects. Our algorithm combines a region growing approach with optimised elimination of false boundaries to arrive at more meaningful segments automatically. We demonstrate this using X-ray teeth images that were taken for real-life dental diagnosis.
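
    As a toy illustration of the region growing step only (the optimised elimination of false boundaries is not shown), the following breadth-first seeded growing admits pixels whose intensity stays within a tolerance of the running region mean; the seed and tolerance are arbitrary.

```python
# Toy seeded region growing (breadth-first flood from a seed, admitting pixels
# within a tolerance of the running region mean). It illustrates the growing
# step only; the paper's elimination of false boundaries is not reproduced.
from collections import deque
import numpy as np

def region_grow(gray, seed, tol=10.0):
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    region_sum, region_n = float(gray[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(gray[ny, nx]) - region_sum / region_n) <= tol:
                    mask[ny, nx] = True
                    region_sum += float(gray[ny, nx])
                    region_n += 1
                    queue.append((ny, nx))
    return mask

# usage: tooth_mask = region_grow(xray_image, seed=(120, 200), tol=12)
```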

  11. Automatic lumen and outer wall segmentation of the carotid artery using deformable three-dimensional models in MR angiography and vessel wall images.

    PubMed

    van 't Klooster, Ronald; de Koning, Patrick J H; Dehnavi, Reza Alizadeh; Tamsma, Jouke T; de Roos, Albert; Reiber, Johan H C; van der Geest, Rob J

    2012-01-01

    To develop and validate an automated segmentation technique for the detection of the lumen and outer wall boundaries in MR vessel wall studies of the common carotid artery. A new segmentation method was developed using a three-dimensional (3D) deformable vessel model requiring only one single user interaction by combining 3D MR angiography (MRA) and 2D vessel wall images. This vessel model is a 3D cylindrical Non-Uniform Rational B-Spline (NURBS) surface which can be deformed to fit the underlying image data. Image data of 45 subjects was used to validate the method by comparing manual and automatic segmentations. Vessel wall thickness and volume measurements obtained by both methods were compared. Substantial agreement was observed between manual and automatic segmentation; over 85% of the vessel wall contours were segmented successfully. The interclass correlation was 0.690 for the vessel wall thickness and 0.793 for the vessel wall volume. Compared with manual image analysis, the automated method demonstrated improved interobserver agreement and inter-scan reproducibility. Additionally, the proposed automated image analysis approach was substantially faster. This new automated method can reduce analysis time and enhance reproducibility of the quantification of vessel wall dimensions in clinical studies. Copyright © 2011 Wiley Periodicals, Inc.

  12. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.

  13. Segmentation of deformable organs from medical images using particle swarm optimization and nonlinear shape priors

    NASA Astrophysics Data System (ADS)

    Afifi, Ahmed; Nakaguchi, Toshiya; Tsumura, Norimichi

    2010-03-01

    In many medical applications, the automatic segmentation of deformable organs from medical images is indispensable and its accuracy is of special interest. However, the automatic segmentation of these organs is a challenging task owing to their complex shapes. Moreover, medical images usually contain noise, clutter, or occlusion, and considering the image information only often leads to meager image segmentation. In this paper, we propose a fully automated technique for the segmentation of deformable organs from medical images. In this technique, the segmentation is performed by fitting a nonlinear shape model to pre-segmented images. Kernel principal component analysis (KPCA) is utilized to capture the complex organ deformations and to construct the nonlinear shape model. The pre-segmentation is carried out by labeling each pixel according to its high-level texture features extracted using the overcomplete wavelet packet decomposition. Furthermore, to guarantee an accurate fit between the nonlinear model and the pre-segmented images, the particle swarm optimization (PSO) algorithm is employed to adapt the model parameters to the novel images. In this paper, we demonstrate the competence of the proposed technique by applying it to liver segmentation from computed tomography (CT) scans of different patients.

  14. A fully automatic three-step liver segmentation method on LDA-based probability maps for multiple contrast MR images.

    PubMed

    Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf

    2010-07-01

    Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches for automatic 3D liver segmentation on computed tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further threshold technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied to normal and fat-accumulated liver tissue properties. Copyright 2010 Elsevier Inc. All rights reserved.
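
    The probability-map stage can be sketched as follows: multi-channel MR intensities are classified with multiclass LDA and the per-voxel liver probability is reshaped back into a volume for a later region-growing/threshold step. Channel names, label names and the training data are assumptions, not the paper's protocol.

```python
# Hedged sketch of the probability-map idea: multi-channel MR intensities are
# reduced with multiclass LDA and turned into per-class probabilities, which a
# later region-growing/threshold step would consume. Training labels are assumed
# to come from annotated voxels; channel and class names are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def liver_probability_map(channels, train_features, train_labels):
    """channels: list of co-registered 3D MR volumes of identical shape."""
    stack = np.stack([c.ravel() for c in channels], axis=1)   # voxels x channels
    lda = LinearDiscriminantAnalysis().fit(train_features, train_labels)
    proba = lda.predict_proba(stack)                          # voxels x classes
    liver_idx = list(lda.classes_).index("liver")             # assumed label name
    return proba[:, liver_idx].reshape(channels[0].shape)

# usage: p = liver_probability_map([t1, t2, pd], X_train, y_train), then seed a
# region-growing step where p exceeds a high threshold (e.g. p > 0.9).
```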

  15. Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn

    2011-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performance for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.

  16. A Stochastic-Variational Model for Soft Mumford-Shah Segmentation

    PubMed Central

    2006-01-01

    In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059

  17. Three-dimensional segmentation of luminal and adventitial borders in serial intravascular ultrasound images

    NASA Technical Reports Server (NTRS)

    Shekhar, R.; Cothren, R. M.; Vince, D. G.; Chandra, S.; Thomas, J. D.; Cornhill, J. F.

    1999-01-01

    Intravascular ultrasound (IVUS) provides exact anatomy of arteries, allowing accurate quantitative analysis. Automated segmentation of IVUS images is a prerequisite for routine quantitative analyses. We present a new three-dimensional (3D) segmentation technique, called active surface segmentation, which detects luminal and adventitial borders in IVUS pullback examinations of coronary arteries. The technique was validated against expert tracings by computing correlation coefficients (range 0.83-0.97) and William's index values (range 0.37-0.66). The technique was statistically accurate, robust to image artifacts, and capable of segmenting a large number of images rapidly. Active surface segmentation enabled geometrically accurate 3D reconstruction and visualization of coronary arteries and volumetric measurements.

  18. Automatic CT Brain Image Segmentation Using Two Level Multiresolution Mixture Model of EM

    NASA Astrophysics Data System (ADS)

    Jiji, G. Wiselin; Dehmeshki, Jamshid

    2014-04-01

    Tissue classification in computed tomography (CT) brain images is an important issue in the analysis of several brain dementias. A combination of different approaches for the segmentation of brain images is presented in this paper. A multi-resolution algorithm is proposed, along with scaled versions using a Gaussian filter and wavelet analysis, that extends the expectation maximization (EM) algorithm. It is found to be less sensitive to noise and to produce more accurate image segmentation than traditional EM. Moreover, the algorithm has been applied to 20 sets of CT images of the human brain and compared with other works. The segmentation results show the advantages of the proposed approach, which achieves more promising results, and the results have been validated by doctors.

  19. A translational registration system for LANDSAT image segments

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Erthal, G. J.; Velasco, F. R. D.; Mascarenhas, N. D. D.

    1983-01-01

    The use of satellite images obtained from various dates is essential for crop forecast systems. In order to make possible a multitemporal analysis, it is necessary that images belonging to each acquisition have pixel-wise correspondence. A system developed to obtain, register and record image segments from LANDSAT images in computer compatible tapes is described. The translational registration of the segments is performed by correlating image edges in different acquisitions. The system was constructed for the Burroughs B6800 computer in ALGOL language.

  20. FISH Finder: a high-throughput tool for analyzing FISH images

    PubMed Central

    Shirley, James W.; Ty, Sereyvathana; Takebayashi, Shin-ichiro; Liu, Xiuwen; Gilbert, David M.

    2011-01-01

    Motivation: Fluorescence in situ hybridization (FISH) is used to study the organization and the positioning of specific DNA sequences within the cell nucleus. Analyzing the data from FISH images is a tedious process that invokes an element of subjectivity. Automated FISH image analysis offers savings in time as well as gaining the benefit of objective data analysis. While several FISH image analysis software tools have been developed, they often use a threshold-based segmentation algorithm for nucleus segmentation. As fluorescence signal intensities can vary significantly from experiment to experiment, from cell to cell, and within a cell, threshold-based segmentation is inflexible and often insufficient for automatic image analysis, leading to additional manual segmentation and potential subjective bias. To overcome these problems, we developed a graphical software tool called FISH Finder to automatically analyze FISH images that vary significantly. By posing the nucleus segmentation as a classification problem, a compound Bayesian classifier is employed so that contextual information is utilized, resulting in reliable classification and boundary extraction. This makes it possible to analyze FISH images efficiently and objectively without adjustment of input parameters. Additionally, FISH Finder was designed to analyze the distances between differentially stained FISH probes. Availability: FISH Finder is a standalone MATLAB application and platform independent software. The program is freely available from: http://code.google.com/p/fishfinder/downloads/list Contact: gilbert@bio.fsu.edu PMID:21310746

  1. Automatic Segmenting Structures in MRI's Based on Texture Analysis and Fuzzy Logic

    NASA Astrophysics Data System (ADS)

    Kaur, Mandeep; Rattan, Munish; Singh, Pushpinder

    2017-12-01

    The purpose of this paper is to present a variational method for geometric contours that keeps the level set function close to a signed distance function, thereby removing the need for an expensive re-initialization procedure. The level set method is applied to magnetic resonance images (MRI) to track irregularities in them, since medical imaging plays a substantial role in the diagnosis, treatment and therapy of various organs, tumors and abnormalities, and benefits the patient through faster, more decisive disease management with fewer side effects. The geometrical shape, the size of a tumor and abnormal tissue growth can be calculated from the segmentation of the particular image. Automatic segmentation in medical imaging remains a great challenge for researchers. Based on texture analysis, different images are processed by optimizing the level set segmentation. Traditionally, this optimization was performed manually for every image, with each parameter selected one after another. By applying fuzzy logic, the segmentation is driven by texture features, making it automatic and more effective: no manual initialization of parameters is required and the system behaves intelligently, segmenting different MR images without tuning the level set parameters and giving optimized results for all of them.

  2. Compound image segmentation of published biomedical figures.

    PubMed

    Li, Pengyuan; Jiang, Xiangying; Kambhamettu, Chandra; Shatkay, Hagit

    2018-04-01

    Images convey essential information in biomedical publications. As such, there is a growing interest within the bio-curation and the bio-databases communities, to store images within publications as evidence for biomedical processes and for experimental results. However, many of the images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into constituent panels is an essential first step toward utilizing images. In this article, we develop a new compound image segmentation system, FigSplit, which is based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. The system is publicly available for use at: https://www.eecis.udel.edu/~compbio/FigSplit. The code is available upon request. shatkay@udel.edu. Supplementary data are available online at Bioinformatics.
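
    As a rough illustration of connected-component-based panel splitting (not the actual FigSplit pipeline, which adds a quality-assessment and re-segmentation step), the following Python sketch labels non-white regions and returns their bounding boxes; the white threshold and minimum-area values are arbitrary assumptions.

```python
import numpy as np
from scipy import ndimage

def split_panels(figure_rgb, white_thresh=240, min_area=5000):
    """Rough panel splitting for a compound figure: treat near-white pixels as
    background gutters, label the remaining connected components, and return a
    (row_slice, col_slice) bounding box for each sufficiently large component."""
    gray = figure_rgb.mean(axis=2)
    foreground = gray < white_thresh          # panels are darker than white gutters
    labels, _ = ndimage.label(foreground)
    boxes = []
    for sl in ndimage.find_objects(labels):
        if sl is None:
            continue
        height = sl[0].stop - sl[0].start
        width = sl[1].stop - sl[1].start
        if height * width >= min_area:
            boxes.append(sl)
    return boxes
```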

  3. Automatic Segmentation of High-Throughput RNAi Fluorescent Cellular Images

    PubMed Central

    Yan, Pingkum; Zhou, Xiaobo; Shah, Mubarak; Wong, Stephen T. C.

    2010-01-01

    High-throughput genome-wide RNA interference (RNAi) screening is emerging as an essential tool to assist biologists in understanding complex cellular processes. The large number of images produced in each study makes manual analysis intractable; hence, automatic cellular image analysis becomes an urgent need, where segmentation is the first and one of the most important steps. In this paper, a fully automatic method for segmentation of cells from genome-wide RNAi screening images is proposed. Nuclei are first extracted from the DNA channel by using a modified watershed algorithm. Cells are then extracted by modeling the interaction between them as well as combining both gradient and region information in the Actin and Rac channels. A new energy functional is formulated based on a novel interaction model for segmenting tightly clustered cells with significant intensity variance and specific phenotypes. The energy functional is minimized by using a multiphase level set method, which leads to a highly effective cell segmentation method. Promising experimental results demonstrate that automatic segmentation of high-throughput genome-wide multichannel screening can be achieved by using the proposed method, which may also be extended to other multichannel image segmentation problems. PMID:18270043
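
    The nucleus-extraction step can be approximated with a standard marker-controlled watershed, sketched below with scikit-image and SciPy; the authors' modified watershed and the multiphase level set for the cell bodies are not reproduced here, and the smoothing and peak-distance parameters are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu, gaussian
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_nuclei(dna_channel):
    """Marker-controlled watershed on the DNA channel: Otsu foreground mask,
    distance transform, one marker per local distance maximum."""
    smoothed = gaussian(dna_channel.astype(float), sigma=2)
    mask = smoothed > threshold_otsu(smoothed)
    distance = ndi.distance_transform_edt(mask)
    coords = peak_local_max(distance, min_distance=10, labels=mask.astype(int))
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=mask)
```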

  4. Boundary segmentation for fluorescence microscopy using steerable filters

    NASA Astrophysics Data System (ADS)

    Ho, David Joon; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2017-02-01

    Fluorescence microscopy is used to image multiple subcellular structures in living cells which are not readily observed using conventional optical microscopy. Moreover, two-photon microscopy is widely used to image structures deeper in tissue. Recent advances in fluorescence microscopy have enabled the generation of large data sets of images at different depths, times, and spectral channels. Thus, automatic object segmentation is necessary since manual segmentation would be inefficient and biased. However, automatic segmentation is still a challenging problem, as regions of interest may lack well-defined boundaries and may have non-uniform pixel intensities. This paper describes a method for segmenting tubular structures in fluorescence microscopy images of rat kidney and liver samples using adaptive histogram equalization, foreground/background segmentation, steerable filters to capture directional tendencies, and connected-component analysis. The results from several data sets demonstrate that our method can segment tubular boundaries successfully. Moreover, our method has better performance when compared to other popular image segmentation methods when using ground truth data obtained via manual segmentation.

  5. An automated, high-throughput plant phenotyping system using machine learning-based plant segmentation and image analysis.

    PubMed

    Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan

    2018-01-01

    A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing is very important to accurately determine the characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-imaging-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth via large-scale plant image data easily.
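
    A compact sketch of the superpixel-plus-Random-Forest idea using SLIC superpixels and mean-colour features; the training image, pixel-level ground truth, and new image passed to these functions are hypothetical inputs, and the actual pipeline presumably uses richer features than plain mean colour.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def superpixel_features(image_rgb, segments):
    """Mean RGB colour per superpixel (labels assumed contiguous from 0)."""
    return np.array([image_rgb[segments == label].mean(axis=0)
                     for label in np.unique(segments)])

def train_plant_classifier(train_image, train_gt, n_segments=800):
    """Fit a Random Forest on superpixel features; `train_gt` is a hypothetical
    pixel mask with 1 = plant and 0 = background."""
    segments = slic(train_image, n_segments=n_segments, compactness=10, start_label=0)
    X = superpixel_features(train_image, segments)
    y = np.array([train_gt[segments == label].mean() > 0.5
                  for label in np.unique(segments)])
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def segment_plant(clf, image, n_segments=800):
    """Classify each superpixel of a new image and paint labels back per pixel."""
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    pred = clf.predict(superpixel_features(image, segments))
    return pred[segments]   # per-pixel boolean plant mask
```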

  6. Open-source software platform for medical image segmentation applications

    NASA Astrophysics Data System (ADS)

    Namías, R.; D'Amato, J. P.; del Fresno, M.

    2017-11-01

    Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling simultaneously different segmentation strategies and interacting with a graphical user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework to segment different real-case medical image scenarios on publicly available datasets, including bladder and prostate segmentation from 2D MRI, and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast prototyping open-source segmentation tool.

  7. An ICA-based method for the segmentation of pigmented skin lesions in macroscopic images.

    PubMed

    Cavalcanti, Pablo G; Scharcanski, Jacob; Di Persia, Leandro E; Milone, Diego H

    2011-01-01

    Segmentation is an important step in computer-aided diagnostic systems for pigmented skin lesions, since a good definition of the lesion area and its boundary in the image is very important for distinguishing benign from malignant cases. In this paper, a new skin lesion segmentation method is proposed. This method uses Independent Component Analysis to locate skin lesions in the image, and this location information is further refined by a Level-set segmentation method. Our method was evaluated on 141 images and achieved an average segmentation error of 16.55%, lower than the results for comparable state-of-the-art methods proposed in the literature.
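
    The ICA-based localization step might look roughly like the following scikit-learn sketch, which unmixes the RGB channels and keeps the most non-Gaussian component as a coarse lesion map; the subsequent level-set refinement from the paper is omitted, and the kurtosis heuristic for picking the component is an assumption.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def ica_lesion_map(image_rgb):
    """Run FastICA on the RGB pixel values and return the independent component
    with the highest kurtosis (most 'spiky', typically the one isolating the
    lesion against surrounding skin) as a coarse lesion localization map."""
    h, w, _ = image_rgb.shape
    X = image_rgb.reshape(-1, 3).astype(float)
    ica = FastICA(n_components=3, random_state=0)
    S = ica.fit_transform(X)                      # (h*w, 3) independent components
    idx = int(np.argmax([kurtosis(S[:, i]) for i in range(3)]))
    return S[:, idx].reshape(h, w)
```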

  8. A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images

    PubMed Central

    Luo, Yaozhong; Liu, Longzhong; Li, Xuelong

    2017-01-01

    Ultrasound imaging has become one of the most popular medical imaging modalities with numerous diagnostic applications. However, ultrasound (US) image segmentation, which is the essential process for further analysis, is a challenging task due to the poor image quality. In this paper, we propose a new segmentation scheme to combine both region- and edge-based information into the robust graph-based (RGB) segmentation method. The only interaction required is to select two diagonal points to determine a region of interest (ROI) on the original image. The ROI image is smoothed by a bilateral filter and then contrast-enhanced by histogram equalization. Then, the enhanced image is filtered by pyramid mean shift to improve homogeneity. With its parameters optimized by a particle swarm optimization (PSO) algorithm, the RGB segmentation method is then applied to segment the filtered image. The segmentation results of our method have been compared with the corresponding results obtained by three existing approaches, and four metrics have been used to measure the segmentation performance. The experimental results show that the method achieves the best overall performance and obtains the lowest ARE (10.77%), the second highest TPVF (85.34%), and the second lowest FPVF (4.48%). PMID:28536703

  9. Colour image segmentation using unsupervised clustering technique for acute leukemia images

    NASA Astrophysics Data System (ADS)

    Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.

    2015-05-01

    Colour image segmentation has become more popular in computer vision due to its importance in most medical analysis tasks. This paper proposes a comparison between the different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models for segmenting acute leukemia images. First, partial contrast stretching is applied to the leukemia images to improve the visibility of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied to the various colour components of the RGB and HSI colour models in order to segment the blast cells from the red blood cells and background regions in the leukemia image. The different colour components of the RGB and HSI colour models have been analyzed in order to identify the colour component that gives the best segmentation performance. The segmented images are then processed using a median filter and a region growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model proves to be the best at segmenting the nuclei of the blast cells in acute leukemia images, as compared to the other colour components of the RGB and HSI colour models.
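
    Since "moving k-means" is not a standard library routine, the sketch below substitutes ordinary k-means on the saturation channel to convey the idea: cluster the channel, keep the most saturated cluster, then median-filter. The cluster count and filter size are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.color import rgb2hsv
from sklearn.cluster import KMeans

def segment_blasts_saturation(image_rgb, n_clusters=3):
    """Cluster the HSI/HSV saturation channel into a few intensity groups and
    return the cluster with the highest mean saturation (assumed to contain the
    blast nuclei), smoothed with a median filter."""
    saturation = rgb2hsv(image_rgb)[..., 1]
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0) \
        .fit(saturation.reshape(-1, 1))
    label_image = labels.labels_.reshape(saturation.shape)
    blast_cluster = int(np.argmax(labels.cluster_centers_.ravel()))
    mask = (label_image == blast_cluster).astype(np.uint8)
    return ndimage.median_filter(mask, size=5) > 0
```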

  10. A new method of cardiographic image segmentation based on grammar

    NASA Astrophysics Data System (ADS)

    Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed H.; Alimi, Adel M.

    2011-10-01

    The measurement of the most common ultrasound parameters, such as aortic area, mitral area and left ventricle (LV) volume, requires the delineation of the organ in order to estimate the area. In terms of medical image processing this translates into the need to segment the image and define the contours as accurately as possible. The aim of this work is to segment an image and make an automated area estimation based on grammar. The entity "language" will be projected to the entity "image" to perform structural analysis and parsing of the image. We will show how the idea of segmentation and grammar-based area estimation is applied to real problems of cardio-graphic image processing.

  11. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images.

    PubMed

    Zweerink, Alwin; Allaart, Cornelis P; Kuijer, Joost P A; Wu, LiNa; Beek, Aernout M; van de Ven, Peter M; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick; van Rossum, Albert C; Nijveldt, Robin

    2017-12-01

    Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. • Myocardial strain analysis could potentially improve patient selection for CRT. • Currently a well validated clinical approach to derive segmental strains is lacking. • The novel SLICE technique derives segmental strains from standard CMR cine images. • SLICE-derived strain markers of CRT response showed close agreement with CMR-TAG. • Future studies will focus on the prognostic value of SLICE in CRT candidates.
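
    The core SLICE measurement reduces to a Lagrangian strain curve computed from frame-wise segment lengths, as in this small sketch (the landmark tracking on the cine images itself is not shown; the lengths in the example are made up).

```python
import numpy as np

def slice_strain(segment_lengths, ed_frame=0):
    """Lagrangian strain curve from frame-wise segment lengths (SLICE-style):
    strain(t) = (L(t) - L_ED) / L_ED, with the end-diastolic frame as reference."""
    L = np.asarray(segment_lengths, dtype=float)
    return (L - L[ed_frame]) / L[ed_frame]

# Example: a segment shortening from 52 mm to about 41 mm reaches roughly -21% strain.
curve = slice_strain([52.0, 50.1, 46.3, 43.0, 41.2, 44.8, 49.5, 52.0])
peak_strain = curve.min()
```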

  12. Analysis and segmentation of images in case of solving problems of detecting and tracing objects on real-time video

    NASA Astrophysics Data System (ADS)

    Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton

    2016-04-01

    The article deals with image segmentation methods based on colour space conversion, which enable efficient detection of a single colour against a complex background and varying lighting, as well as detection of objects on a homogeneous background. The results of the analysis of segmentation algorithms of this type, and the possibility of implementing them in software, are presented. The implemented algorithm is computationally expensive, which limits its application to video analysis; however, it allows us to solve the problem of analysing objects in an image when no image dictionary or knowledge base is available, as well as the problem of choosing the optimal frame quantization parameters for video analysis.

  13. SIMA: Python software for analysis of dynamic fluorescence imaging data.

    PubMed

    Kaifosh, Patrick; Zaremba, Jeffrey D; Danielson, Nathan B; Losonczy, Attila

    2014-01-01

    Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.

  14. Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends

    PubMed Central

    Mansoor, Awais; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z.; Folio, Les R.; Udupa, Jayaram K.; Mollura, Daniel J.

    2015-01-01

    The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems may be highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy–guided, and (e) machine learning–based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. In an overview, practical applications and evolving technologies combining the presented approaches for the practicing radiologist are detailed. ©RSNA, 2015 PMID:26172351

  15. Application of an enhanced fuzzy algorithm for MR brain tumor image segmentation

    NASA Astrophysics Data System (ADS)

    Hemanth, D. Jude; Vijila, C. Kezi Selva; Anitha, J.

    2010-02-01

    Image segmentation is one of the significant digital image processing techniques commonly used in the medical field. One of the specific applications is tumor detection in abnormal Magnetic Resonance (MR) brain images. Fuzzy approaches are widely preferred for tumor segmentation and generally yield superior results in terms of accuracy. But most fuzzy algorithms suffer from the drawback of a slow convergence rate, which makes such systems impractical. In this work, the application of a modified Fuzzy C-means (FCM) algorithm to tackle the convergence problem is explored in the context of brain image segmentation. This modified FCM algorithm employs the concept of quantization to improve the convergence rate besides yielding excellent segmentation efficiency. The algorithm is tested on real abnormal MR brain images collected from radiologists. A comprehensive feature vector is extracted from these images and used for the segmentation technique. An extensive feature selection process is performed, which reduces the convergence time and improves the segmentation efficiency. After segmentation, the tumor portion is extracted from the segmented image. Comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior results for the modified FCM algorithm in terms of the performance measures. Thus, this work highlights the application of the modified algorithm for brain tumor detection in abnormal MR brain images.

  16. Multifractal geometry in analysis and processing of digital retinal photographs for early diagnosis of human diabetic macular edema.

    PubMed

    Tălu, Stefan

    2013-07-01

    The purpose of this paper is to determine a quantitative assessment of the human retinal vascular network architecture for patients with diabetic macular edema (DME). Multifractal geometry and lacunarity parameters are used in this study. A set of 10 segmented and skeletonized human retinal images, corresponding to both normal (five images) and DME states of the retina (five images), from the DRIVE database was analyzed using the Image J software. Statistical analyses were performed using Microsoft Office Excel 2003 and GraphPad InStat software. The human retinal vascular network architecture has a multifractal geometry. The average of generalized dimensions (Dq) for q = 0, 1, 2 of the normal images (segmented versions), is similar to the DME cases (segmented versions). The average of generalized dimensions (Dq) for q = 0, 1 of the normal images (skeletonized versions), is slightly greater than the DME cases (skeletonized versions). However, the average of D2 for the normal images (skeletonized versions) is similar to the DME images. The average of lacunarity parameter, Λ, for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for DME images (segmented and skeletonized versions). The multifractal and lacunarity analysis provides a non-invasive predictive complementary tool for an early diagnosis of patients with DME.
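
    For q ≠ 1, the generalized dimensions D_q reported above can be estimated by box counting, as in the following sketch; the study used ImageJ-based analysis, so this is only an illustration, the box sizes are arbitrary assumptions, and the q = 1 information dimension needs a separate limit not covered here.

```python
import numpy as np

def generalized_dimension(binary_image, q, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Box-counting estimate of the generalized (Renyi) dimension D_q of a
    binary (segmented or skeletonized) vessel image, for q != 1:
        D_q = 1/(q-1) * slope of log(sum_i p_i^q) versus log(box size),
    where p_i is the fraction of vessel pixels falling in box i."""
    img = np.asarray(binary_image, dtype=float)
    total = img.sum()
    log_eps, log_Z = [], []
    for s in box_sizes:                      # box sizes assumed smaller than the image
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
        p = boxes[boxes > 0] / total
        log_eps.append(np.log(s))
        log_Z.append(np.log(np.sum(p ** q)))
    slope = np.polyfit(log_eps, log_Z, 1)[0]
    return slope / (q - 1)
```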

  17. Colour application on mammography image segmentation

    NASA Astrophysics Data System (ADS)

    Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.

    2017-09-01

    The segmentation process is one of the most important steps in image processing and computer vision since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures and requires precise segmentation results, which are necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissues. Since mammography images are grayscale, researchers are looking at the effect of colour in the segmentation process of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, hence providing a richer description of objects in the scene. Colour images contain ten percent (10%) additional edge information as compared to their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images since a colour space is a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images with the purpose of testing the effect of colours on the segmentation of abnormality regions in the mammography images. We applied the segmentation process using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that all segmentation with the colour maps can be done successfully, even for blurred and noisy images. Also, the size of the abnormality region is reduced when compared to the segmentation area without the colour map. The green colour map segmentation produced the smallest percentage of average relative error (10.009%) while the yellow colour map segmentation gave the largest percentage of relative error (11.367%).

  18. Local/non-local regularized image segmentation using graph-cuts: application to dynamic and multispectral MRI.

    PubMed

    Hanson, Erik A; Lundervold, Arvid

    2013-11-01

    Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information, causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.

  19. A semiautomatic segmentation method for prostate in CT images using local texture classification and statistical shape modeling.

    PubMed

    Shahedi, Maysam; Halicek, Martin; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei

    2018-06-01

    Prostate segmentation in computed tomography (CT) images is useful for treatment planning and procedure guidance such as external beam radiotherapy and brachytherapy. However, because of the low soft-tissue contrast of CT images, manual segmentation of the prostate is a time-consuming task with high interobserver variation. In this study, we proposed a semiautomated, three-dimensional (3D) segmentation method for prostate CT images using shape and texture analysis, and we evaluated the method against manual reference segmentations. The prostate gland usually has a globular shape with a smoothly curved surface, and its shape can be accurately modeled or reconstructed from a limited number of well-distributed surface points. In a training dataset, using the prostate gland centroid point as the origin of a coordinate system, we defined an intersubject correspondence between the prostate surface points based on spherical coordinates. We applied this correspondence to generate a point distribution model for prostate shape using principal component analysis and to study the local texture difference between prostate and nonprostate tissue close to the different prostate surface subregions. We used the learned shape and texture characteristics of the prostate in CT images and then combined them with user inputs to segment a new image. We trained our segmentation algorithm using 23 CT images and tested the algorithm on two sets of 10 non-brachytherapy and 37 post-low-dose-rate brachytherapy CT images. We used a set of error metrics to evaluate the segmentation results using two experts' manual reference segmentations. For both non-brachytherapy and post-brachytherapy image sets, the average measured Dice similarity coefficient (DSC) was 88% and the average mean absolute distance (MAD) was 1.9 mm. The average measured differences between the two experts on both datasets were 92% (DSC) and 1.1 mm (MAD). The proposed semiautomatic segmentation algorithm showed a fast, robust, and accurate performance for 3D prostate segmentation of CT images, specifically when no previous intrapatient information, that is, previously segmented images, was available. The accuracy of the algorithm is comparable to the best performance results reported in the literature and approaches the interexpert variability observed in manual segmentation. © 2018 American Association of Physicists in Medicine.
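
    The point distribution model step can be sketched with a plain PCA over corresponded surface landmarks; the array shapes and mode count below are assumptions, and the spherical-coordinate correspondence and the texture model are not shown.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_point_distribution_model(surface_points, n_modes=5):
    """Point distribution model from corresponded prostate surface points.
    `surface_points` has shape (n_subjects, n_landmarks, 3), with landmarks
    already placed in correspondence. n_modes must be smaller than the number
    of training shapes. Returns the mean shape, shape modes, and variances."""
    n_subjects, n_landmarks, _ = surface_points.shape
    X = surface_points.reshape(n_subjects, n_landmarks * 3)
    pca = PCA(n_components=n_modes).fit(X)
    mean_shape = pca.mean_.reshape(n_landmarks, 3)
    modes = pca.components_.reshape(n_modes, n_landmarks, 3)
    return mean_shape, modes, pca.explained_variance_

def synthesize_shape(mean_shape, modes, weights):
    """New plausible shape = mean + sum_k w_k * mode_k."""
    return mean_shape + np.tensordot(np.asarray(weights, dtype=float), modes, axes=1)
```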

  20. Performance analysis of unsupervised optimal fuzzy clustering algorithm for MRI brain tumor segmentation.

    PubMed

    Blessy, S A Praylin Selva; Sulochana, C Helen

    2015-01-01

    Segmentation of brain tumor from Magnetic Resonance Imaging (MRI) becomes very complicated due to the structural complexities of human brain and the presence of intensity inhomogeneities. To propose a method that effectively segments brain tumor from MR images and to evaluate the performance of unsupervised optimal fuzzy clustering (UOFC) algorithm for segmentation of brain tumor from MR images. Segmentation is done by preprocessing the MR image to standardize intensity inhomogeneities followed by feature extraction, feature fusion and clustering. Different validation measures are used to evaluate the performance of the proposed method using different clustering algorithms. The proposed method using UOFC algorithm produces high sensitivity (96%) and low specificity (4%) compared to other clustering methods. Validation results clearly show that the proposed method with UOFC algorithm effectively segments brain tumor from MR images.

  1. Segmentation of the pectoral muscle in breast MR images using structure tensor and deformable model

    NASA Astrophysics Data System (ADS)

    Lee, Myungeun; Kim, Jong Hyo

    2012-02-01

    Recently, breast MR images have been used in a wider clinical area including diagnosis, treatment planning, and treatment response evaluation, which requires quantitative analysis and breast tissue segmentation. Although several methods have been proposed for segmenting MR images, segmenting out breast tissues robustly from surrounding structures across a wide range of anatomical diversity still remains challenging. Therefore, in this paper, we propose a practical and general-purpose approach for segmenting the pectoral muscle boundary based on the structure tensor and a deformable model. The segmentation workflow comprises four key steps: preprocessing, detection of the region of interest (ROI) within the breast region, segmenting the pectoral muscle, and finally extracting and refining the pectoral muscle boundary. From experimental results we show that the proposed method can segment the pectoral muscle robustly in diverse patient cases. In addition, the proposed method will facilitate quantitative research on various breast images.
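
    The structure tensor referred to above can be computed directly from smoothed gradient outer products, as in this NumPy/SciPy sketch; the deformable-model evolution that uses the resulting orientation field is not included, and the smoothing scale is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_orientation(image, sigma=2.0):
    """2D structure tensor of an MR slice: Gaussian-smoothed outer products of
    the image gradient. The dominant eigenvector gives the local boundary
    orientation, and the coherence (in [0, 1]) indicates how anisotropic the
    local structure is -- both useful for steering a deformable model."""
    img = image.astype(float)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    Jxx = gaussian_filter(gx * gx, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    orientation = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)
    coherence = np.sqrt((Jxx - Jyy) ** 2 + 4 * Jxy ** 2) / (Jxx + Jyy + 1e-12)
    return orientation, coherence
```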

  2. Web-accessible cervigram automatic segmentation tool

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.

    2010-03-01

    Uterine cervix image analysis is of great importance to the study of uterine cervix cancer, which is among the leading cancers affecting women worldwide. In this paper, we describe our proof-of-concept, Web-accessible system for automated segmentation of significant tissue regions in uterine cervix images, which also demonstrates our research efforts toward promoting collaboration between engineers and physicians for medical image analysis projects. Our design and implementation unifies the merits of two commonly used languages, MATLAB and Java. It circumvents the heavy workload of recoding the sophisticated segmentation algorithms originally developed in MATLAB into Java while allowing remote users who are not experienced programmers and algorithms developers to apply those processing methods to their own cervicographic images and evaluate the algorithms. Several other practical issues of the systems are also discussed, such as the compression of images and the format of the segmentation results.

  3. Multi-object segmentation framework using deformable models for medical imaging analysis.

    PubMed

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

    Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures on medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation, combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing a suitable combination to be selected under different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed framework has a wide range of applications, especially in the presence of adjacent structures of interest or under intra-structure inhomogeneities, giving excellent quantitative results.

  4. Segmenting Images for a Better Diagnosis

    NASA Technical Reports Server (NTRS)

    2004-01-01

    NASA's Hierarchical Segmentation (HSEG) software has been adapted by Bartron Medical Imaging, LLC, for use in segmentation, feature extraction, pattern recognition, and classification of medical images. Bartron acquired licenses from NASA Goddard Space Flight Center for application of the HSEG concept to medical imaging, from the California Institute of Technology/Jet Propulsion Laboratory to incorporate pattern-matching software, and from Kennedy Space Center for data-mining and edge-detection programs. The Med-Seg[TM] unit developed by Bartron provides improved diagnoses for a wide range of medical images, including computed tomography scans, positron emission tomography scans, magnetic resonance imaging, ultrasound, digitized X-ray, digitized mammography, dental X-ray, soft tissue analysis, and moving object analysis. It also can be used in analysis of soft-tissue slides. Bartron's future plans include the application of HSEG technology to drug development. NASA is advancing its HSEG software to learn more about the Earth's magnetosphere.

  5. Grayscale image segmentation for real-time traffic sign recognition: the hardware point of view

    NASA Astrophysics Data System (ADS)

    Cao, Tam P.; Deng, Guang; Elton, Darrell

    2009-02-01

    In this paper, we study several grayscale-based image segmentation methods for real-time road sign recognition applications on an FPGA hardware platform. The performance of different image segmentation algorithms in different lighting conditions is initially compared using PC simulation. Based on these results and analysis, suitable algorithms are implemented and tested on a real-time FPGA speed sign detection system. Experimental results show that the system using segmented images uses significantly fewer hardware resources on an FPGA while maintaining comparable system performance. The system is capable of processing 60 live video frames per second.

  6. Psoriasis skin biopsy image segmentation using Deep Convolutional Neural Network.

    PubMed

    Pal, Anabik; Garain, Utpal; Chandra, Aditi; Chatterjee, Raghunath; Senapati, Swapan

    2018-06-01

    The development of machine-assisted tools for automatic analysis of psoriasis skin biopsy images plays an important role in clinical assistance. The development of an automatic approach for accurate segmentation of psoriasis skin biopsy images is the initial prerequisite for such a system. However, the complex cellular structure, the presence of imaging artifacts, and uneven staining variation make the task challenging. This paper presents a pioneering attempt at automatic segmentation of psoriasis skin biopsy images. Several deep neural architectures are tried for segmenting psoriasis skin biopsy images. Deep models are used for classifying the super-pixels generated by Simple Linear Iterative Clustering (SLIC), and the segmentation performance of these architectures is compared with traditional hand-crafted-feature-based approaches built on popular classifiers like K-Nearest Neighbor (KNN), Support Vector Machine (SVM) and Random Forest (RF). A U-shaped Fully Convolutional Neural Network (FCN) is also used in an end-to-end learning fashion, where the input is the original color image and the output is the segmentation class map for the skin layers. An annotated real psoriasis skin biopsy image data set of ninety (90) images is developed and used for this research. The segmentation performance is evaluated with two metrics, namely Jaccard's Coefficient (JC) and the Ratio of Correct Pixel Classification (RCPC) accuracy. The experimental results show that the CNN-based approaches outperform the traditional hand-crafted-feature-based classification approaches. The present research shows that a practical system can be developed for machine-assisted analysis of psoriasis disease. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Bone marrow cavity segmentation using graph-cuts with wavelet-based texture feature.

    PubMed

    Shigeta, Hironori; Mashita, Tomohiro; Kikuta, Junichi; Seno, Shigeto; Takemura, Haruo; Ishii, Masaru; Matsuda, Hideo

    2017-10-01

    Emerging bioimaging technologies enable us to capture various dynamic cellular activities. As large amounts of data are obtained these days and it is becoming unrealistic to manually process massive numbers of images, automatic analysis methods are required. One of the issues for automatic image segmentation is that image-taking conditions are variable. Thus, many manual inputs are commonly required for each image. In this paper, we propose a bone marrow cavity (BMC) segmentation method for bone images, as BMC is considered to be related to the mechanism of bone remodeling, osteoporosis, and so on. To reduce the manual inputs needed to segment BMC, we classified the texture pattern using wavelet transformation and a support vector machine. We also integrated the result of texture pattern classification into the graph-cuts-based image segmentation method because texture analysis does not consider spatial continuity. Our method is applicable to a particular frame in an image sequence in which the condition of the fluorescent material is variable. In the experiment, we evaluated our method with nine types of mother wavelets and several sets of scale parameters. The proposed method with graph-cuts and texture pattern classification performs well without manual inputs by a user.

  8. Automatic segmentation of solitary pulmonary nodules based on local intensity structure analysis and 3D neighborhood features in 3D chest CT images

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

    This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system, since an SPN may indicate an early stage of lung cancer. Due to the similar intensities of SPNs and other chest structures such as blood vessels, many false positives (FPs) are generated by nodule detection methods. To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to enhance blob-like structures as initial nodule candidates. Then a fine segmentation is performed to segment a more accurate region for each nodule candidate. FP reduction is mainly addressed by investigating two neighborhood features based on the volume ratio and the eigenvector of the Hessian, which are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method by using 40 chest CT images, including 20 standard-dose CT images randomly chosen from a local database and 20 low-dose CT images randomly chosen from a public database (LIDC). The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.

  9. Segmentation and Quantitative Analysis of Apoptosis of Chinese Hamster Ovary Cells from Fluorescence Microscopy Images.

    PubMed

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2017-06-01

    Accurate and fast quantitative analysis of living cells from fluorescence microscopy images is useful for evaluating experimental outcomes and cell culture protocols. An algorithm is developed in this work to automatically segment and distinguish apoptotic cells from normal cells. The algorithm involves three steps consisting of two segmentation steps and a classification step. The segmentation steps are: (i) a coarse segmentation, combining a range filter with a marching square method, is used as a prefiltering step to provide the approximate positions of cells within a two-dimensional matrix used to store cells' images and the count of the number of cells for a given image; and (ii) a fine segmentation step using the Active Contours Without Edges method is applied to the boundaries of cells identified in the coarse segmentation step. Although this basic two-step approach provides accurate edges when the cells in a given image are sparsely distributed, the occurrence of clusters of cells in high cell density samples requires further processing. Hence, a novel algorithm for clusters is developed to identify the edges of cells within clusters and to approximate their morphological features. Based on the segmentation results, a support vector machine classifier that uses three morphological features: the mean value of pixel intensities in the cellular regions, the variance of pixel intensities in the vicinity of cell boundaries, and the lengths of the boundaries, is developed for distinguishing apoptotic cells from normal cells. The algorithm is shown to be efficient in terms of computational time, quantitative analysis, and differentiation accuracy, as compared with the use of the active contours method without the proposed preliminary coarse segmentation step.
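
    The coarse segmentation step (range filter plus marching squares) might be sketched as follows; the window size, threshold rule, and contour-length cutoff are assumptions, and the Active Contours Without Edges refinement and SVM classification stages are not reproduced.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, binary_fill_holes
from skimage import measure

def coarse_cell_segmentation(image, window=7, range_thresh=None):
    """Coarse pre-filtering in the spirit of the paper: a local range filter
    (max - min in a sliding window) highlights textured cell regions, and
    marching squares (skimage.measure.find_contours) traces approximate cell
    outlines on the thresholded range image."""
    img = image.astype(float)
    local_range = maximum_filter(img, size=window) - minimum_filter(img, size=window)
    if range_thresh is None:
        range_thresh = local_range.mean() + local_range.std()
    mask = binary_fill_holes(local_range > range_thresh)
    contours = measure.find_contours(mask.astype(float), 0.5)   # marching squares
    # Keep only contours long enough to plausibly enclose a cell.
    return mask, [c for c in contours if len(c) > 50]
```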

  10. Automatic co-segmentation of lung tumor based on random forest in PET-CT images

    NASA Astrophysics Data System (ADS)

    Jiang, Xueqing; Xiang, Dehui; Zhang, Bin; Zhu, Weifang; Shi, Fei; Chen, Xinjian

    2016-03-01

    In this paper, a fully automatic method is proposed to segment the lung tumor in clinical 3D PET-CT images. The proposed method effectively combines PET and CT information to make full use of the high contrast of PET images and the superior spatial resolution of CT images. Our approach consists of three main parts: (1) initial segmentation, in which spines are removed in CT images and initial connected regions are obtained by thresholding-based segmentation in PET images; (2) coarse segmentation, in which a monotonic downhill function is applied to rule out structures which have similar standardized uptake values (SUV) to the lung tumor but do not satisfy a monotonic property in PET images; (3) fine segmentation, in which the random forests method is applied to accurately segment the lung tumor by extracting effective features from PET and CT images simultaneously. We validated our algorithm on a dataset which consists of 24 3D PET-CT images from different patients with non-small cell lung cancer (NSCLC). The average TPVF, FPVF and accuracy rate (ACC) were 83.65%, 0.05% and 99.93%, respectively. The correlation analysis shows that our segmented lung tumor volumes have a strong correlation (average 0.985) with ground truth 1 and ground truth 2 labeled by a clinical expert.

  11. Fractal analysis of INSAR and correlation with graph-cut based image registration for coastline deformation analysis: post seismic hazard assessment of the 2011 Tohoku earthquake region

    NASA Astrophysics Data System (ADS)

    Dutta, P. K.; Mishra, O. P.

    2012-04-01

    Satellite imagery of the 2011 earthquake off the Pacific coast of Tohoku has provided an opportunity to conduct image transformation analyses by employing multi-temporal image retrieval techniques. In this study, we used a new image segmentation algorithm to image coastline deformation by adopting a graph-cut energy minimization framework. Comprehensive coastline deformation analysis of the available INSAR images helped extract disaster information for the affected region of the 2011 Tohoku tsunamigenic earthquake source zone. We attempted to correlate fractal analysis of seismic clustering behavior with image processing analogies, and our observations suggest that an increase in the fractal dimension distribution is associated with clustering of events that may determine the level of devastation of the region. The graph-cut based image registration technique helps us detect the devastation along the Tohoku coastline through changes in pixel intensity, carrying out a regional segmentation of the change in coastal boundary after the tsunami. The study applies transformation parameters to the remotely sensed images by manually segmenting the image and recovering the translation parameters from two images that differ by a rotation. Based on the satellite image analysis through image segmentation, an area of 0.997 sq km in the Honshu region was found to be the maximum damage zone, localized in the coastal belt of the NE Japan forearc region. The analysis, carried out in MATLAB, suggests that the proposed graph-cut algorithm is more robust and accurate than other image registration methods, and that it can give a realistic estimate of the recovered deformation field in pixels corresponding to coastline change. This may help formulate strategies for post-disaster needs assessment of coastal belts affected by strong shaking and tsunamis under disaster risk mitigation programs.

  12. Mammographic images segmentation based on chaotic map clustering algorithm

    PubMed Central

    2014-01-01

    Background This work investigates the applicability of a novel clustering approach to the segmentation of mammographic digital images. The chaotic map clustering algorithm is used to group together similar subsets of image pixels resulting in a medically meaningful partition of the mammography. Methods The image is divided into pixels subsets characterized by a set of conveniently chosen features and each of the corresponding points in the feature space is associated to a map. A mutual coupling strength between the maps depending on the associated distance between feature space points is subsequently introduced. On the system of maps, the simulated evolution through chaotic dynamics leads to its natural partitioning, which corresponds to a particular segmentation scheme of the initial mammographic image. Results The system provides a high recognition rate for small mass lesions (about 94% correctly segmented inside the breast) and the reproduction of the shape of regions with denser micro-calcifications in about 2/3 of the cases, while being less effective on identification of larger mass lesions. Conclusions We can summarize our analysis by asserting that due to the particularities of the mammographic images, the chaotic map clustering algorithm should not be used as the sole method of segmentation. It is rather the joint use of this method along with other segmentation techniques that could be successfully used for increasing the segmentation performance and for providing extra information for the subsequent analysis stages such as the classification of the segmented ROI. PMID:24666766

  13. Multi-scale Gaussian representation and outline-learning based cell image segmentation.

    PubMed

    Farhan, Muhammad; Ruusuvuori, Pekka; Emmenlauer, Mario; Rämö, Pauli; Dehio, Christoph; Yli-Harja, Olli

    2013-01-01

    High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis as the performance of the subsequent steps, for example, cell classification, cell tracking etc., often relies on the results of segmentation. We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach based on image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection, which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step where the nuclei segmentation is used as contextual information. We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics, with cells of varying size, shape, texture and degrees of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them, with an increase of 4-9% in segmentation accuracy and a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.

  14. Multi-scale Gaussian representation and outline-learning based cell image segmentation

    PubMed Central

    2013-01-01

    Background High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis as the performance of the subsequent steps, for example, cell classification, cell tracking etc., often relies on the results of segmentation. Methods We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach based on image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection, which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step where the nuclei segmentation is used as contextual information. Results and conclusions We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics, with cells of varying size, shape, texture and degrees of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them, with an increase of 4-9% in segmentation accuracy and a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks. PMID:24267488

  15. Adaptive image inversion of contrast 3D echocardiography for enabling automated analysis.

    PubMed

    Shaheen, Anjuman; Rajpoot, Kashif

    2015-08-01

    Contrast 3D echocardiography (C3DE) is commonly used to enhance the visual quality of ultrasound images in comparison with non-contrast 3D echocardiography (3DE). Although the image quality in C3DE is perceived to be improved for visual analysis, it actually deteriorates for the purpose of automatic or semi-automatic analysis due to higher speckle noise and intensity inhomogeneity. Therefore, LV endocardial feature extraction and segmentation from C3DE images remain a challenging problem. To address this challenge, this work proposes an adaptive pre-processing method to invert the appearance of the C3DE image. The image inversion is based on an image intensity threshold value which is automatically estimated through image histogram analysis. In the inverted appearance, the LV cavity appears dark while the myocardium appears bright, thus making it similar in appearance to a 3DE image. Moreover, the resulting inverted image has a high-contrast, low-noise appearance, yielding a strong LV endocardium boundary and facilitating feature extraction for segmentation. Our results demonstrate that the inverted appearance of the contrast image enables the subsequent LV segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
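
    A hedged sketch of the inversion idea: estimate an intensity threshold from the histogram (here with an iterative mean-of-class-means rule, which may differ from the authors' histogram analysis) and flip the intensity scale so the contrast-filled cavity appears dark.

```python
import numpy as np

def estimate_threshold(volume, tol=1e-3, max_iter=100):
    """Automatic intensity threshold via the iterative mean-of-class-means
    (Ridler-Calvard style) scheme, used here as a stand-in for the paper's
    histogram analysis."""
    v = volume.astype(float).ravel()
    t = v.mean()
    for _ in range(max_iter):
        lo, hi = v[v <= t], v[v > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t

def invert_c3de(volume):
    """Flip the intensity scale so the cavity becomes dark and the myocardium
    bright; the returned threshold can gate whether inversion is applied."""
    t = estimate_threshold(volume)
    inverted = volume.max() - volume.astype(float)
    return inverted, t
```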

  16. Metric Learning for Hyperspectral Image Segmentation

    NASA Technical Reports Server (NTRS)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
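
    The learned metric can be emulated with scikit-learn's LDA: distances are taken in the discriminant space instead of raw band space. The labeled pixel spectra passed to these functions are hypothetical training data, and the graph-based segmentation that consumes this distance is not shown.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def learn_metric(train_spectra, train_labels):
    """Fit multiclass LDA on labeled pixel spectra (n_pixels x n_bands) so that
    directions separating the training classes are emphasized."""
    return LinearDiscriminantAnalysis().fit(train_spectra, train_labels)

def learned_distance(lda, spectrum_a, spectrum_b):
    """Task-specific distance between two pixel spectra: Euclidean distance in
    the LDA-transformed space, which down-weights directions that do not
    separate the training classes."""
    a = lda.transform(np.asarray(spectrum_a, dtype=float).reshape(1, -1))
    b = lda.transform(np.asarray(spectrum_b, dtype=float).reshape(1, -1))
    return float(np.linalg.norm(a - b))
```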

  17. Analysis of gene expression levels in individual bacterial cells without image segmentation.

    PubMed

    Kwak, In Hae; Son, Minjun; Hagen, Stephen J

    2012-05-11

    Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However, segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast vs fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly. Copyright © 2012 Elsevier Inc. All rights reserved.
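
    A very reduced sketch of the segmentation-free idea: regress fluorescence against phase-contrast brightness over all pixels and use the fitted slope as an expression proxy. This is not the authors' physical model, and the background-percentile correction is an assumption.

```python
import numpy as np

def mean_expression_proxy(phase, fluor, background_percentile=10):
    """Segmentation-free proxy for expression level: regress fluorescence pixel
    brightness against phase-contrast pixel brightness over the whole field.
    Under the assumption that phase-contrast brightness tracks cell material,
    the fitted slope is proportional to fluorescence per unit cell material."""
    p = phase.astype(float).ravel()
    f = fluor.astype(float).ravel()
    # Remove an estimated camera/background offset from the fluorescence channel.
    f = f - np.percentile(f, background_percentile)
    slope, intercept = np.polyfit(p, f, 1)
    return slope
```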

  18. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    PubMed

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis, since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, the method can segment blood cells. For that, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate noise and prepare the image for suitable segmentation. In the wavelet denoising step we determine the best wavelet, i.e. the one that yields a segmentation with the largest area within the cell. We study different wavelet families and conclude that the db1 wavelet is the best, and that it can serve for subsequent work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on a selection of blood cell images.
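
    A rough outline of such a pipeline, assuming PyWavelets and scikit-image; the db1 wavelet follows the abstract, while the universal soft threshold and the specific morphological clean-up are illustrative choices:

      import numpy as np
      import pywt
      from skimage.filters import threshold_otsu
      from skimage.morphology import binary_opening, binary_closing, disk

      def segment_blood_cells(image, wavelet="db1", level=2):
          # 1) wavelet denoising: shrink the detail coefficients, keep the approximation
          coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745      # noise estimate
          thr = sigma * np.sqrt(2 * np.log(image.size))           # universal threshold
          denoised_coeffs = [coeffs[0]] + [
              tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
              for detail in coeffs[1:]
          ]
          denoised = pywt.waverec2(denoised_coeffs, wavelet)[: image.shape[0], : image.shape[1]]
          # 2) global threshold (assumes cells brighter than background),
          # 3) morphological clean-up of the binary mask
          mask = denoised > threshold_otsu(denoised)
          mask = binary_closing(binary_opening(mask, disk(2)), disk(2))
          return mask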

  19. Sensitivity analysis for high-contrast missions with segmented telescopes

    NASA Astrophysics Data System (ADS)

    Leboulleux, Lucie; Sauvage, Jean-François; Pueyo, Laurent; Fusco, Thierry; Soummer, Rémi; N'Diaye, Mamadou; St. Laurent, Kathryn

    2017-09-01

    Segmented telescopes enable large-aperture space telescopes for the direct imaging and spectroscopy of habitable worlds. However, the increased complexity of their aperture geometry, due to the central obstruction, support structures, and segment gaps, makes high-contrast imaging very challenging. In this context, we present an analytical model that will enable us to establish a comprehensive error budget to evaluate the constraints on the segments and the influence of the error terms on the final image and contrast. Indeed, the target contrast of 10^10 needed to image Earth-like planets imposes drastic requirements on both segment alignment and telescope stability. Although space telescopes operate in a friendlier environment than ground-based telescopes, remaining vibrations and resonant modes of the segments can still deteriorate the contrast. In this communication, we develop and validate the analytical model, and compare its outputs to images produced by end-to-end simulations.

  20. Review of automatic detection of pig behaviours by using image analysis

    NASA Astrophysics Data System (ADS)

    Han, Shuqing; Zhang, Jianhua; Zhu, Mengshuai; Wu, Jianzhai; Kong, Fantao

    2017-06-01

    Automatic detection of lying, moving, feeding, drinking, and aggressive behaviours of pigs by means of image analysis can save staff observation effort. It would help staff detect diseases or injuries of pigs early during breeding and improve the management efficiency of the swine industry. This study reviews the progress of pig behaviour detection based on image analysis, covering advances in segmentation of the pig body, separation of touching (adherent) pigs, and extraction of behaviour characteristic parameters. Challenges for achieving automatic detection of pig behaviours are summarized.

  1. Automated measurements of metabolic tumor volume and metabolic parameters in lung PET/CT imaging

    NASA Astrophysics Data System (ADS)

    Orologas, F.; Saitis, P.; Kallergi, M.

    2017-11-01

    Patients with lung tumors or inflammatory lung disease could greatly benefit, in terms of treatment and follow-up, from PET/CT quantitative imaging, namely measurements of metabolic tumor volume (MTV), standardized uptake values (SUVs) and total lesion glycolysis (TLG). The purpose of this study was the development of an unsupervised or partially supervised algorithm using standard image processing tools for measuring MTV, SUV, and TLG from lung PET/CT scans. Automated metabolic lesion volume and metabolic parameter measurements were achieved through a 5-step algorithm: (i) the segmentation of the lung areas on the CT slices, (ii) the registration of the CT segmented lung regions on the PET images to define the anatomical boundaries of the lungs on the functional data, (iii) the segmentation of the regions of interest (ROIs) on the PET images based on adaptive thresholding and clinical criteria, (iv) the estimation of the number of pixels and pixel intensities in the PET slices of the segmented ROIs, (v) the estimation of MTV, SUVs, and TLG from the previous step and DICOM header data. Whole body PET/CT scans of patients with sarcoidosis were used for training and testing the algorithm. Lung area segmentation on the CT slices was better achieved with semi-supervised techniques that reduced false positive detections significantly. Lung segmentation results agreed with the lung volumes published in the literature, while the agreement between experts and the algorithm in the segmentation of the lesions was around 88%. Segmentation results depended on the image resolution selected for processing. The clinical parameters, SUV (mean, max, or peak) and TLG, estimated from the segmented ROIs and DICOM header data, provided a way to correlate imaging data with clinical and demographic data. In conclusion, automated MTV, SUV, and TLG measurements offer powerful analysis tools in PET/CT imaging of the lungs. Custom-made algorithms are often a better approach than the manufacturer's general-purpose analysis software, and at much lower cost. Relatively simple processing techniques can lead to customized, unsupervised or partially supervised methods that successfully perform the desired analysis and adapt to the specific disease requirements.
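
    For reference, the step-(v) quantities follow directly from the segmented ROI voxels and the DICOM header values; a hedged sketch using the standard body-weight SUV definition (the variable names and the assumption of decay-corrected input are ours, not the authors'):

      import numpy as np

      def pet_metrics(activity_bq_ml, roi_mask, voxel_volume_ml,
                      injected_dose_bq, body_weight_g):
          """Compute SUV statistics, MTV and TLG for one segmented ROI.

          activity_bq_ml : decay-corrected activity concentration image (Bq/mL)
          roi_mask       : boolean array, True inside the segmented lesion
          """
          suv = activity_bq_ml * body_weight_g / injected_dose_bq   # body-weight SUV
          roi_suv = suv[roi_mask]
          mtv_ml = roi_mask.sum() * voxel_volume_ml                 # metabolic tumor volume
          suv_mean, suv_max = roi_suv.mean(), roi_suv.max()
          tlg = suv_mean * mtv_ml                                   # total lesion glycolysis
          return {"SUVmean": suv_mean, "SUVmax": suv_max, "MTV_ml": mtv_ml, "TLG": tlg}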

  2. An Analysis of Image Segmentation Time in Beam’s-Eye-View Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Chun; Spelbring, D.R.; Chen, George T.Y.

    In this work we tabulate and histogram the image segmentation time for beam’s-eye-view (BEV) treatment planning in our center. The average time needed to generate contours on CT images delineating normal structures and treatment target volumes is calculated using a database containing over 500 patients’ BEV plans. The average number of contours and the total image segmentation time needed for BEV plans in three common treatment sites, namely head/neck, lung/chest, and prostate, were estimated.

  3. Statistical representative elementary volumes of porous media determined using greyscale analysis of 3D tomograms

    NASA Astrophysics Data System (ADS)

    Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.

    2017-09-01

    Digital rock physics carries the dogmatic concept of having to segment volume images for quantitative analysis, but segmentation rejects huge amounts of signal information. Information that is essential for the analysis of difficult and marginally resolved samples, such as materials with very small features, is lost during segmentation. In X-ray nanotomography reconstructions of Hod chalk we observed partial volume voxels with an abundance that limits segmentation-based analysis. Therefore, we investigated the suitability of greyscale analysis for establishing statistical representative elementary volumes (sREVs) for the important petrophysical parameters of this type of chalk, namely porosity, specific surface area and diffusive tortuosity, by using volume images without segmenting the datasets. Instead, grey level intensities were transformed to a voxel-level porosity estimate using a Gaussian mixture model. A simple model assumption was made that allowed formulating a two-point correlation function for surface area estimates using Bayes' theorem. The same assumption enables random walk simulations in the presence of severe partial volume effects. The established sREVs illustrate that in compacted chalk, these simulations cannot be performed in binary representations without increasing the resolution of the imaging system to a point where the spatial restrictions of the represented sample volume render the precision of the measurement unacceptable. We illustrate this by analyzing the origins of variance in the quantitative analysis of volume images, i.e. resolution dependence and intersample and intrasample variance. Although we cannot make any claims on the accuracy of the approach, eliminating the segmentation step from the analysis enables comparative studies with higher precision and repeatability.
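
    A sketch of a mixture-model grey-to-porosity transform of this general kind, assuming scikit-learn; the linear mapping between the two fitted means is an illustrative simplification, not the paper's exact transform:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def voxel_porosity(grey, sample_size=200_000, seed=0):
          """Map grey values to a [0, 1] voxel-level porosity estimate.

          A two-component Gaussian mixture locates the pore and solid peaks;
          intermediate (partial-volume) grey values are mapped linearly
          between the two means.
          """
          flat = grey.reshape(-1, 1).astype(float)
          rng = np.random.default_rng(seed)
          idx = rng.choice(len(flat), size=min(sample_size, len(flat)), replace=False)
          gm = GaussianMixture(n_components=2, random_state=0).fit(flat[idx])
          mu_pore, mu_solid = np.sort(gm.means_.ravel())     # pore phase is darker
          phi = (mu_solid - flat.ravel()) / (mu_solid - mu_pore)
          return np.clip(phi, 0.0, 1.0).reshape(grey.shape)

    The sample-averaged porosity is then simply the mean of the returned voxel estimates.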

  4. Performance evaluation of 2D and 3D deep learning approaches for automatic segmentation of multiple organs on CT images

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Yamada, Kazuma; Kojima, Takuya; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi

    2018-02-01

    The purpose of this study is to evaluate and compare the performance of modern deep learning techniques for automatically recognizing and segmenting multiple organ regions in 3D CT images. CT image segmentation is one of the important tasks in medical image analysis and is still very challenging. Deep learning approaches have demonstrated the capability of scene recognition and semantic segmentation on natural images and have been used to address segmentation problems in medical images. Although several works have shown promising results for CT image segmentation using deep learning approaches, there has been no comprehensive evaluation of deep learning segmentation performance on multiple organs across different portions of CT scans. In this paper, we evaluated and compared the segmentation performance of two different deep learning approaches that used 2D and 3D deep convolutional neural networks (CNNs), without and with a pre-processing step. A conventional approach that represents the state-of-the-art performance of CT image segmentation without deep learning was also used for comparison. A dataset that includes 240 CT images scanned on different portions of human bodies was used for performance evaluation. Up to 17 types of organ regions in each CT scan were segmented automatically and compared with human annotations using the intersection-over-union (IU) ratio as the criterion. The experimental results demonstrated that the IUs of the segmentation results, averaged over the 17 organ types, were 79% and 67% for the 3D and 2D deep CNNs, respectively. All the results of the deep learning approaches showed better accuracy and robustness than the conventional segmentation method based on probabilistic atlas and graph-cut methods. The effectiveness and usefulness of deep learning approaches were demonstrated for solving the multiple-organ segmentation problem on 3D CT images.
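
    For reference, the IU criterion used above is straightforward to compute per organ label; a minimal sketch:

      import numpy as np

      def per_organ_iou(pred, truth, organ_labels):
          """Intersection-over-union per organ label for one CT volume.

          pred, truth : integer label volumes of identical shape.
          Returns {label: IU}; the 79%/67% figures above are means of such
          values over organ types.
          """
          ious = {}
          for lab in organ_labels:
              p, t = pred == lab, truth == lab
              union = np.logical_or(p, t).sum()
              ious[lab] = np.logical_and(p, t).sum() / union if union else np.nan
          return ious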

  5. Segmentation of white rat sperm image

    NASA Astrophysics Data System (ADS)

    Bai, Weiguo; Liu, Jianguo; Chen, Guoyuan

    2011-11-01

    The segmentation of sperm images exerts a profound influence on the analysis of sperm morphology, which plays a significant role in research on animal infertility and reproduction. To overcome the low contrast and heavy noise of microscope images and to obtain better segmentation results, this paper presents a multi-scale gradient operator combined with multiple structuring elements for micro-spermatozoa images of the white rat: the multi-scale gradient operator smooths image noise, while the multiple structuring elements retain more details of the sperm shapes. The Otsu method is then used to segment the modified gradient image, in which the grey levels are strong for sperm and weak for background, converting it into a binary sperm image. Because the resulting binary image contains impurities whose shapes differ from those of sperm, a form factor is used to filter out objects whose form factor value exceeds a selected critical value and to retain the rest, yielding the final binary image of the segmented sperm. Experiments show the method's clear advantage in the segmentation of micro-spermatozoa images.
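
    A hedged sketch of the shape-based filtering step, assuming scikit-image; the form-factor definition (perimeter squared over 4π·area) and the critical value are illustrative assumptions, with objects above the cut-off removed as in the abstract:

      import numpy as np
      from skimage.measure import label, regionprops

      def filter_by_form_factor(binary, critical_value=1.8):
          """Remove impurity objects from a binary segmentation by shape.

          form factor = perimeter**2 / (4 * pi * area); it equals 1 for a
          circle and grows for more irregular outlines. The critical value
          here is only a placeholder, not the paper's tuned threshold.
          """
          out = np.zeros_like(binary, dtype=bool)
          for region in regionprops(label(binary)):
              if region.area == 0 or region.perimeter == 0:
                  continue
              form_factor = region.perimeter ** 2 / (4.0 * np.pi * region.area)
              if form_factor <= critical_value:   # retain objects at or below the cut-off
                  out[tuple(region.coords.T)] = True
          return out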

  6. Objects Grouping for Segmentation of Roads Network in High Resolution Images of Urban Areas

    NASA Astrophysics Data System (ADS)

    Maboudi, M.; Amini, J.; Hahn, M.

    2016-06-01

    Updated road databases are required for many purposes such as urban planning, disaster management, car navigation, route planning, traffic management and emergency handling. In the last decade, the improvement in the spatial resolution of VHR civilian satellite sensors, the main source for large-scale mapping applications, has been so considerable that the ground sample distance (GSD) has become finer than the size of common urban objects of interest such as buildings, trees and road parts. This technological advancement pushed the development of "Object-based Image Analysis (OBIA)" as an alternative to pixel-based image analysis methods. Segmentation, as one of the main stages of OBIA, provides the image objects on which most of the following processes are applied. Therefore, the success of an OBIA approach is strongly affected by the segmentation quality. In this paper, we propose a purpose-dependent refinement strategy for grouping road segments in urban areas using maximal-similarity-based region merging. For our investigations with the proposed method, we use high resolution images of several urban sites. The promising results suggest that the proposed approach is applicable to the grouping of road segments in urban areas.

  7. Multifractal-based nuclei segmentation in fish images.

    PubMed

    Reljin, Nikola; Slavkovic-Ilic, Marijeta; Tapia, Coya; Cihoric, Nikola; Stankovic, Srdjan

    2017-09-01

    A method for nuclei segmentation in fluorescence in-situ hybridization (FISH) images, based on inverse multifractal analysis (IMFA), is proposed. From the blue channel of the FISH image in RGB format, the matrix of Hölder exponents, in one-to-one correspondence with the image pixels, is determined first. The following semi-automatic procedure is proposed: initial nuclei segmentation is performed automatically from the matrix of Hölder exponents by applying predefined hard thresholding; the user then evaluates the result and can refine the segmentation by changing the threshold, if necessary. After successful nuclei segmentation, the HER2 (human epidermal growth factor receptor 2) score can be determined in the usual way: by counting the red and green dots within the segmented nuclei and finding their ratio. The IMFA segmentation method was tested on 100 clinical cases, evaluated by a skilled pathologist. The results show that the new method has advantages compared with previously reported methods.

  8. Correction tool for Active Shape Model based lumbar muscle segmentation.

    PubMed

    Valenzuela, Waldo; Ferguson, Stephen J; Ignasiak, Dominika; Diserens, Gaelle; Vermathen, Peter; Boesch, Chris; Reyes, Mauricio

    2015-08-01

    In the clinical environment, the accuracy and speed of the image segmentation process play a key role in the analysis of pathological regions. Despite advances in anatomic image segmentation, time-effective correction tools are commonly needed to improve segmentation results. Therefore, these tools must provide fast corrections with a low number of interactions and a user-independent solution. In this work we present a new interactive method for correcting image segmentations. Given an initial segmentation and the original image, our tool provides a 2D/3D environment that enables 3D shape correction through simple 2D interactions. Our scheme is based on direct manipulation of free-form deformation adapted to a 2D environment. This approach enables an intuitive and natural correction of 3D segmentation results. The developed method has been implemented into a software tool and has been evaluated for the task of lumbar muscle segmentation from Magnetic Resonance Images. Experimental results show that full segmentation correction could be performed within an average correction time of 6±4 minutes and an average of 68±37 interactions, while maintaining the quality of the final segmentation result within an average Dice coefficient of 0.92±0.03.

  9. Southeast Asian palm leaf manuscript images: a review of handwritten text line segmentation methods and new challenges

    NASA Astrophysics Data System (ADS)

    Kesiman, Made Windu Antara; Valy, Dona; Burie, Jean-Christophe; Paulus, Erick; Sunarya, I. Made Gede; Hadi, Setiawan; Sok, Kim Heng; Ogier, Jean-Marc

    2017-01-01

    Due to their specific characteristics, palm leaf manuscripts provide new challenges for text line segmentation tasks in document analysis. We investigated the performance of six text line segmentation methods by conducting comparative experimental studies on a collection of palm leaf manuscript images. The image corpus used in this study comes from sample images of palm leaf manuscripts in three different Southeast Asian scripts: Balinese script from Bali and Sundanese script from West Java, both from Indonesia, and Khmer script from Cambodia. For the experiments, four text line segmentation methods that work on binary images were tested: the adaptive partial projection line segmentation approach, the A* path planning approach, the shredding method, and our proposed energy function for the shredding method. Two other methods that can be applied directly to grayscale images were also investigated: the adaptive local connectivity map method and the seam carving-based method. The evaluation criteria and tool provided by the ICDAR2013 Handwriting Segmentation Contest were used in this experiment.

  10. Techniques in helical scanning, dynamic imaging and image segmentation for improved quantitative analysis with X-ray micro-CT

    NASA Astrophysics Data System (ADS)

    Sheppard, Adrian; Latham, Shane; Middleton, Jill; Kingston, Andrew; Myers, Glenn; Varslot, Trond; Fogden, Andrew; Sawkins, Tim; Cruikshank, Ron; Saadatfar, Mohammad; Francois, Nicolas; Arns, Christoph; Senden, Tim

    2014-04-01

    This paper reports on recent advances at the micro-computed tomography facility at the Australian National University. Since 2000 this facility has been a significant centre for developments in imaging hardware and associated software for image reconstruction, image analysis and image-based modelling. In 2010 a new instrument was constructed that utilises theoretically-exact image reconstruction based on helical scanning trajectories, allowing higher cone angles and thus better utilisation of the available X-ray flux. We discuss the technical hurdles that needed to be overcome to allow imaging with cone angles in excess of 60°. We also present dynamic tomography algorithms that enable the changes between one moment and the next to be reconstructed from a sparse set of projections, allowing higher speed imaging of time-varying samples. Researchers at the facility have also created a sizeable distributed-memory image analysis toolkit with capabilities ranging from tomographic image reconstruction to 3D shape characterisation. We show results from image registration and present some of the new imaging and experimental techniques that it enables. Finally, we discuss the crucial question of image segmentation and evaluate some recently proposed techniques for automated segmentation.

  11. Quantitative Analysis of Rat Dorsal Root Ganglion Neurons Cultured on Microelectrode Arrays Based on Fluorescence Microscopy Image Processing.

    PubMed

    Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo

    2015-12-01

    Microelectrode Arrays (MEAs) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity in in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of thresholding-based segmentation, the watershed transform, and object classification. The positions of the microelectrodes are obtained from the transmitted light channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis produced information regarding the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs consider only simple qualitative measures. The proposed framework also aims to standardize the image processing and to compute quantitatively useful measures for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, and so are the neuron segmentation and classification methods (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant for the integrated signal-image study of recorded electrophysiological signals as well as the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments in the estimation of the image processing parameters.
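
    A small sketch of the microelectrode-localization step, assuming scikit-image's circular Hough transform and an approximately known electrode radius and count (both are assumptions here, not values from the paper):

      import numpy as np
      from skimage.feature import canny
      from skimage.transform import hough_circle, hough_circle_peaks

      def find_microelectrodes(transmitted, radius_px, n_electrodes=60, sigma=2.0):
          """Locate microelectrodes in a transmitted-light image via the
          circular Hough transform."""
          edges = canny(transmitted, sigma=sigma)            # edge map of the electrode rims
          radii = np.arange(radius_px - 2, radius_px + 3)    # small radius range around the prior
          hough = hough_circle(edges, radii)
          _, cx, cy, r = hough_circle_peaks(hough, radii, total_num_peaks=n_electrodes)
          return np.column_stack([cy, cx, r])                # (row, col, radius) per electrode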

  12. Man-made objects cuing in satellite imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skurikhin, Alexei N

    2009-01-01

    We present a multi-scale framework for cuing man-made structures in satellite image regions. The approach is based on a hierarchical image segmentation followed by structural analysis. A hierarchical segmentation produces an image pyramid that contains a stack of irregular image partitions, represented as polygonized pixel patches, at successively reduced levels of detail (LODs). We start from the over-segmented image represented by polygons attributed with spectral and texture information. The image is represented as a proximity graph with vertices corresponding to the polygons and edges reflecting polygon relations. This is followed by iterative graph contraction based on Boruvka's Minimum Spanning Tree (MST) construction algorithm. The graph contractions merge the patches based on their pairwise spectral and texture differences. Concurrently with the construction of the irregular image pyramid, structural analysis is performed on the agglomerated patches. Man-made object cuing is based on the analysis of the shape properties of the constructed patches and their spatial relations. The presented framework can be used as a pre-scanning tool for wide-area monitoring to quickly guide further analysis to regions of interest.

  13. Optical Coherence Tomography in the UK Biobank Study - Rapid Automated Analysis of Retinal Thickness for Large Population-Based Studies.

    PubMed

    Keane, Pearse A; Grossi, Carlota M; Foster, Paul J; Yang, Qi; Reisman, Charles A; Chan, Kinpui; Peto, Tunde; Thomas, Dhanes; Patel, Praveen J

    2016-01-01

    To describe an approach to the use of optical coherence tomography (OCT) imaging in large, population-based studies, including methods for OCT image acquisition, storage, and the remote, rapid, automated analysis of retinal thickness. In UK Biobank, OCT images were acquired between 2009 and 2010 using a commercially available "spectral domain" OCT device (3D OCT-1000, Topcon). Images were obtained using a raster scan protocol, 6 mm x 6 mm in area, and consisting of 128 B-scans. OCT image sets were stored on UK Biobank servers in a central repository, adjacent to high performance computers. Rapid, automated analysis of retinal thickness was performed using custom image segmentation software developed by the Topcon Advanced Biomedical Imaging Laboratory (TABIL). This software employs dual-scale gradient information to allow for automated segmentation of nine intraretinal boundaries in a rapid fashion. 67,321 participants (134,642 eyes) in UK Biobank underwent OCT imaging of both eyes as part of the ocular module. 134,611 images were successfully processed with 31 images failing segmentation analysis due to corrupted OCT files or withdrawal of subject consent for UKBB study participation. Average time taken to call up an image from the database and complete segmentation analysis was approximately 120 seconds per data set per login, and analysis of the entire dataset was completed in approximately 28 days. We report an approach to the rapid, automated measurement of retinal thickness from nearly 140,000 OCT image sets from the UK Biobank. In the near future, these measurements will be publicly available for utilization by researchers around the world, and thus for correlation with the wealth of other data collected in UK Biobank. The automated analysis approaches we describe may be of utility for future large population-based epidemiological studies, clinical trials, and screening programs that employ OCT imaging.

  14. Optical Coherence Tomography in the UK Biobank Study – Rapid Automated Analysis of Retinal Thickness for Large Population-Based Studies

    PubMed Central

    Grossi, Carlota M.; Foster, Paul J.; Yang, Qi; Reisman, Charles A.; Chan, Kinpui; Peto, Tunde; Thomas, Dhanes; Patel, Praveen J.

    2016-01-01

    Purpose To describe an approach to the use of optical coherence tomography (OCT) imaging in large, population-based studies, including methods for OCT image acquisition, storage, and the remote, rapid, automated analysis of retinal thickness. Methods In UK Biobank, OCT images were acquired between 2009 and 2010 using a commercially available “spectral domain” OCT device (3D OCT-1000, Topcon). Images were obtained using a raster scan protocol, 6 mm x 6 mm in area, and consisting of 128 B-scans. OCT image sets were stored on UK Biobank servers in a central repository, adjacent to high performance computers. Rapid, automated analysis of retinal thickness was performed using custom image segmentation software developed by the Topcon Advanced Biomedical Imaging Laboratory (TABIL). This software employs dual-scale gradient information to allow for automated segmentation of nine intraretinal boundaries in a rapid fashion. Results 67,321 participants (134,642 eyes) in UK Biobank underwent OCT imaging of both eyes as part of the ocular module. 134,611 images were successfully processed with 31 images failing segmentation analysis due to corrupted OCT files or withdrawal of subject consent for UKBB study participation. Average time taken to call up an image from the database and complete segmentation analysis was approximately 120 seconds per data set per login, and analysis of the entire dataset was completed in approximately 28 days. Conclusions We report an approach to the rapid, automated measurement of retinal thickness from nearly 140,000 OCT image sets from the UK Biobank. In the near future, these measurements will be publicly available for utilization by researchers around the world, and thus for correlation with the wealth of other data collected in UK Biobank. The automated analysis approaches we describe may be of utility for future large population-based epidemiological studies, clinical trials, and screening programs that employ OCT imaging. PMID:27716837

  15. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445

  16. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

    Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  17. Automatic segmentation of invasive breast carcinomas from dynamic contrast-enhanced MRI using time series analysis.

    PubMed

    Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva

    2014-08-01

    To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and CADstream output, computed in terms of the DSC was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. © 2013 Wiley Periodicals, Inc.
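
    For reference, the overlap figures quoted above are Dice similarity coefficients computed from binary overlap; a minimal sketch:

      import numpy as np

      def dice(mask_a, mask_b):
          """Dice similarity coefficient between two binary segmentations:
          DSC = 2 * |A intersect B| / (|A| + |B|)."""
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0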

  18. Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis

    PubMed Central

    Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva

    2013-01-01

    Purpose Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise and fitting algorithms. To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Methods We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist’s segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). Results The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist’s segmentation and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist’s segmentation and CADstream output, computed in terms of the DSC was 0.77 and 0.72 respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC=0.95. Conclusion The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. PMID:24115175

  19. Software and Algorithms for Biomedical Image Data Processing and Visualization

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Lambert, James; Lam, Raymond

    2004-01-01

    A new software equipped with novel image processing algorithms and graphical-user-interface (GUI) tools has been designed for automated analysis and processing of large amounts of biomedical image data. The software, called PlaqTrak, has been specifically used for analysis of plaque on teeth of patients. New algorithms have been developed and implemented to segment teeth of interest from surrounding gum, and a real-time image-based morphing procedure is used to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The PlaqTrak system integrates these components into a single software suite with an easy-to-use GUI (see Figure 1) that allows users to do an end-to-end run of a patient's record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image. The automated and accurate processing of the captured images to segment each tooth [see Figure 2(a)] and then detect plaque on a tooth-by-tooth basis is a critical component of the PlaqTrak system to do clinical trials and analysis with minimal human intervention. These features offer distinct advantages over other competing systems that analyze groups of teeth or synthetic teeth. PlaqTrak divides each segmented tooth into eight regions using an advanced graphics morphing procedure [see results on a chipped tooth in Figure 2(b)], and a pattern recognition classifier is then used to locate plaque [red regions in Figure 2(d)] and enamel regions. The morphing allows analysis within regions of teeth, thereby facilitating detailed statistical analysis such as the amount of plaque present on the biting surfaces on teeth. This software system is applicable to a host of biomedical applications, such as cell analysis and life detection, or robotic applications, such as product inspection or assembly of parts in space and industry.

  20. Automated boundary segmentation and wound analysis for longitudinal corneal OCT images

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Shi, Fei; Zhu, Weifang; Pan, Lingjiao; Chen, Haoyu; Huang, Haifan; Zheng, Kangkeng; Chen, Xinjian

    2017-03-01

    Optical coherence tomography (OCT) has been widely applied in the examination and diagnosis of corneal diseases, but the information directly obtained from OCT images by manual inspection is limited. We propose an automatic processing method to assist ophthalmologists in locating the boundaries in corneal OCT images and analyzing the recovery of corneal wounds after treatment from longitudinal OCT images. It includes the following steps: preprocessing, epithelium and endothelium boundary segmentation and correction, wound detection, corneal boundary fitting and wound analysis. The method was tested on a data set with longitudinal corneal OCT images from 20 subjects. Each subject has five images acquired after the corneal operation over a period of time. The segmentation and classification accuracy of the proposed algorithm is high, and the method can be used for analyzing wound recovery after corneal surgery.

  1. a Region-Based Multi-Scale Approach for Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.

    2016-06-01

    Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter is crucially important for increasing classification accuracy, and it depends on image resolution, image object size and the characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for the eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest neighbour classifier was applied in all segmentation experiments, and equal numbers of pixels were randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). A comparison of region-based and image-based segmentation was carried out on the classified images, and it was found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.

  2. Computational efficient segmentation of cell nuclei in 2D and 3D fluorescent micrographs

    NASA Astrophysics Data System (ADS)

    De Vylder, Jonas; Philips, Wilfried

    2011-02-01

    This paper proposes a new segmentation technique developed for the segmentation of cell nuclei in both 2D and 3D fluorescent micrographs. The proposed method can deal with both blurred edges and touching nuclei. Using a dual scan line algorithm, it is both memory- and computationally efficient, making it attractive for the analysis of images from high-throughput systems and of 3D microscopic images. Experiments show good results, i.e. a recall of over 0.98.

  3. Molar axis estimation from computed tomography images.

    PubMed

    Dongxia Zhang; Yangzhou Gan; Zeyang Xia; Xinwen Zhou; Shoubin Liu; Jing Xiong; Guanglin Li

    2016-08-01

    Estimation of the tooth axis is needed for some clinical dental treatments. Existing methods require segmenting the tooth volume from Computed Tomography (CT) images and then estimating the axis from the tooth volume. However, they may fail when estimating molar axes, because tooth segmentation from CT images is challenging and current segmentation methods may give poor results, especially for tilted molars, which causes the axis estimation to fail. To resolve this problem, this paper proposes a new method for molar axis estimation from CT images. The key innovation is that, instead of estimating the 3D axis of each molar from the segmented volume, the method estimates the 3D axis from two projection images. The method includes three steps. (1) The 3D images of each molar are projected onto two 2D image planes. (2) The molar contour is segmented and the contour's 2D axis is extracted in each 2D projection image; Principal Component Analysis (PCA) and a modified symmetry axis detection algorithm are employed to extract the 2D axis from the segmented molar contour. (3) A 3D molar axis is obtained by combining the two 2D axes. Experimental results verified that the proposed method is effective for estimating molar axes from CT images.
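
    A minimal sketch of the PCA part of step (2), assuming the contour is available as an (N, 2) point array; the modified symmetry-axis detection is not shown:

      import numpy as np

      def contour_axis_2d(points):
          """Principal axis of a segmented molar contour in one projection.

          points : (N, 2) array of (x, y) contour coordinates.
          Returns the centroid and the unit direction of the largest principal
          component, i.e. a 2D axis that can later be combined with the axis
          from the second projection into a 3D axis.
          """
          pts = np.asarray(points, dtype=float)
          centroid = pts.mean(axis=0)
          cov = np.cov((pts - centroid).T)
          eigvals, eigvecs = np.linalg.eigh(cov)
          axis = eigvecs[:, np.argmax(eigvals)]   # direction of maximum variance
          return centroid, axis / np.linalg.norm(axis)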

  4. On the importance of FIB-SEM specific segmentation algorithms for porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salzer, Martin, E-mail: martin.salzer@uni-ulm.de; Thiele, Simon, E-mail: simon.thiele@imtek.uni-freiburg.de; Zengerle, Roland, E-mail: zengerle@imtek.uni-freiburg.de

    2014-09-15

    A new algorithmic approach to the segmentation of highly porous three-dimensional image data gained by focused ion beam tomography is described, which extends the key principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has been shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis of the corresponding microstructure, a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them, and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that respects the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three-dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly fewer artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.

  5. Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization

    NASA Technical Reports Server (NTRS)

    Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)

    2008-01-01

    A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.

  6. RHSEG and Subdue: Background and Preliminary Approach for Combining these Technologies for Enhanced Image Data Analysis, Mining and Knowledge Discovery

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cook, Diane J.

    2008-01-01

    Under a project recently selected for funding by NASA's Science Mission Directorate under the Applied Information Systems Research (AISR) program, Tilton and Cook will design and implement the integration of the Subdue graph based knowledge discovery system, developed at the University of Texas Arlington and Washington State University, with image segmentation hierarchies produced by the RHSEG software, developed at NASA GSFC, and perform pilot demonstration studies of data analysis, mining and knowledge discovery on NASA data. Subdue represents a method for discovering substructures in structural databases. Subdue is devised for general-purpose automated discovery, concept learning, and hierarchical clustering, with or without domain knowledge. Subdue was developed by Cook and her colleague, Lawrence B. Holder. For Subdue to be effective in finding patterns in imagery data, the data must be abstracted up from the pixel domain. An appropriate abstraction of imagery data is a segmentation hierarchy: a set of several segmentations of the same image at different levels of detail in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. The RHSEG program, a recursive approximation to a Hierarchical Segmentation approach (HSEG), can produce segmentation hierarchies quickly and effectively for a wide variety of images. RHSEG and HSEG were developed at NASA GSFC by Tilton. In this presentation we provide background on the RHSEG and Subdue technologies and present a preliminary analysis on how RHSEG and Subdue may be combined to enhance image data analysis, mining and knowledge discovery.

  7. A software tool for automatic classification and segmentation of 2D/3D medical images

    NASA Astrophysics Data System (ADS)

    Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur

    2013-02-01

    Modern medical diagnosis utilizes techniques of visualization of human internal organs (CT, MRI) or of their metabolism (PET). However, evaluation of acquired images made by human experts is usually subjective and qualitative only. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform e.g. attenuation compensation, motion detection, and correction of the partial volume effect in PET images acquired with PET/MR scanners. This article briefly presents the MaZda software package, which supports 2D and 3D medical image analysis aimed at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes combined with various classification, visualization and segmentation tools. Examples of MaZda application in medical studies are also provided.

  8. Cortical Enhanced Tissue Segmentation of Neonatal Brain MR Images Acquired by a Dedicated Phased Array Coil

    PubMed Central

    Shi, Feng; Yap, Pew-Thian; Fan, Yong; Cheng, Jie-Zhi; Wald, Lawrence L.; Gerig, Guido; Lin, Weili; Shen, Dinggang

    2010-01-01

    The acquisition of high quality MR images of neonatal brains is largely hampered by their characteristically small head size and low tissue contrast. As a result, subsequent image processing and analysis, especially brain tissue segmentation, are often hindered. To overcome this problem, a dedicated phased array neonatal head coil is utilized to improve MR image quality by effectively combining images obtained from 8 coil elements without lengthening the data acquisition time. In addition, a subject-specific atlas-based tissue segmentation algorithm is specifically developed for the delineation of fine structures in the acquired neonatal brain MR images. The proposed tissue segmentation method first enhances the sheet-like cortical gray matter (GM) structures in neonatal images with a Hessian filter to generate a cortical GM prior. The prior is then combined with our neonatal population atlas to form a cortical enhanced hybrid atlas, which we refer to as the subject-specific atlas. Various experiments are conducted to compare the proposed method with manual segmentation results, as well as with two additional population-atlas-based segmentation methods. Results show that the proposed method segments the neonatal brain with the highest accuracy compared with the other two methods. PMID:20862268

  9. Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies.

    PubMed

    Koch, Lisa M; Rajchl, Martin; Bai, Wenjia; Baumgartner, Christian F; Tong, Tong; Passerat-Palmbach, Jonathan; Aljabar, Paul; Rueckert, Daniel

    2017-08-22

    Multi-atlas segmentation is a widely used tool in medical image analysis, providing robust and accurate results by learning from annotated atlas datasets. However, the availability of fully annotated atlas images for training is limited due to the time required for the labelling task. Segmentation methods requiring only a proportion of each atlas image to be labelled could therefore reduce the workload on expert raters tasked with annotating atlas images. To address this issue, we first re-examine the labelling problem common in many existing approaches and formulate its solution in terms of a Markov Random Field energy minimisation problem on a graph connecting atlases and the target image. This provides a unifying framework for multi-atlas segmentation. We then show how modifications in the graph configuration of the proposed framework enable the use of partially annotated atlas images and investigate different partial annotation strategies. The proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets for hippocampal and cardiac segmentation. Experiments were performed aimed at (1) recreating existing segmentation techniques with the proposed framework and (2) demonstrating the potential of employing sparsely annotated atlas data for multi-atlas segmentation.

  10. Interactive tele-radiological segmentation systems for treatment and diagnosis.

    PubMed

    Zimeras, S; Gortzis, L G

    2012-01-01

    Telehealth is the exchange of health information and the provision of health care services through electronic information and communications technology, where participants are separated by geographic, time, social and cultural barriers. The shift of telemedicine from desktop platforms to wireless and mobile technologies is likely to have a significant impact on healthcare in the future. It is therefore crucial to develop a general information-exchange e-medical system that enables its users to perform online and offline medical consultations and diagnosis. During medical diagnosis, image analysis techniques combined with doctors' opinions can be useful for final medical decisions. Quantitative analysis of digital images requires detection and segmentation of the borders of the object of interest. In medical images, segmentation has traditionally been done by human experts. Even with the aid of image processing software (computer-assisted segmentation tools), manual segmentation of 2D and 3D CT images is tedious, time-consuming, and thus impractical, especially in cases where a large number of objects must be specified. Substantial computational and storage requirements become especially acute when object orientation and scale have to be considered. Therefore automated or semi-automated segmentation techniques are essential if these software applications are ever to gain widespread clinical use. The main purpose of this work is to analyze segmentation techniques for the definition of anatomical structures within telemedical systems.

  11. Segmentation of Image Data from Complex Organotypic 3D Models of Cancer Tissues with Markov Random Fields

    PubMed Central

    Robinson, Sean; Guyon, Laurent; Nevalainen, Jaakko; Toriseva, Mervi

    2015-01-01

    Organotypic, three dimensional (3D) cell culture models of epithelial tumour types such as prostate cancer recapitulate key aspects of the architecture and histology of solid cancers. Morphometric analysis of multicellular 3D organoids is particularly important when additional components such as the extracellular matrix and tumour microenvironment are included in the model. The complexity of such models has so far limited their successful implementation. There is a great need for automatic, accurate and robust image segmentation tools to facilitate the analysis of such biologically relevant 3D cell culture models. We present a segmentation method based on Markov random fields (MRFs) and illustrate our method using 3D stack image data from an organotypic 3D model of prostate cancer cells co-cultured with cancer-associated fibroblasts (CAFs). The 3D segmentation output suggests that these cell types are in physical contact with each other within the model, which has important implications for tumour biology. Segmentation performance is quantified using ground truth labels and we show how each step of our method increases segmentation accuracy. We provide the ground truth labels along with the image data and code. Using independent image data we show that our segmentation method is also more generally applicable to other types of cellular microscopy and not only limited to fluorescence microscopy. PMID:26630674

  12. Sensitivity analysis for future space missions with segmented telescopes for high-contrast imaging

    NASA Astrophysics Data System (ADS)

    Leboulleux, Lucie; Pueyo, Laurent; Sauvage, Jean-François; Mazoyer, Johan; Soummer, Remi; Fusco, Thierry; Sivaramakrishnan, Anand

    2018-01-01

    The detection and analysis of biomarkers on Earth-like planets using direct imaging will require both high-contrast imaging and spectroscopy at very small angular separations (a 10^10 star-to-planet flux ratio at a few 0.1”). This goal can only be achieved with large telescopes in space to overcome atmospheric turbulence, typically combined with a coronagraphic instrument with wavefront control. Large segmented space telescopes such as those studied for the LUVOIR mission will generate segment-level instabilities and cophasing errors in addition to local mirror surface errors and other aberrations of the overall optical system. These effects contribute directly to the degradation of the final image quality and contrast. We present an analytical model that produces coronagraphic images of a segmented-pupil telescope in the presence of segment phasing aberrations expressed as Zernike polynomials. This model relies on a pair-based projection of the segmented pupil and provides results that match an end-to-end simulation with an rms error on the final contrast of ~3%. The analytical model can be applied to both static and dynamic modes, in either monochromatic or broadband light. It removes the need for the end-to-end Monte-Carlo simulations that are otherwise required to build a rigorous error budget, by enabling quasi-instantaneous analytical evaluations. The ability to invert the analytical model directly provides constraints and tolerances on all segment-level phasing errors and aberrations.

  13. Segmentation of Image Data from Complex Organotypic 3D Models of Cancer Tissues with Markov Random Fields.

    PubMed

    Robinson, Sean; Guyon, Laurent; Nevalainen, Jaakko; Toriseva, Mervi; Åkerfelt, Malin; Nees, Matthias

    2015-01-01

    Organotypic, three dimensional (3D) cell culture models of epithelial tumour types such as prostate cancer recapitulate key aspects of the architecture and histology of solid cancers. Morphometric analysis of multicellular 3D organoids is particularly important when additional components such as the extracellular matrix and tumour microenvironment are included in the model. The complexity of such models has so far limited their successful implementation. There is a great need for automatic, accurate and robust image segmentation tools to facilitate the analysis of such biologically relevant 3D cell culture models. We present a segmentation method based on Markov random fields (MRFs) and illustrate our method using 3D stack image data from an organotypic 3D model of prostate cancer cells co-cultured with cancer-associated fibroblasts (CAFs). The 3D segmentation output suggests that these cell types are in physical contact with each other within the model, which has important implications for tumour biology. Segmentation performance is quantified using ground truth labels and we show how each step of our method increases segmentation accuracy. We provide the ground truth labels along with the image data and code. Using independent image data we show that our segmentation method is also more generally applicable to other types of cellular microscopy and not only limited to fluorescence microscopy.

  14. Evaluation metrics for bone segmentation in ultrasound

    NASA Astrophysics Data System (ADS)

    Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas

    2015-03-01

    Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging because ultrasound has no intensity characteristic specific to bone. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that aids in the development and comparison of such algorithms by quantitatively measuring segmentation performance in ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground-truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per frame and its standard deviation are considered. The metrics provide a means of evaluating the accuracy of frames along the length of a volume, which aids in assessing the accuracy of the volume itself and of the image acquisition approach (positioning and frame frequency). The framework was implemented as an open-source module of the 3D Slicer platform. The ground-truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms, and the implementation fits into a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit from monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
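
    As a sketch of the per-frame bookkeeping such a framework performs (the actual 3D Slicer module is not reproduced here), the snippet below computes sensitivity and specificity from boolean bone masks and summarises them over the frames of a volume; all names are illustrative.

```python
import numpy as np

def bone_segmentation_metrics(auto_mask, truth_mask):
    """Agreement between an automatic segmentation and a manual ground-truth
    mask for one frame; both inputs are boolean arrays of the same shape."""
    tp = np.logical_and(auto_mask, truth_mask).sum()      # bone correctly found
    tn = np.logical_and(~auto_mask, ~truth_mask).sum()    # boneless region correctly empty
    fp = np.logical_and(auto_mask, ~truth_mask).sum()
    fn = np.logical_and(~auto_mask, truth_mask).sum()
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

def volume_summary(per_frame_values):
    """Average performance and its spread along the length of a volume."""
    vals = np.asarray(per_frame_values, dtype=float)
    return np.nanmean(vals), np.nanstd(vals)
```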

  15. Analysis of objects in binary images. M.S. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Leonard, Desiree M.

    1991-01-01

    Digital image processing techniques are typically used to produce improved digital images through the application of successive enhancement techniques to a given image, or to generate quantitative data about the objects within that image. To assist researchers in a wide range of disciplines, e.g., interferometry, heavy rain effects on aerodynamics, and structure recognition research, it is often desirable to count objects in an image and compute their geometric properties. Therefore, an image analysis application package, focusing on a subset of image analysis techniques used for object recognition in binary images, was developed. This report describes the techniques and algorithms utilized in the three main phases of the application, categorized as image segmentation, object recognition, and quantitative analysis. Appendices provide supplemental formulas for the algorithms employed, as well as examples and results from the various image segmentation techniques and the object recognition algorithm implemented.
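
    A present-day equivalent of the object-counting and geometric-property phase can be sketched with scikit-image's connected-component labelling and region properties; this is not the thesis code, only an illustration of the quantitative-analysis step.

```python
from skimage.measure import label, regionprops

def describe_binary_objects(binary_image):
    """Count the objects in a binary image and report basic geometric
    properties for each (area, centroid, perimeter, bounding box)."""
    labelled = label(binary_image)                # connected-component labelling
    objects = [{"area": r.area,
                "centroid": r.centroid,
                "perimeter": r.perimeter,
                "bbox": r.bbox}
               for r in regionprops(labelled)]
    return labelled.max(), objects                # object count, per-object properties
```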

  16. Binary Programming Models of Spatial Pattern Recognition: Applications in Remote Sensing Image Analysis

    DTIC Science & Technology

    1991-12-01

    [Table-of-contents fragment] Topics covered include multi-shape detection, line segment extraction and re-combination, planimetric feature extraction, line segment extraction from statistical texture analysis, and edge following as graph search. [Abstract fragment] ...image after image, could benefit due to the fact that major spatial characteristics of subregions could be extracted, and minor spatial changes could be

  17. Skeleton-based region competition for automated gray matter and white matter segmentation of human brain MR images

    NASA Astrophysics Data System (ADS)

    Chu, Yong; Chen, Ya-Fang; Su, Min-Ying; Nalcioglu, Orhan

    2005-04-01

    Image segmentation is an essential process for quantitative analysis. Segmentation of brain tissues in magnetic resonance (MR) images is very important for understanding the structural-functional relationship in various pathological conditions, such as dementia vs. normal brain aging. Different brain regions are responsible for certain functions and may have specific implications for diagnosis; segmentation may facilitate the analysis of these regions to aid early diagnosis. Region competition has recently been proposed as an effective method for image segmentation that minimizes a generalized Bayes/MDL criterion. However, it is sensitive to the initial conditions (the "seeds"), so an optimal choice of seeds is necessary for accurate segmentation. In this paper, we present a new skeleton-based region competition algorithm for automated gray and white matter segmentation. Skeletons can be considered good "seed regions" since they provide morphological a priori information, thus guaranteeing a correct initial condition. Intensity gradient information is also added to the global energy function to achieve precise boundary localization. The algorithm was applied to gray and white matter segmentation of simulated MRI images from a realistic digital brain phantom. Nine brain regions were manually outlined to evaluate performance in these separate regions. The results were compared to the gold-standard measure to calculate the true-positive and true-negative percentages. In general, the method worked well, with 96% accuracy, although performance varied across regions. We conclude that skeleton-based region competition is an effective method for gray and white matter segmentation.
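
    The region competition energy itself is not reproduced in this record. The sketch below only illustrates the seeding idea: skeletons of a rough foreground mask serve as seed regions, and a gradient-driven watershed stands in for the Bayes/MDL region growth; the thresholding choices are assumptions.

```python
from scipy import ndimage as ndi
from skimage.filters import sobel, threshold_otsu
from skimage.morphology import skeletonize
from skimage.segmentation import watershed

def skeleton_seeded_segmentation(image):
    """Grow regions from skeleton 'seed regions' of a rough foreground mask;
    a gradient-driven watershed stands in for region competition."""
    rough = image > threshold_otsu(image)      # rough tissue mask (assumption)
    seeds = skeletonize(rough)                 # morphological a priori seed regions
    markers, _ = ndi.label(seeds)              # one marker per connected skeleton piece
    gradient = sobel(image)                    # boundary-localising term
    return watershed(gradient, markers=markers, mask=rough)
```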

  18. Fast vessel segmentation in retinal images using multi-scale enhancement and second-order local entropy

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.

    2012-03-01

    Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate segmentation of retinal blood vessels is an essential task in automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. The method uses morphological pre-processing to reduce the disturbance caused by bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and on the publicly available high-resolution fundus image database from Friedrich-Alexander University Erlangen-Nuremberg (Germany). The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods, with competitive and faster execution on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach in both performance and speed. This efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in automated analysis of retinal images.
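
    A rough approximation of the vessel-map stage can be written with scikit-image's Frangi filter, which is likewise built on multi-scale Hessian eigenvalues; for brevity a global Otsu threshold replaces the paper's second-order local entropy thresholding, and the scale range and size filter below are assumptions.

```python
from skimage.filters import frangi, threshold_otsu
from skimage.morphology import remove_small_objects

def segment_retinal_vessels(rgb_image, sigmas=range(1, 8, 2), min_size=50):
    """Multi-scale Hessian vesselness (Frangi) on the green channel followed
    by a global threshold; small blobs are removed as crude lesion rejection."""
    green = rgb_image[..., 1].astype(float)    # green channel has the best vessel contrast
    vesselness = frangi(green, sigmas=sigmas, black_ridges=True)
    binary = vesselness > threshold_otsu(vesselness)
    return remove_small_objects(binary, min_size=min_size)
```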

  19. Integrated approach to multimodal media content analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-12-01

    In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. A semantic segmentation of the audio signal based on audio content analysis is then conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed caption. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective and that the integrated framework achieves satisfactory results for video information filtering and retrieval.

  20. Mathematical morphology for automated analysis of remotely sensed objects in radar images

    NASA Technical Reports Server (NTRS)

    Daida, Jason M.; Vesecky, John F.

    1991-01-01

    A symbiosis of pyramidal segmentation and morphological transformation is described. The pyramidal segmentation portion of the symbiosis resulted in a low (2.6 percent) misclassification error rate for a one-look simulation; other simulations indicate lower error rates (1.8 percent for a four-look image). The morphological transformation portion resulted in meaningful partitions with a minimal loss of fractal boundary information. An unpublished version of Thicken, suitable for watershed transformations of fractal objects, is also presented. It is demonstrated that the proposed symbiosis works with SAR (synthetic aperture radar) images, in this case a four-look Seasat image of sea ice. It is concluded that the symbiotic forms of both segmentation and morphological transformation are well suited for unsupervised geophysical analysis.

  1. Segmentation quality evaluation using region-based precision and recall measures for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Xueliang; Feng, Xuezhi; Xiao, Pengfeng; He, Guangjun; Zhu, Liujun

    2015-04-01

    Segmentation of remote sensing images is a critical step in geographic object-based image analysis. Evaluating the performance of segmentation algorithms is essential to identify effective segmentation methods and optimize their parameters. In this study, we propose region-based precision and recall measures and use them to compare two image partitions for the purpose of evaluating segmentation quality. The two measures are calculated based on region overlapping and presented as a point or a curve in a precision-recall space, which can indicate segmentation quality in both geometric and arithmetic respects. Furthermore, the precision and recall measures are combined by using four different methods. We examine and compare the effectiveness of the combined indicators through geometric illustration, in an effort to reveal segmentation quality clearly and capture the trade-off between the two measures. In the experiments, we adopted the multiresolution segmentation (MRS) method for evaluation. The proposed measures are compared with four existing discrepancy measures to further confirm their capabilities. Finally, we suggest using a combination of the region-based precision-recall curve and the F-measure for supervised segmentation evaluation.
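
    The paper's exact overlap rules are not given in this record; the sketch below shows one plausible area-overlap formulation in which each segment is credited with its best-matching reference region (and, symmetrically, each reference region with its best-matching segment for recall), combined into the familiar F-measure.

```python
import numpy as np

def region_precision_recall(seg_labels, ref_labels):
    """Area-overlap precision/recall between a segmentation and a reference
    partition (both integer label images of the same shape)."""
    matched_seg = 0
    for s in np.unique(seg_labels):
        _, counts = np.unique(ref_labels[seg_labels == s], return_counts=True)
        matched_seg += counts.max()            # best-matching reference region
    precision = matched_seg / seg_labels.size
    matched_ref = 0
    for r in np.unique(ref_labels):
        _, counts = np.unique(seg_labels[ref_labels == r], return_counts=True)
        matched_ref += counts.max()            # best-matching segment
    recall = matched_ref / ref_labels.size
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure
```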

  2. Fast and robust segmentation of the striatum using deep convolutional neural networks.

    PubMed

    Choi, Hongyoon; Jin, Kyong Hwan

    2016-12-01

    Automated segmentation of brain structures is an important task in structural and functional image analysis. We developed a fast and accurate method for striatum segmentation using deep convolutional neural networks (CNNs). T1 magnetic resonance (MR) images were used for our CNN-based segmentation, which requires neither image feature extraction nor nonlinear transformation. We employed two serial CNNs, a Global CNN and a Local CNN. The Global CNN determined approximate locations of the striatum by regressing input MR images onto smoothed segmentation maps of the striatum. From the output volume of the Global CNN, cropped MR volumes that included the striatum were extracted; these cropped volumes and the Global CNN output were used as inputs to the Local CNN, which predicted the label of every voxel. Segmentation results were compared with a widely used segmentation method, FreeSurfer. Our method showed a higher Dice similarity coefficient (DSC) (0.893±0.017 vs. 0.786±0.015) and precision score (0.905±0.018 vs. 0.690±0.022) than FreeSurfer-based striatum segmentation (p=0.06). Our approach was also tested on an independent dataset, where it showed a high DSC (0.826±0.038) comparable with that of FreeSurfer. Overall, the segmentation performance of our proposed method was comparable with that of FreeSurfer, while the running time of our approach was approximately three seconds. We suggest that this fast and accurate deep CNN-based segmentation for small brain structures can be widely applied to brain image analysis. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Cerebral vessels segmentation for light-sheet microscopy image using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Hu, Chaoen; Hui, Hui; Wang, Shuo; Dong, Di; Liu, Xia; Yang, Xin; Tian, Jie

    2017-03-01

    Cerebral vessel segmentation is an important step in image analysis for brain function and brain disease studies. To extract all cerebrovascular patterns, including arteries and capillaries, filter-based methods are often used to segment vessels. However, the design of accurate and robust vessel segmentation algorithms remains challenging due to the variety and complexity of the images, especially in cerebral blood vessel segmentation. In this work, we addressed the problem of automatic and robust segmentation of cerebral micro-vessel structures in mouse cerebrovascular images acquired with a light-sheet microscope. To segment micro-vessels in large-scale image data, we proposed a convolutional neural network (CNN) architecture trained on 1.58 million manually labelled pixels. Three convolutional layers and one fully connected layer were used in the CNN model. We extracted patches of 32x32 pixels from each acquired brain vessel image as the training data fed into the CNN for classification. The network was trained to output the probability that the center pixel of an input patch belongs to a vessel structure. To build the CNN model, a series of mouse brain vascular images acquired from a commercial light-sheet fluorescence microscopy (LSFM) system were used for training. The experimental results demonstrate that our approach is a promising method for effectively segmenting micro-vessel structures in cerebrovascular images with vessel-dense, non-uniform gray-level and long-scale contrast regions.
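
    The abstract specifies three convolutional layers, one fully connected layer, 32x32 input patches and a probabilistic output for the centre pixel, but not the filter counts, pooling or training details; those below are assumptions in a minimal PyTorch sketch.

```python
import torch
import torch.nn as nn

class PatchVesselNet(nn.Module):
    """Three convolutional layers + one fully connected layer mapping a 32x32
    grey-level patch to the probability that its centre pixel is vessel.
    Filter counts and pooling are assumptions; the abstract does not give them."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(64 * 8 * 8, 1)

    def forward(self, x):                      # x: (N, 1, 32, 32)
        x = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(x))

# usage sketch: probability that the centre pixel of each patch is a vessel
model = PatchVesselNet()
patches = torch.rand(8, 1, 32, 32)             # stand-in for extracted patches
prob_vessel = model(patches)                   # shape (8, 1)
```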

  4. Shared-hole graph search with adaptive constraints for 3D optic nerve head optical coherence tomography image segmentation

    PubMed Central

    Yu, Kai; Shi, Fei; Gao, Enting; Zhu, Weifang; Chen, Haoyu; Chen, Xinjian

    2018-01-01

    The optic nerve head (ONH) is a crucial region for glaucoma detection and tracking based on spectral domain optical coherence tomography (SD-OCT) images. In this region, the existence of a “hole” structure makes retinal layer segmentation and analysis very challenging. To improve retinal layer segmentation, we propose a 3D method for ONH-centered SD-OCT image segmentation based on a modified graph search algorithm with shared-hole and locally adaptive constraints. With the proposed method, both the optic disc boundary and nine retinal surfaces can be accurately segmented in SD-OCT images. An overall mean unsigned border positioning error of 7.27 ± 5.40 µm was achieved for layer segmentation, and a mean Dice coefficient of 0.925 ± 0.03 was achieved for optic disc region detection. PMID:29541497

  5. Adaptive segmentation of nuclei in H&E stained tendon microscopy

    NASA Astrophysics Data System (ADS)

    Chuang, Bo-I.; Wu, Po-Ting; Hsu, Jian-Han; Jou, I.-Ming; Su, Fong-Chin; Sun, Yung-Nien

    2015-12-01

    Tendinopathy has become a common clinical issue in recent years. In most cases, such as trigger finger or tennis elbow, the pathological change can be observed under H and E stained tendon microscopy. However, qualitative analysis is subjective, and the results therefore depend heavily on the observers. We developed an automatic segmentation procedure which segments and counts the nuclei in H and E stained tendon microscopy quickly and precisely. The procedure first determines the complexity of an image and then segments the nuclei accordingly. For complex images, the proposed method adopts sampling-based thresholding to segment the nuclei, while for simple images, Laplacian-based thresholding is employed to re-segment the nuclei more accurately. In the experiments, the proposed method is compared with the experts' outlined results. The nuclei count of the proposed method is close to the experts' count, and the processing time of the proposed method is much shorter than the experts'.
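
    The sampling-based and Laplacian-based thresholding steps are not detailed in this record; as a simplified stand-in, the sketch below counts nuclei by thresholding the dark, stained pixels with Otsu's method and labelling connected components.

```python
from scipy import ndimage as ndi
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

def count_nuclei(rgb_image, min_size=20):
    """Count nuclei in an H&E stained image: nuclei stain darker than the
    surrounding tendon matrix, so threshold the dark pixels and label them."""
    grey = rgb2gray(rgb_image)
    nuclei = grey < threshold_otsu(grey)                   # dark (stained) pixels
    nuclei = remove_small_objects(nuclei, min_size=min_size)
    labelled, count = ndi.label(nuclei)
    return count, labelled
```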

  6. Contextually guided very-high-resolution imagery classification with semantic segments

    NASA Astrophysics Data System (ADS)

    Zhao, Wenzhi; Du, Shihong; Wang, Qiao; Emery, William J.

    2017-10-01

    Contextual information, revealing relationships and dependencies between image objects, is among the most important information for the successful interpretation of very-high-resolution (VHR) remote sensing imagery. Over the last decade, the geographic object-based image analysis (GEOBIA) technique has been widely used to first divide images into homogeneous parts and then assign semantic labels according to the properties of the image segments. However, due to the complexity and heterogeneity of VHR images, segments without semantic labels (i.e., semantic-free segments) generated with low-level features often fail to represent geographic entities (for example, building roofs are usually partitioned into chimney/antenna/shadow parts). As a result, it is hard to capture contextual information across geographic entities when using semantic-free segments. In contrast to low-level features, "deep" features can be used to build robust segments with accurate labels (i.e., semantic segments) that represent geographic entities at higher levels. Based on these semantic segments, semantic graphs can be constructed to capture contextual information in VHR images. In this paper, semantic segments were first explored with convolutional neural networks (CNN), and a conditional random field (CRF) model was then applied to model the contextual information between semantic segments. Experimental results on two challenging VHR datasets (the Vaihingen and Beijing scenes) indicate that the proposed method improves on existing image classification techniques in classification performance (overall accuracy ranges from 82% to 96%).

  7. A modified approach combining FNEA and watershed algorithms for segmenting remotely-sensed optical images

    NASA Astrophysics Data System (ADS)

    Liu, Likun

    2018-01-01

    In the field of remote sensing image processing, image segmentation is a preliminary step for later analysis, semi-automatic human interpretation, and fully automatic machine recognition and learning. Since 2000, the object-oriented approach to remote sensing image processing has prevailed; its core is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on the study and improvement of that algorithm: existing segmentation algorithms are analyzed and the watershed algorithm is selected as the optimal initialization. The algorithm is then modified by adjusting an area parameter and further combining the area parameter with a heterogeneity parameter. Several experiments are carried out to show that the modified FNEA algorithm achieves better segmentation results than a traditional pixel-based method (an FCM algorithm based on neighborhood information) and than the plain combination of FNEA and watershed.

  8. A Unified Mathematical Approach to Image Analysis.

    DTIC Science & Technology

    1987-08-31

    [Abstract fragment] Describes four instances of the paradigm in detail. Directions for ongoing and future research are also indicated. Keywords: image processing; algorithms; segmentation; boundary detection; tomography; global image analysis.

  9. Segmentation of Brain Lesions in MRI and CT Scan Images: A Hybrid Approach Using k-Means Clustering and Image Morphology

    NASA Astrophysics Data System (ADS)

    Agrawal, Ritu; Sharma, Manisha; Singh, Bikesh Kumar

    2018-04-01

    Manual segmentation and analysis of lesions in medical images is time-consuming and subject to human error. Automated segmentation has thus gained significant attention in recent years. This article presents a hybrid approach for brain lesion segmentation in different imaging modalities that combines median filtering, k-means clustering, Sobel edge detection and morphological operations. Median filtering is an essential pre-processing step used to remove impulsive noise from the acquired brain images; it is followed by k-means segmentation, Sobel edge detection and morphological processing. The performance of the proposed automated system is tested on standard datasets using measures such as segmentation accuracy and execution time. The proposed method achieves a high accuracy of 94% when compared with manual delineation performed by an expert radiologist. Furthermore, statistical significance tests between lesions segmented by the automated approach and by expert delineation, using ANOVA and the correlation coefficient, yielded high values of 0.986 and 1, respectively. The experimental results are discussed in light of some recently reported studies.
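
    A compact sketch of such a hybrid pipeline is given below using SciPy, scikit-image and scikit-learn; the cluster count, the choice of the brightest cluster as the lesion, and the structuring element are assumptions rather than the authors' settings.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.morphology import disk, remove_small_objects
from sklearn.cluster import KMeans

def segment_lesion(image, n_clusters=3, min_size=100):
    """Hybrid pipeline sketch: median filtering, k-means intensity clustering,
    morphological clean-up and Sobel edges (cluster count is an assumption)."""
    denoised = ndi.median_filter(image, size=3)                  # impulsive-noise removal
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(denoised.reshape(-1, 1)).reshape(image.shape)
    lesion_cluster = np.argmax(km.cluster_centers_.ravel())      # assume lesion = brightest cluster
    mask = labels == lesion_cluster
    mask = ndi.binary_closing(mask, structure=disk(2))           # morphological refinement
    mask = remove_small_objects(mask, min_size=min_size)
    edges = sobel(mask.astype(float))                            # lesion boundary map
    return mask, edges
```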

  10. Comparison of segmentation algorithms for fluorescence microscopy images of cells.

    PubMed

    Dima, Alden A; Elliott, John T; Filliben, James J; Halter, Michael; Peskin, Adele; Bernal, Javier; Kociolek, Marcin; Brady, Mary C; Tang, Hai C; Plant, Anne L

    2011-07-01

    The analysis of fluorescence microscopy of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in the results of segmentation was observed that was due solely to differences in imaging conditions or applications of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree of underestimating or overestimating a cell object. The results show that commonly used threshold-based segmentation techniques are less accurate than k-means clustering with multiple clusters. Segmentation accuracy varies with imaging conditions that determine the sharpness of cell edges and with geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability. Published 2011 Wiley-Liss, Inc.

  11. Quantification of regional fat volume in rat MRI

    NASA Astrophysics Data System (ADS)

    Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren

    2003-05-01

    Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.

  12. Blood vessel segmentation algorithms - Review of methods, datasets and evaluation metrics.

    PubMed

    Moccia, Sara; De Momi, Elena; El Hadji, Sara; Mattos, Leonardo S

    2018-05-01

    Blood vessel segmentation is a topic of high interest in medical image analysis since the analysis of vessels is crucial for diagnosis, treatment planning and execution, and evaluation of clinical outcomes in different fields, including laryngology, neurosurgery and ophthalmology. Automatic or semi-automatic vessel segmentation can support clinicians in performing these tasks. Different medical imaging techniques are currently used in clinical practice, and an appropriate choice of segmentation algorithm is mandatory to deal with the characteristics of the adopted imaging technique (e.g. resolution, noise and vessel contrast). This paper reviews the most recent and innovative blood vessel segmentation algorithms. Among the algorithms and approaches considered, we investigated in depth the most novel blood vessel segmentation methods, including machine learning, deformable model, and tracking-based approaches. The paper analyzes more than 100 articles focused on blood vessel segmentation methods. For each analyzed approach, summary tables report the imaging technique used, the anatomical region and the performance measures employed. Benefits and disadvantages of each method are highlighted. Despite constant progress and effort in the field, several issues still need to be overcome. A relevant limitation is the segmentation of pathological vessels; unfortunately, no consistent research effort has been devoted to this issue yet. Research is needed because some of the main assumptions made for healthy vessels (such as linearity and circular cross-section) do not hold in pathological tissues, which instead require new vessel model formulations. Moreover, image intensity drops, noise and low contrast still represent important obstacles to high-quality enhancement. This is particularly true for optical imaging, where image quality is usually lower in terms of noise and contrast than in magnetic resonance and computed tomography angiography. No single segmentation approach is suitable for all anatomical regions or imaging modalities; thus the primary goal of this review is to provide an up-to-date source of information about the state of the art of vessel segmentation algorithms, so that the most suitable method can be chosen for a specific task. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality; several segmentation schemes achieve good quality, but their processing time is too long to deal with a large number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial since it accounts for most of the processing time needed to segment an image. The main contribution of this work is to reduce the complexity of the decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e. images with expert pixel segmentation). Hybrid color space design is also used to improve the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between the recognition rate and the processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible; moreover, posterior class pixel probabilities are easy to estimate with Platt's method. A new segmentation scheme using probabilistic pixel classification has then been developed. This scheme has several free parameters whose automatic selection must be dealt with, but existing criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with expert pixel segmentation is required. Another important contribution of this paper is therefore the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimising the new cell segmentation quality criterion produces efficient cell segmentation.
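
    The vector quantization and hybrid colour space design are not reproduced here; the sketch below only illustrates the probabilistic pixel classification step, using scikit-learn's SVC with Platt scaling (probability=True) so that posterior class probabilities can be estimated per pixel.

```python
from sklearn.svm import SVC

def train_pixel_classifier(pixel_features, pixel_labels):
    """SVM pixel classifier; probability=True enables Platt scaling so that
    posterior class probabilities can be estimated for every pixel."""
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(pixel_features, pixel_labels)     # features: e.g. colour components per pixel
    return clf

def classify_image_pixels(clf, image):
    """Posterior probability of the class labelled 1 (e.g. 'cell') for every
    pixel of an H x W x C colour image."""
    h, w, c = image.shape
    proba = clf.predict_proba(image.reshape(-1, c))[:, 1]
    return proba.reshape(h, w)
```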

  14. Global analysis of microscopic fluorescence lifetime images using spectral segmentation and a digital micromirror spatial illuminator.

    PubMed

    Bednarkiewicz, Artur; Whelan, Maurice P

    2008-01-01

    Fluorescence lifetime imaging (FLIM) is very demanding from a technical and computational perspective, and the output is usually a compromise between acquisition/processing time and data accuracy and precision. We present a new approach to acquisition, analysis, and reconstruction of microscopic FLIM images by employing a digital micromirror device (DMD) as a spatial illuminator. In the first step, the whole field fluorescence image is collected by a color charge-coupled device (CCD) camera. Further qualitative spectral analysis and sample segmentation are performed to spatially distinguish between spectrally different regions on the sample. Next, the fluorescence of the sample is excited segment by segment, and fluorescence lifetimes are acquired with a photon counting technique. FLIM image reconstruction is performed by either raster scanning the sample or by directly accessing specific regions of interest. The unique features of the DMD illuminator allow the rapid on-line measurement of global good initial parameters (GIP), which are supplied to the first iteration of the fitting algorithm. As a consequence, a decrease of the computation time required to obtain a satisfactory quality-of-fit is achieved without compromising the accuracy and precision of the lifetime measurements.

  15. Deep Learning Nuclei Detection in Digitized Histology Images by Superpixels.

    PubMed

    Sornapudi, Sudhir; Stanley, Ronald Joe; Stoecker, William V; Almubarak, Haidar; Long, Rodney; Antani, Sameer; Thoma, George; Zuna, Rosemary; Frazier, Shelliane R

    2018-01-01

    Advances in image analysis and computational techniques have facilitated automatic detection of critical features in histopathology images. Detection of nuclei is critical for classification of squamous epithelium into cervical intraepithelial neoplasia (CIN) grades: normal, CIN1, CIN2, and CIN3. In this study, a deep learning (DL)-based nuclei segmentation approach is investigated that gathers localized information through the generation of superpixels using a simple linear iterative clustering (SLIC) algorithm and training with a convolutional neural network. The proposed approach was evaluated on a dataset of 133 digitized histology images and achieved an overall nuclei detection (object-based) accuracy of 95.97%, with demonstrated improvement over imaging-based and clustering-based benchmark techniques. The proposed DL-based nuclei segmentation method with superpixel analysis shows improved segmentation results in comparison to state-of-the-art methods.
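
    The superpixel-generation step can be sketched with scikit-image's SLIC implementation (recent versions); the segment count, compactness and the use of bounding boxes as CNN input patches are assumptions for illustration.

```python
from skimage.measure import regionprops
from skimage.segmentation import slic

def superpixel_patches(rgb_image, n_segments=400, compactness=10):
    """Generate SLIC superpixels and return each one's bounding box, from
    which localized patches could be cropped and fed to a CNN."""
    segments = slic(rgb_image, n_segments=n_segments,
                    compactness=compactness, start_label=1)
    boxes = [r.bbox for r in regionprops(segments)]   # (min_row, min_col, max_row, max_col)
    return segments, boxes
```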

  16. Hidden Markov random field model and Broyden-Fletcher-Goldfarb-Shanno algorithm for brain image segmentation

    NASA Astrophysics Data System (ADS)

    Guerrout, EL-Hachemi; Ait-Aoudia, Samy; Michelucci, Dominique; Mahiou, Ramdane

    2018-05-01

    Many routine medical examinations produce images of patients suffering from various pathologies. With the huge number of medical images, manual analysis and interpretation have become a tedious task, so automatic image segmentation has become essential for diagnosis assistance. Segmentation consists in dividing the image into homogeneous and significant regions. We focus on hidden Markov random fields (HMRF) to model the segmentation problem; this modelling leads to a classical function minimisation problem. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is one of the most powerful methods for solving unconstrained optimisation problems. In this paper, we investigate the combination of HMRF and the BFGS algorithm to perform the segmentation operation. The proposed method shows very good segmentation results compared with well-known approaches. The tests are conducted on brain magnetic resonance image databases (BrainWeb and IBSR) widely used for objective comparison of results. The well-known Dice coefficient (DC) was used as the similarity metric. The experimental results show that, in many cases, our proposed method approaches the perfect segmentation with a Dice coefficient above 0.9, and it generally outperforms other methods in the tests conducted.
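
    The authors' HMRF energy is not reproduced in this record. The sketch below minimises a relaxed two-class surrogate (Gaussian data term plus quadratic smoothness on a continuous label field) with SciPy's BFGS and thresholds the result; it is only practical for small images, since dense BFGS scales poorly with the number of pixels.

```python
import numpy as np
from scipy.optimize import minimize

def bfgs_two_class_segmentation(image, m0, m1, beta=1.0):
    """Relaxed two-class surrogate of an HMRF energy minimised with BFGS:
    E(u) = sum u*(I-m1)^2 + (1-u)*(I-m0)^2 + beta*||grad u||^2, thresholded
    at 0.5. Dense BFGS is only practical for small images."""
    d0, d1 = (image - m0) ** 2, (image - m1) ** 2

    def energy(u_flat):
        u = u_flat.reshape(image.shape)
        smooth = np.sum(np.diff(u, axis=0) ** 2) + np.sum(np.diff(u, axis=1) ** 2)
        return np.sum(u * d1 + (1 - u) * d0) + beta * smooth

    def gradient(u_flat):
        u = u_flat.reshape(image.shape)
        g = d1 - d0
        lap = np.zeros_like(u)                       # gradient of the smoothness term
        lap[:-1, :] += 2 * (u[:-1, :] - u[1:, :])
        lap[1:, :] += 2 * (u[1:, :] - u[:-1, :])
        lap[:, :-1] += 2 * (u[:, :-1] - u[:, 1:])
        lap[:, 1:] += 2 * (u[:, 1:] - u[:, :-1])
        return (g + beta * lap).ravel()

    u0 = (d0 > d1).astype(float).ravel()             # initialise with nearest-mean labels
    res = minimize(energy, u0, jac=gradient, method="BFGS")
    return res.x.reshape(image.shape) > 0.5
```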

  17. Novel methods for parameter-based analysis of myocardial tissue in MR images

    NASA Astrophysics Data System (ADS)

    Hennemuth, A.; Behrens, S.; Kuehnel, C.; Oeltze, S.; Konrad, O.; Peitgen, H.-O.

    2007-03-01

    The analysis of myocardial tissue with contrast-enhanced MR yields multiple parameters, which can be used to classify the examined tissue. Perfusion images are often distorted by motion, while late enhancement images are acquired with a different size and resolution. Therefore, it is common to reduce the analysis to a visual inspection, or to the examination of parameters related to the 17-segment-model proposed by the American Heart Association (AHA). As this simplification comes along with a considerable loss of information, our purpose is to provide methods for a more accurate analysis regarding topological and functional tissue features. In order to achieve this, we implemented registration methods for the motion correction of the perfusion sequence and the matching of the late enhancement information onto the perfusion image and vice versa. For the motion corrected perfusion sequence, vector images containing the voxel enhancement curves' semi-quantitative parameters are derived. The resulting vector images are combined with the late enhancement information and form the basis for the tissue examination. For the exploration of data we propose different modes: the inspection of the enhancement curves and parameter distribution in areas automatically segmented using the late enhancement information, the inspection of regions segmented in parameter space by user defined threshold intervals and the topological comparison of regions segmented with different settings. Results showed a more accurate detection of distorted regions in comparison to the AHA-model-based evaluation.

  18. AUTOMATED CELL SEGMENTATION WITH 3D FLUORESCENCE MICROSCOPY IMAGES.

    PubMed

    Kong, Jun; Wang, Fusheng; Teodoro, George; Liang, Yanhui; Zhu, Yangyang; Tucker-Burden, Carol; Brat, Daniel J

    2015-04-01

    A large number of cell-oriented cancer investigations require an effective and reliable cell segmentation method on three-dimensional (3D) fluorescence microscopic images for quantitative analysis of cell biological properties. In this paper, we present a fully automated cell segmentation method that can detect cells in 3D fluorescence microscopic images. Informed by fluorescence imaging techniques, we regularized the image gradient field by gradient vector flow (GVF) with interpolated and smoothed data volumes, and grouped voxels based on gradient modes identified by tracking the GVF field. Adaptive thresholding was then applied to voxels associated with the same gradient mode, where voxel intensities were enhanced by a multiscale cell filter. We applied the method to a large volume of 3D fluorescence imaging data of human brain tumor cells and observed (1) low false-detection and miss rates for individual cells and (2) negligible over- and under-segmentation incidences for clustered cells. Additionally, the concordance of cell morphometry between automated and manual segmentation was encouraging. These results suggest a promising 3D cell segmentation method applicable to cancer studies.

  19. 3D marker-controlled watershed for kidney segmentation in clinical CT exams.

    PubMed

    Wieclawek, Wojciech

    2018-02-27

    Image segmentation is an essential and non-trivial task in computer vision and medical image analysis. Computed tomography (CT) is one of the most accessible medical examination techniques for visualizing the interior of a patient's body. Among computer-aided diagnostic systems, applications dedicated to kidney segmentation represent a relatively small group, and literature solutions are typically verified on relatively small databases. The goal of this research is to develop a novel algorithm for fully automated kidney segmentation designed for large database analysis including both physiological and pathological cases. This study presents a 3D marker-controlled watershed transform developed and employed for fully automated CT kidney segmentation. The original and most complex step in the current proposition is the automatic generation of 3D marker images. The final kidney segmentation step is an analysis of the labelled image obtained from the marker-controlled watershed transform, consisting of morphological operations and shape analysis. The implementation was carried out in the MATLAB environment, Version 2017a, using, among others, the Image Processing Toolbox. 170 clinical CT abdominal studies were subjected to the analysis. The dataset includes normal as well as various pathological cases (agenesis, renal cysts, tumors, renal cell carcinoma, kidney cirrhosis, partial or radical nephrectomy, hematoma and nephrolithiasis). Manual and semi-automated delineations were used as a gold standard. Among 67 delineated medical cases, 62 cases are 'Very good' and only 5 are 'Good' according to Cohen's Kappa interpretation. The segmentation results show that the mean values of Sensitivity, Specificity, Dice, Jaccard, Cohen's Kappa and Accuracy are 90.29, 99.96, 91.68, 85.04, 91.62 and 99.89%, respectively. All 170 medical cases (with and without outlines) were classified by three independent medical experts as 'Very good' in 143-148 cases, as 'Good' in 15-21 cases and as 'Moderate' in 6-8 cases. An automatic kidney segmentation approach for CT studies that competes with commonly known solutions was developed. The algorithm gives promising results, confirmed during a validation procedure on a relatively large database of 170 CTs with both physiological and pathological cases.
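
    The original is a MATLAB implementation whose key contribution is the automatic 3D marker generation, which is not reproduced here; the Python sketch below shows the generic marker-controlled watershed pattern with markers taken from a thresholded distance transform.

```python
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def marker_controlled_watershed(volume):
    """Generic 3D marker-controlled watershed: markers are taken from the
    interior of a thresholded foreground via its distance transform."""
    foreground = volume > threshold_otsu(volume)
    distance = ndi.distance_transform_edt(foreground)
    markers, _ = ndi.label(distance > 0.5 * distance.max())   # conservative interior markers
    return watershed(-distance, markers=markers, mask=foreground)
```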

  20. Mathematical models used in segmentation and fractal methods of 2-D ultrasound images

    NASA Astrophysics Data System (ADS)

    Moldovanu, Simona; Moraru, Luminita; Bibicu, Dorin

    2012-11-01

    Mathematical models are widely used in biomedical computing. Data extracted from images using mathematical techniques are a pillar of scientific progress in experimental, clinical, biomedical and behavioural research. This article deals with the representation of 2-D images and highlights the mathematical support for the segmentation operation and for fractal analysis in ultrasound images. A large number of mathematical techniques can be applied during the image processing stage. The topics addressed cover edge-based segmentation, more precisely gradient-based edge detection and the active contour model, and region-based segmentation, namely the Otsu method. Another interesting mathematical approach consists of analyzing the images using the box counting method (BCM) to compute the fractal dimension. The results of the paper provide explicit examples produced by various combinations of methods.
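
    The box counting method mentioned above can be sketched directly: cover the binary contour image with boxes of decreasing size and fit the slope of log N(s) against log(1/s).

```python
import numpy as np

def box_counting_dimension(binary_image):
    """Fractal (box-counting) dimension of a non-empty binary contour image,
    estimated as the slope of log N(s) versus log(1/s)."""
    img = np.asarray(binary_image, dtype=bool)
    max_side = min(img.shape)
    sizes = [int(s) for s in 2 ** np.arange(1, int(np.log2(max_side)))]
    counts = []
    for s in sizes:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))   # occupied boxes of side s
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```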

  1. An Automatic Segmentation Method Combining an Active Contour Model and a Classification Technique for Detecting Polycomb-group Proteins in High-Throughput Microscopy Images.

    PubMed

    Gregoretti, Francesco; Cesarini, Elisa; Lanzuolo, Chiara; Oliva, Gennaro; Antonelli, Laura

    2016-01-01

    The large amount of data generated in biological experiments that rely on advanced microscopy can be handled only with automated image analysis. Most analyses require a reliable cell image segmentation eventually capable of detecting subcellular structures. We present an automatic segmentation method to detect Polycomb group (PcG) protein areas isolated from nuclei regions in high-resolution fluorescent cell image stacks. It combines two segmentation algorithms that use an active contour model and a classification technique, serving as a tool to better understand the subcellular three-dimensional distribution of PcG proteins in live cell image sequences. We obtained accurate results throughout several cell image datasets, coming from different cell types and corresponding to different fluorescent labels, without requiring elaborate adjustments to each dataset.

  2. Thigh muscle segmentation of chemical shift encoding-based water-fat magnetic resonance images: The reference database MyoSegmenTUM.

    PubMed

    Schlaeger, Sarah; Freitag, Friedemann; Klupp, Elisabeth; Dieckmeyer, Michael; Weidlich, Dominik; Inhuber, Stephanie; Deschauer, Marcus; Schoser, Benedikt; Bublitz, Sarah; Montagnese, Federica; Zimmer, Claus; Rummeny, Ernst J; Karampinos, Dimitrios C; Kirschke, Jan S; Baum, Thomas

    2018-01-01

    Magnetic resonance imaging (MRI) can non-invasively assess muscle anatomy, exercise effects and pathologies with different underlying causes such as neuromuscular diseases (NMD). Quantitative MRI including fat fraction mapping using chemical shift encoding-based water-fat MRI has emerged for reliable determination of muscle volume and fat composition. The data analysis of water-fat images requires segmentation of the different muscles which has been mainly performed manually in the past and is a very time consuming process, currently limiting the clinical applicability. An automatization of the segmentation process would lead to a more time-efficient analysis. In the present work, the manually segmented thigh magnetic resonance imaging database MyoSegmenTUM is presented. It hosts water-fat MR images of both thighs of 15 healthy subjects and 4 patients with NMD with a voxel size of 3.2x2x4 mm3 with the corresponding segmentation masks for four functional muscle groups: quadriceps femoris, sartorius, gracilis, hamstrings. The database is freely accessible online at https://osf.io/svwa7/?view_only=c2c980c17b3a40fca35d088a3cdd83e2. The database is mainly meant as ground truth which can be used as training and test dataset for automatic muscle segmentation algorithms. The segmentation allows extraction of muscle cross sectional area (CSA) and volume. Proton density fat fraction (PDFF) of the defined muscle groups from the corresponding images and quadriceps muscle strength measurements/neurological muscle strength rating can be used for benchmarking purposes.

  3. Histomolecular interpretation of pleomorphic adenomas of the salivary gland by matrix-assisted laser desorption ionization imaging and spatial segmentation.

    PubMed

    Ernst, Günther; Guntinas-Lichius, Orlando; Hauberg-Lotte, Lena; Trede, Dennis; Becker, Michael; Alexandrov, Theodore; von Eggeling, Ferdinand

    2015-07-01

    Despite efforts in localization of key proteins using immunohistochemistry, the complex proteomic composition of pleomorphic adenomas has not yet been characterized. Matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI imaging) allows label-free and spatially resolved detection of hundreds of proteins directly from tissue sections and of histomorphological regions by finding colocalized molecular signals. Spatial segmentation of MALDI imaging data is an algorithmic method for finding regions of similar proteomic composition as functionally similar regions. We investigated 2 pleomorphic adenomas by applying spatial segmentation to the MALDI imaging data of tissue sections. The spatial segmentation subdivided the tissue in a good accordance with the tissue histology. Numerous molecular signals colocalized with histologically defined tissue regions were found. Our study highlights the cellular transdifferentiation within the pleomorphic adenoma. It could be shown that spatial segmentation of MALDI imaging data is a promising approach in the emerging field of digital histological analysis and characterization of tumors. © 2014 Wiley Periodicals, Inc.

  4. Robust crop and weed segmentation under uncontrolled outdoor illumination

    USDA-ARS?s Scientific Manuscript database

    A new machine vision algorithm for weed detection was developed from RGB color model images. Processes included in the detection algorithm were excess green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filter, ...
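
    A minimal sketch of the first steps (the adaptive thresholding and the later stages of the algorithm are not reproduced), using the excess-green index and a global Otsu threshold as a stand-in for the statistically computed threshold:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu

def segment_vegetation(rgb_image):
    """Excess-green index (ExG = 2G - R - B) followed by a global threshold
    and median filtering; Otsu stands in for the statistical threshold."""
    rgb = rgb_image.astype(float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b
    mask = exg > threshold_otsu(exg)
    return ndi.median_filter(mask.astype(np.uint8), size=3).astype(bool)
```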

  5. Echogenicity based approach to detect, segment and track the common carotid artery in 2D ultrasound images.

    PubMed

    Narayan, Nikhil S; Marziliano, Pina

    2015-08-01

    Automatic detection and segmentation of the common carotid artery in transverse ultrasound (US) images of the thyroid gland play a vital role in the success of US guided intervention procedures. We propose in this paper a novel method to accurately detect, segment and track the carotid in 2D and 2D+t US images of the thyroid gland using concepts based on tissue echogenicity and ultrasound image formation. We first segment the hypoechoic anatomical regions of interest using local phase and energy in the input image. We then make use of a Hessian based blob like analysis to detect the carotid within the segmented hypoechoic regions. The carotid artery is segmented by making use of a least squares ellipse fit for the edge points around the detected carotid candidate. Experiments performed on a multivendor dataset of 41 images show that the proposed algorithm can segment the carotid artery with high sensitivity (99.6 ± 0.2%) and specificity (92.9 ± 0.1%). Further experiments on a public database containing 971 images of the carotid artery showed that the proposed algorithm can achieve a detection accuracy of 95.2%, with a 2% increase in performance when compared to the state-of-the-art method.

  6. Cell nuclei segmentation in fluorescence microscopy images using inter- and intra-region discriminative information.

    PubMed

    Song, Yang; Cai, Weidong; Feng, David Dagan; Chen, Mei

    2013-01-01

    Automated segmentation of cell nuclei in microscopic images is critical to high throughput analysis of the ever increasing amount of data. Although cell nuclei are generally visually distinguishable for human, automated segmentation faces challenges when there is significant intensity inhomogeneity among cell nuclei or in the background. In this paper, we propose an effective method for automated cell nucleus segmentation using a three-step approach. It first obtains an initial segmentation by extracting salient regions in the image, then reduces false positives using inter-region feature discrimination, and finally refines the boundary of the cell nuclei using intra-region contrast information. This method has been evaluated on two publicly available datasets of fluorescence microscopic images with 4009 cells, and has achieved superior performance compared to popular state of the art methods using established metrics.

  7. Concealed object segmentation and three-dimensional localization with passive millimeter-wave imaging

    NASA Astrophysics Data System (ADS)

    Yeom, Seokwon

    2013-05-01

    Millimeter-wave imaging draws increasing attention in security applications for weapon detection under clothing. In this paper, concealed object segmentation and three-dimensional localization schemes are reviewed. A concealed object is segmented by the k-means algorithm. A feature-based stereo-matching method estimates the longitudinal distance of the concealed object; the distance is estimated from the disparity between the corresponding centers of the segmented objects. Experimental results are provided together with an analysis of the depth resolution.
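
    Assuming a rectified stereo pair and the standard pinhole relation Z = f·B/d, the longitudinal-distance step can be sketched as follows; the focal length and baseline in the usage line are placeholders, not values from the paper.

```python
def depth_from_center_disparity(center_left_px, center_right_px,
                                focal_length_px, baseline_m):
    """Longitudinal distance of a segmented object from the horizontal
    disparity of its centers in a rectified stereo pair: Z = f * B / d."""
    disparity = center_left_px - center_right_px        # pixels
    if disparity <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity

# example with assumed camera parameters (not values from the paper)
z = depth_from_center_disparity(412.0, 396.5, focal_length_px=900.0, baseline_m=0.25)
```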

  8. Semi-automatic segmentation of brain tumors using population and individual information.

    PubMed

    Wu, Yao; Yang, Wei; Jiang, Jun; Li, Shuanqian; Feng, Qianjin; Chen, Wufan

    2013-08-01

    Efficient segmentation of tumors in medical images is of great practical importance for early diagnosis and radiation treatment planning. This paper proposes a novel semi-automatic segmentation method based on population and individual statistical information to segment brain tumors in magnetic resonance (MR) images. First, high-dimensional image features are extracted. Neighborhood components analysis is proposed to learn two optimal distance metrics, which contain population and patient-specific information, respectively. The probability of each pixel belonging to the foreground (tumor) or the background is estimated by a k-nearest neighbor classifier under the learned optimal distance metrics. A cost function for segmentation is constructed from these probabilities and optimized using graph cuts. Finally, some morphological operations are performed to improve the achieved segmentation results. Our dataset consists of 137 brain MR images, including 68 for training and 69 for testing. The proposed method overcomes segmentation difficulties caused by the uneven gray-level distribution of the tumors and can even achieve satisfactory results when the tumors have fuzzy edges. Experimental results demonstrate that the proposed method is robust for brain tumor segmentation.

  9. High content image analysis for human H4 neuroglioma cells exposed to CuO nanoparticles.

    PubMed

    Li, Fuhai; Zhou, Xiaobo; Zhu, Jinmin; Ma, Jinwen; Huang, Xudong; Wong, Stephen T C

    2007-10-09

    High content screening (HCS)-based image analysis is becoming an important and widely used research tool. Capitalizing on this technology, ample cellular information can be extracted from high content cellular images. In this study, an automated, reliable and quantitative cellular image analysis system developed in house was employed to quantify the toxic responses of human H4 neuroglioma cells exposed to metal oxide nanoparticles; this system proved to be an essential tool in our study. Cellular images of H4 neuroglioma cells exposed to different concentrations of CuO nanoparticles were acquired using the IN Cell Analyzer 1000. A fully automated cellular image analysis system was developed to perform the image analysis for cell viability. A multiple adaptive thresholding method was used to classify the pixels of the nuclei image into three classes: bright nuclei, dark nuclei, and background. During the development of our image analysis methodology, we achieved the following: (1) Gaussian filtering at a proper scale was applied to the cellular images to generate a local intensity maximum inside each nucleus; (2) a novel local intensity maxima detection method based on the gradient vector field was established; and (3) a statistical-model-based splitting method was proposed to overcome the under-segmentation problem. Computational results indicate that 95.9% of nuclei can be detected and segmented correctly by the proposed image analysis system. The proposed automated image analysis system can effectively segment images of human H4 neuroglioma cells exposed to CuO nanoparticles, and the computational results confirmed our biological finding that human H4 neuroglioma cells show a dose-dependent toxic response to CuO nanoparticles.

  10. Automated segmentation of retinal pigment epithelium cells in fluorescence adaptive optics images.

    PubMed

    Rangel-Fonseca, Piero; Gómez-Vieyra, Armando; Malacara-Hernández, Daniel; Wilson, Mario C; Williams, David R; Rossi, Ethan A

    2013-12-01

    Adaptive optics (AO) imaging methods allow the histological characteristics of retinal cell mosaics, such as photoreceptors and retinal pigment epithelium (RPE) cells, to be studied in vivo. The high-resolution images obtained with ophthalmic AO imaging devices are rich with information that is difficult and/or tedious to quantify using manual methods. Thus, robust, automated analysis tools that can provide reproducible quantitative information about the cellular mosaics under examination are required. Automated algorithms have been developed to detect the position of individual photoreceptor cells; however, most of these methods are not well suited for characterizing the RPE mosaic. We have developed an algorithm for RPE cell segmentation and show its performance here on simulated and real fluorescence AO images of the RPE mosaic. Algorithm performance was compared to manual cell identification and yielded better than 91% correspondence. This method can be used to segment RPE cells for morphometric analysis of the RPE mosaic and speed the analysis of both healthy and diseased RPE mosaics.

  11. A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images

    PubMed Central

    Tang, Yunwei; Jing, Linhai; Ding, Haifeng

    2017-01-01

    The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods. PMID:29064416

  12. Extraction of composite visual objects from audiovisual materials

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Thienot, Cedric; Faudemay, Pascal

    1999-08-01

    An effective analysis of Visual Objects appearing in still images and video frames is required in order to offer fine grain access to multimedia and audiovisual contents. In previous papers, we showed how our method for segmenting still images into visual objects could improve content-based image retrieval and video analysis methods. Visual Objects are used in particular for extracting semantic knowledge about the contents. However, low-level segmentation methods for still images are not likely to extract a complex object as a whole but instead as a set of several sub-objects. For example, a person would be segmented into three visual objects: a face, hair, and a body. In this paper, we introduce the concept of Composite Visual Object. Such an object is hierarchically composed of sub-objects called Component Objects.

  13. Volume Segmentation and Ghost Particles

    NASA Astrophysics Data System (ADS)

    Ziskin, Isaac; Adrian, Ronald

    2011-11-01

    Volume Segmentation Tomographic PIV (VS-TPIV) is a type of tomographic PIV in which images of particles in a relatively thick volume are segmented into images on a set of much thinner volumes that may be approximated as planes, as in 2D planar PIV. The planes of images can be analysed by standard mono-PIV, and the volume of flow vectors can be recreated by assembling the planes of vectors. The interrogation process is similar to a Holographic PIV analysis, except that the planes of image data are extracted from two-dimensional camera images of the volume of particles instead of three-dimensional holographic images. Like the tomographic PIV method using the MART algorithm, Volume Segmentation requires at least two cameras and works best with three or four. Unlike the MART method, Volume Segmentation does not require reconstruction of individual particle images one pixel at a time and it does not require an iterative process, so it operates much faster. As in all tomographic reconstruction strategies, ambiguities known as ghost particles are produced in the segmentation process. The effect of these ghost particles on the PIV measurement is discussed. This research was supported by Contract 79419-001-09, Los Alamos National Laboratory.

  14. Robust demarcation of basal cell carcinoma by dependent component analysis-based segmentation of multi-spectral fluorescence images.

    PubMed

    Kopriva, Ivica; Persin, Antun; Puizina-Ivić, Neira; Mirić, Lina

    2010-07-02

    This study was designed to demonstrate robust performance of a novel dependent component analysis (DCA)-based approach to demarcation of basal cell carcinoma (BCC) through unsupervised decomposition of the red-green-blue (RGB) fluorescent image of the BCC. Robustness to intensity fluctuation is due to the scale invariance property of DCA algorithms, which exploit spectral and spatial diversities between the BCC and the surrounding tissue. The filtering-based DCA approach used here represents an extension of independent component analysis (ICA) and is necessary in order to account for the statistical dependence induced by spectral similarity between the BCC and surrounding tissue. This similarity also generates weak edges, which pose a challenge for other segmentation methods as well. Through comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, non-negative matrix factorization, ICA and ratio imaging, we experimentally demonstrate good performance of DCA-based BCC demarcation in two demanding scenarios in which the intensity of the fluorescent image was varied by almost two orders of magnitude. Copyright 2010 Elsevier B.V. All rights reserved.
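
    A rough sketch of the ICA baseline mentioned in the comparison, using scikit-learn's FastICA on the RGB channels; the DCA extension with its filtering step is not reproduced here.

```python
# Each RGB pixel is treated as a 3-channel mixture and unmixed into spectral components.
import numpy as np
from sklearn.decomposition import FastICA

def unmix_rgb(rgb_image, n_components=3):
    h, w, c = rgb_image.shape
    mixtures = rgb_image.reshape(-1, c).astype(float)
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(mixtures)            # one spatial map per component
    return sources.reshape(h, w, n_components)
```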

  15. Multi-Modal Glioblastoma Segmentation: Man versus Machine

    PubMed Central

    Pica, Alessia; Schucht, Philippe; Beck, Jürgen; Verma, Rajeev Kumar; Slotboom, Johannes; Reyes, Mauricio; Wiest, Roland

    2014-01-01

    Background and Purpose Reproducible segmentation of brain tumors on magnetic resonance images is an important clinical need. This study was designed to evaluate the reliability of a novel fully automated segmentation tool for brain tumor image analysis in comparison to manually defined tumor segmentations. Methods We prospectively evaluated preoperative MR images from 25 glioblastoma patients. Two independent expert raters performed manual segmentations. Automatic segmentations were performed using the Brain Tumor Image Analysis software (BraTumIA). In order to study the different tumor compartments, the complete tumor volume TV (enhancing part plus non-enhancing part plus necrotic core of the tumor), the TV+ (TV plus edema) and the contrast enhancing tumor volume CETV were identified. We quantified the overlap between manual and automated segmentation by calculation of diameter measurements as well as the Dice coefficients, the positive predictive values, sensitivity, relative volume error and absolute volume error. Results Comparison of automated versus manual extraction of 2-dimensional diameter measurements showed no significant difference (p = 0.29). Comparison of automated versus manual volumetric segmentations showed significant differences for TV+ and TV (p<0.05) but no significant differences for CETV (p>0.05) with regard to the Dice overlap coefficients. Spearman's rank correlation coefficients (ρ) of TV+, TV and CETV showed highly significant correlations between automatic and manual segmentations. Tumor localization did not influence the accuracy of segmentation. Conclusions In summary, we demonstrated that BraTumIA supports radiologists and clinicians by providing accurate measures of cross-sectional diameter-based tumor extensions. The automated volume measurements were comparable to manual tumor delineation for CETV tumor volumes, and outperformed inter-rater variability for overlap and sensitivity. PMID:24804720
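
    For reference, a minimal sketch of the overlap and volume-error metrics referred to above, for boolean NumPy masks (manual vs. automated); this is generic metric code, not BraTumIA's.

```python
import numpy as np

def dice(a, b):
    # Dice overlap coefficient between two binary masks
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def relative_volume_error(auto_mask, manual_mask):
    # Signed volume error of the automatic mask relative to the manual reference
    return (auto_mask.sum() - manual_mask.sum()) / float(manual_mask.sum())
```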

  16. Filtering and left ventricle segmentation of the fetal heart in ultrasound images

    NASA Astrophysics Data System (ADS)

    Vargas-Quintero, Lorena; Escalante-Ramírez, Boris

    2013-11-01

    In this paper, we propose to use filtering methods and a segmentation algorithm for the analysis of the fetal heart in ultrasound images. Since speckle noise makes the analysis of ultrasound images difficult, the filtering process becomes a useful task in these types of applications. The filtering techniques considered in this work assume that the speckle noise is a random variable with a Rayleigh distribution. We use two multiresolution methods: one based on wavelet decomposition and another based on the Hermite transform. The filtering process is used as a way to strengthen the performance of the segmentation tasks. For the wavelet-based approach, a Bayesian estimator at the subband level is employed for pixel classification. The Hermite method computes a mask to find those pixels that are corrupted by speckle. Finally, we selected a method based on a deformable model, or "snake", to evaluate the influence of the filtering techniques on the segmentation of the left ventricle in fetal echocardiographic images.
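
    A loose scikit-image analog of the described pipeline (denoising followed by a snake), assuming denoise_wavelet and active_contour as stand-ins for the paper's wavelet/Hermite filters and deformable model; coordinates follow recent scikit-image conventions (row, col).

```python
import numpy as np
from skimage.restoration import denoise_wavelet
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def segment_lv(frame, center, radius, n_points=200):
    den = denoise_wavelet(frame, rescale_sigma=True)          # suppress speckle
    t = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([center[0] + radius * np.sin(t),   # circular snake initialization
                            center[1] + radius * np.cos(t)])
    return active_contour(gaussian(den, sigma=3), init,
                          alpha=0.015, beta=10, gamma=0.001)
```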

  17. A novel approach to segmentation and measurement of medical image using level set methods.

    PubMed

    Chen, Yao-Tien

    2017-06-01

    The study proposes a novel approach for segmentation and visualization, plus value-added surface area and volume measurements, for brain medical image analysis. The proposed method contains edge detection and Bayesian based level set segmentation, surface and volume rendering, and surface area and volume measurements for 3D objects of interest (i.e., brain tumor, brain tissue, or the whole brain). Two extensions based on edge detection and the Bayesian level set are first used to segment 3D objects. Ray casting and a modified marching cubes algorithm are then adopted to facilitate volume and surface visualization of the medical-image dataset. To provide physicians with more useful information for diagnosis, the surface area and volume of an examined 3D object are calculated using the techniques of linear algebra and surface integration. Experimental results are finally reported in terms of 3D object extraction, surface and volume rendering, and surface area and volume measurements for medical image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
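
    A hedged sketch of the measurement step only, assuming a binary 3-D mask and scikit-image's marching cubes; the level-set segmentation and rendering stages are out of scope here.

```python
import numpy as np
from skimage.measure import marching_cubes, mesh_surface_area

def surface_area_and_volume(mask, spacing=(1.0, 1.0, 1.0)):
    # Surface area from a triangulated isosurface, volume from voxel counting
    verts, faces, _, _ = marching_cubes(mask.astype(float), level=0.5, spacing=spacing)
    area = mesh_surface_area(verts, faces)       # in squared spacing units
    volume = mask.sum() * np.prod(spacing)       # in cubed spacing units
    return area, volume
```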

  18. Evaluation of prognostic models developed using standardised image features from different PET automated segmentation methods.

    PubMed

    Parkinson, Craig; Foley, Kieran; Whybra, Philip; Hills, Robert; Roberts, Ashley; Marshall, Chris; Staffurth, John; Spezi, Emiliano

    2018-04-11

    Prognosis in oesophageal cancer (OC) is poor. The 5-year overall survival (OS) rate is approximately 15%. Personalised medicine is hoped to increase the 5- and 10-year OS rates. Quantitative analysis of PET is gaining substantial interest in prognostic research but requires the accurate definition of the metabolic tumour volume. This study compares prognostic models developed in the same patient cohort using individual PET segmentation algorithms and assesses the impact on patient risk stratification. Consecutive patients (n = 427) with biopsy-proven OC were included in final analysis. All patients were staged with PET/CT between September 2010 and July 2016. Nine automatic PET segmentation methods were studied. All tumour contours were subjectively analysed for accuracy, and segmentation methods with < 90% accuracy were excluded. Standardised image features were calculated, and a series of prognostic models were developed using identical clinical data. The proportion of patients changing risk classification group were calculated. Out of nine PET segmentation methods studied, clustering means (KM2), general clustering means (GCM3), adaptive thresholding (AT) and watershed thresholding (WT) methods were included for analysis. Known clinical prognostic factors (age, treatment and staging) were significant in all of the developed prognostic models. AT and KM2 segmentation methods developed identical prognostic models. Patient risk stratification was dependent on the segmentation method used to develop the prognostic model with up to 73 patients (17.1%) changing risk stratification group. Prognostic models incorporating quantitative image features are dependent on the method used to delineate the primary tumour. This has a subsequent effect on risk stratification, with patients changing groups depending on the image segmentation method used.
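
    As an illustration of one of the simpler methods named above, a minimal two-class k-means sketch in the spirit of KM2, assuming scikit-learn's KMeans; the study's actual segmentation implementations are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_mtv(suv_patch):
    # Cluster voxel uptake values into two groups; the higher-uptake cluster is
    # taken as the metabolic tumour volume.
    values = suv_patch.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(values)
    tumour_cluster = np.argmax(km.cluster_centers_.ravel())
    return (km.labels_ == tumour_cluster).reshape(suv_patch.shape)
```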

  19. CellSegm - a MATLAB toolbox for high-throughput 3D cell segmentation

    PubMed Central

    2013-01-01

    The application of fluorescence microscopy in cell biology often generates a huge amount of imaging data. Automated whole cell segmentation of such data enables the detection and analysis of individual cells, where a manual delineation is often time consuming, or practically not feasible. Furthermore, compared to manual analysis, automation normally has a higher degree of reproducibility. CellSegm, the software presented in this work, is a Matlab based command line software toolbox providing an automated whole cell segmentation of images showing surface stained cells, acquired by fluorescence microscopy. It has options for both fully automated and semi-automated cell segmentation. Major algorithmic steps are: (i) smoothing, (ii) Hessian-based ridge enhancement, (iii) marker-controlled watershed segmentation, and (iv) feature-based classification of cell candidates. Using a wide selection of image recordings and code snippets, we demonstrate that CellSegm has the ability to detect various types of surface stained cells in 3D. After detection and outlining of individual cells, the cell candidates can be subject to software based analysis, specified and programmed by the end-user, or they can be analyzed by other software tools. A segmentation of tissue samples with appropriate characteristics is also shown to be resolvable in CellSegm. The command-line interface of CellSegm facilitates scripting of the separate tools, all implemented in Matlab, offering a high degree of flexibility and tailored workflows for the end-user. The modularity and scripting capabilities of CellSegm enable automated workflows and quantitative analysis of microscopic data, suited for high-throughput image based screening. PMID:23938087

  20. CellSegm - a MATLAB toolbox for high-throughput 3D cell segmentation.

    PubMed

    Hodneland, Erlend; Kögel, Tanja; Frei, Dominik Michael; Gerdes, Hans-Hermann; Lundervold, Arvid

    2013-08-09

    The application of fluorescence microscopy in cell biology often generates a huge amount of imaging data. Automated whole cell segmentation of such data enables the detection and analysis of individual cells, where a manual delineation is often time consuming, or practically not feasible. Furthermore, compared to manual analysis, automation normally has a higher degree of reproducibility. CellSegm, the software presented in this work, is a Matlab based command line software toolbox providing an automated whole cell segmentation of images showing surface stained cells, acquired by fluorescence microscopy. It has options for both fully automated and semi-automated cell segmentation. Major algorithmic steps are: (i) smoothing, (ii) Hessian-based ridge enhancement, (iii) marker-controlled watershed segmentation, and (iv) feature-based classification of cell candidates. Using a wide selection of image recordings and code snippets, we demonstrate that CellSegm has the ability to detect various types of surface stained cells in 3D. After detection and outlining of individual cells, the cell candidates can be subject to software based analysis, specified and programmed by the end-user, or they can be analyzed by other software tools. A segmentation of tissue samples with appropriate characteristics is also shown to be resolvable in CellSegm. The command-line interface of CellSegm facilitates scripting of the separate tools, all implemented in Matlab, offering a high degree of flexibility and tailored workflows for the end-user. The modularity and scripting capabilities of CellSegm enable automated workflows and quantitative analysis of microscopic data, suited for high-throughput image based screening.
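
    A rough Python/scikit-image analog of the algorithmic steps listed above; CellSegm itself is a MATLAB toolbox, so this is an illustrative sketch rather than its code, and the candidate-classification step is omitted.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, sato, threshold_otsu
from skimage.measure import label
from skimage.segmentation import watershed

def segment_surface_stained_cells(image, sigma=1.0):
    smoothed = gaussian(image, sigma=sigma)                 # (i) smoothing
    ridges = sato(smoothed, black_ridges=False)             # (ii) bright membrane ridges
    membranes = ridges > threshold_otsu(ridges)
    interiors = label(~membranes)                           # markers: regions between membranes
    distance = ndi.distance_transform_edt(~membranes)
    return watershed(-distance, markers=interiors, mask=~membranes)  # (iii) marker-controlled watershed
```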

  1. Local and global evaluation for remote sensing image segmentation

    NASA Astrophysics Data System (ADS)

    Su, Tengfei; Zhang, Shengwei

    2017-08-01

    In object-based image analysis, producing accurate segmentation is usually a very important issue that needs to be solved before image classification or target recognition. The study of segmentation evaluation methods is key to solving this issue. Almost all of the existing evaluation strategies focus only on global performance assessment. However, these methods are ineffective in the situation where two segmentation results with very similar overall performance have very different local error distributions. To overcome this problem, this paper presents an approach that can both locally and globally quantify segmentation incorrectness. In doing so, region-overlapping metrics are utilized to quantify each reference geo-object's over- and under-segmentation error. These quantified error values are used to produce segmentation error maps, which have effective illustrative power to delineate local segmentation error patterns. The error values for all of the reference geo-objects are aggregated through area-weighted summation, so that global indicators can be derived. An experiment using two scenes of very different high resolution images showed that the global evaluation part of the proposed approach was almost as effective as the other two global evaluation methods, and the local part was a useful complement for comparing different segmentation results.
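
    A minimal sketch of per-object over- and under-segmentation errors with area-weighted aggregation, in the spirit of common region-overlapping metrics; the paper's exact formulas may differ.

```python
import numpy as np

def object_errors(ref_mask, seg_labels):
    """ref_mask: boolean mask of one reference geo-object; seg_labels: label image."""
    ids, counts = np.unique(seg_labels[ref_mask], return_counts=True)
    best = ids[np.argmax(counts)]                     # segment with the largest overlap
    inter = counts.max()
    over = 1.0 - inter / ref_mask.sum()               # reference split across segments
    under = 1.0 - inter / (seg_labels == best).sum()  # segment spills past the reference
    return over, under

def global_score(ref_masks, seg_labels):
    areas = np.array([m.sum() for m in ref_masks], dtype=float)
    errs = np.array([object_errors(m, seg_labels) for m in ref_masks])
    weights = areas / areas.sum()
    return (errs * weights[:, None]).sum(axis=0)      # area-weighted (over, under)
```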

  2. Analysis of gene expression levels in individual bacterial cells without image segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwak, In Hae; Son, Minjun; Hagen, Stephen J., E-mail: sjhagen@ufl.edu

    2012-05-11

    Highlights: We present a method for extracting gene expression data from images of bacterial cells. The method does not employ cell segmentation and does not require high magnification. Fluorescence and phase contrast images of the cells are correlated through the physics of phase contrast. We demonstrate the method by characterizing noisy expression of comX in Streptococcus mutans. -- Abstract: Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However, segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast vs fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly.
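
    A minimal sketch of the segmentation-free idea, assuming a hypothetical linear fit between phase-contrast and fluorescence pixel intensities inside a cluster region; the paper fits a physical model of phase contrast rather than a straight line.

```python
import numpy as np

def fit_expression_level(phase_img, fluor_img, cluster_mask):
    # Correlate the two modalities pixel by pixel within the cluster region
    x = phase_img[cluster_mask].astype(float)
    y = fluor_img[cluster_mask].astype(float)
    slope, intercept = np.polyfit(x, y, 1)    # fluorescence per unit phase-contrast signal
    return slope, intercept
```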

  3. A novel spinal kinematic analysis using X-ray imaging and vicon motion analysis: a case study.

    PubMed

    Noh, Dong K; Lee, Nam G; You, Joshua H

    2014-01-01

    This study highlights a novel spinal kinematic analysis method and the feasibility of X-ray imaging measurements to accurately assess thoracic spine motion. The advanced X-ray Nash-Moe method and analysis were used to compute the segmental range of motion in thoracic vertebra pedicles in vivo. This Nash-Moe X-ray imaging method was compared with a standardized method using the Vicon 3-dimensional motion capture system. Linear regression analysis showed an excellent and significant correlation between the two methods (R2 = 0.99, p < 0.05), suggesting that the analysis of spinal segmental range of motion using X-ray imaging measurements was accurate and comparable to the conventional 3-dimensional motion analysis system. Clinically, this novel finding is compelling evidence demonstrating that measurements with X-ray imaging are useful to accurately decipher pathological spinal alignment and movement impairments in idiopathic scoliosis (IS).

  4. Deep Learning Nuclei Detection in Digitized Histology Images by Superpixels

    PubMed Central

    Sornapudi, Sudhir; Stanley, Ronald Joe; Stoecker, William V.; Almubarak, Haidar; Long, Rodney; Antani, Sameer; Thoma, George; Zuna, Rosemary; Frazier, Shelliane R.

    2018-01-01

    Background: Advances in image analysis and computational techniques have facilitated automatic detection of critical features in histopathology images. Detection of nuclei is critical for squamous epithelium cervical intraepithelial neoplasia (CIN) classification into normal, CIN1, CIN2, and CIN3 grades. Methods: In this study, a deep learning (DL)-based nuclei segmentation approach is investigated based on gathering localized information through the generation of superpixels using a simple linear iterative clustering algorithm and training with a convolutional neural network. Results: The proposed approach was evaluated on a dataset of 133 digitized histology images and achieved an overall nuclei detection (object-based) accuracy of 95.97%, with demonstrated improvement over imaging-based and clustering-based benchmark techniques. Conclusions: The proposed DL-based nuclei segmentation method with superpixel analysis has shown improved segmentation results in comparison to state-of-the-art methods. PMID:29619277
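
    A one-step sketch of the superpixel generation assumed above, using scikit-image's SLIC implementation; the convolutional network and CIN grading are not shown.

```python
from skimage.segmentation import slic

def make_superpixels(rgb_patch, n_segments=400, compactness=10.0):
    # Simple linear iterative clustering over an RGB histology patch
    return slic(rgb_patch, n_segments=n_segments, compactness=compactness, start_label=1)
```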

  5. A wavelet-based Bayesian framework for 3D object segmentation in microscopy

    NASA Astrophysics Data System (ADS)

    Pan, Kangyu; Corrigan, David; Hillebrand, Jens; Ramaswami, Mani; Kokaram, Anil

    2012-03-01

    In confocal microscopy, target objects are labeled with fluorescent markers in the living specimen, and usually appear with irregular brightness in the observed images. Also, due to the existence of out-of-focus objects in the image, the segmentation of 3-D objects in the stack of image slices captured at different depth levels of the specimen still relies heavily on manual analysis. In this paper, a novel Bayesian model is proposed for segmenting 3-D synaptic objects from a given image stack. In order to solve the irregular brightness and out-of-focus problems, the segmentation model employs a likelihood using the luminance-invariant 'wavelet features' of image objects in the dual-tree complex wavelet domain as well as a likelihood based on the vertical intensity profile of the image stack in 3-D. Furthermore, a smoothness 'frame' prior based on a priori knowledge of the connections of the synapses is introduced to the model to enhance the connectivity of the synapses. As a result, our model can successfully segment the in-focus target synaptic object from a 3D image stack with irregular brightness.

  6. Fully automatic registration and segmentation of first-pass myocardial perfusion MR image sequences.

    PubMed

    Gupta, Vikas; Hendriks, Emile A; Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2010-11-01

    Derivation of diagnostically relevant parameters from first-pass myocardial perfusion magnetic resonance images involves the tedious and time-consuming manual segmentation of the myocardium in a large number of images. To reduce the manual interaction and expedite the perfusion analysis, we propose an automatic registration and segmentation method for the derivation of perfusion linked parameters. A complete automation was accomplished by first registering misaligned images using a method based on independent component analysis, and then using the registered data to automatically segment the myocardium with active appearance models. We used 18 perfusion studies (100 images per study) for validation in which the automatically obtained (AO) contours were compared with expert drawn contours on the basis of point-to-curve error, Dice index, and relative perfusion upslope in the myocardium. Visual inspection revealed successful segmentation in 15 out of 18 studies. Comparison of the AO contours with expert drawn contours yielded 2.23 ± 0.53 mm and 0.91 ± 0.02 as point-to-curve error and Dice index, respectively. The average difference between manually and automatically obtained relative upslope parameters was found to be statistically insignificant (P = .37). Moreover, the analysis time per slice was reduced from 20 minutes (manual) to 1.5 minutes (automatic). We proposed an automatic method that significantly reduced the time required for analysis of first-pass cardiac magnetic resonance perfusion images. The robustness and accuracy of the proposed method were demonstrated by the high spatial correspondence and statistically insignificant difference in perfusion parameters, when AO contours were compared with expert drawn contours. Copyright © 2010 AUR. Published by Elsevier Inc. All rights reserved.

  7. Nucleus detection using gradient orientation information and linear least squares regression

    NASA Astrophysics Data System (ADS)

    Kwak, Jin Tae; Hewitt, Stephen M.; Xu, Sheng; Pinto, Peter A.; Wood, Bradford J.

    2015-03-01

    Computerized histopathology image analysis enables an objective, efficient, and quantitative assessment of digitized histopathology images. Such analysis often requires an accurate and efficient detection and segmentation of histological structures such as glands, cells and nuclei. The segmentation is used to characterize tissue specimens and to determine the disease status or outcomes. The segmentation of nuclei, in particular, is challenging due to the overlapping or clumped nuclei. Here, we propose a nuclei seed detection method for the individual and overlapping nuclei that utilizes the gradient orientation or direction information. The initial nuclei segmentation is provided by a multiview boosting approach. The angle of the gradient orientation is computed and traced for the nuclear boundaries. Taking the first derivative of the angle of the gradient orientation, high concavity points (junctions) are discovered. False junctions are found and removed by adopting a greedy search scheme with the goodness-of-fit statistic in a linear least squares sense. Then, the junctions determine boundary segments. Partial boundary segments belonging to the same nucleus are identified and combined by examining the overlapping area between them. Using the final set of the boundary segments, we generate the list of seeds in tissue images. The method achieved an overall precision of 0.89 and a recall of 0.88 in comparison to the manual segmentation.

  8. Hierarchical layered and semantic-based image segmentation using ergodicity map

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing

    2010-04-01

    Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior compared to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogenous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment where the segmented layered semantic objects include the basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces. Experimental results demonstrate the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.

  9. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT.

    PubMed

    Mazaheri, Samaneh; Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, this problem remains challenging due to low quality images with missing anatomical details or speckle noise and a restricted field of view. This paper presents a fusion method which particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve the image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts and enhance the signal to noise ratio of the echo images. The proposed algorithm weights the image information with an integration feature computed over all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been made between the results of some well-known techniques and the proposed method. Also, different metrics are implemented to evaluate the performance of the proposed algorithm. It has been concluded that the presented pixel-based method, based on the integration of PCA and DWT, gives the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics.

  10. The evaluation of single-view and multi-view fusion 3D echocardiography using image-driven segmentation and tracking.

    PubMed

    Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary

    2011-08-01

    Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor-quality nature of acquired images usually containing missing anatomical information, speckle noise, and limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced as acquiring multiple conventional single-view RT3DE images with small probe movements and fusing them together after alignment. This concept of multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit image quality and anatomical information present in the image, thus purposely not including any high-level constraints like prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Automatic pelvis segmentation from x-ray images of a mouse model

    NASA Astrophysics Data System (ADS)

    Al Okashi, Omar M.; Du, Hongbo; Al-Assam, Hisham

    2017-05-01

    The automatic detection and quantification of skeletal structures has a variety of different applications for biological research. Accurate segmentation of the pelvis from X-ray images of mice in a high-throughput project such as the Mouse Genomes Project not only saves time and cost but also helps achieve an unbiased quantitative analysis within the phenotyping pipeline. This paper proposes an automatic solution for pelvis segmentation based on structural and orientation properties of the pelvis in X-ray images. The solution consists of three stages including pre-processing the image to extract the pelvis area, initial pelvis mask preparation and final pelvis segmentation. Experimental results on a set of 100 X-ray images showed consistent performance of the algorithm. The automated solution overcomes the weaknesses of a manual annotation procedure where intra- and inter-observer variations cannot be avoided.

  12. Semi-automated discrimination of retinal pigmented epithelial cells in two-photon fluorescence images of mouse retinas.

    PubMed

    Alexander, Nathan S; Palczewska, Grazyna; Palczewski, Krzysztof

    2015-08-01

    Automated image segmentation is a critical step toward achieving a quantitative evaluation of disease states with imaging techniques. Two-photon fluorescence microscopy (TPM) has been employed to visualize the retinal pigmented epithelium (RPE) and provide images indicating the health of the retina. However, segmentation of RPE cells within TPM images is difficult due to small differences in fluorescence intensity between cell borders and cell bodies. Here we present a semi-automated method for segmenting RPE cells that relies upon multiple weak features that differentiate cell borders from the remaining image. These features were scored by a search optimization procedure that built up the cell border in segments around a nucleus of interest. With six images used as a test, our method correctly identified cell borders for 69% of nuclei on average. Performance was strongly dependent upon increasing retinosome content in the RPE. TPM image analysis has the potential of providing improved early quantitative assessments of diseases affecting the RPE.

  13. Surgical wound segmentation based on adaptive threshold edge detection and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shih, Hsueh-Fu; Ho, Te-Wei; Hsu, Jui-Tse; Chang, Chun-Che; Lai, Feipei; Wu, Jin-Ming

    2017-02-01

    Postsurgical wound care has a great impact on patients' prognosis. It often takes a few days, or even a few weeks, for the wound to stabilize, which incurs a great cost in health care and nursing resources. To assess the wound condition and support diagnosis, it is important to segment out the wound region for further analysis. However, such images often contain a complicated background and noise. In this study, we propose a wound segmentation algorithm based on the Canny edge detector and a genetic algorithm with an unsupervised evaluation function. The results were evaluated on 112 clinical images, and 94.3% of the images were correctly segmented. The judgment was based on the evaluation of experienced medical doctors. This capability to extract complete wound regions makes it possible to conduct further image analysis such as intelligent recovery evaluation and automatic infection assessment.
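
    A highly simplified sketch of the idea, assuming a mutation-only evolutionary search over Canny parameters and a hypothetical edge-density proxy as the unsupervised fitness; the authors' adaptive threshold edge detection and evaluation function are not reproduced.

```python
import numpy as np
from skimage.feature import canny

def fitness(edge_map):
    # Hypothetical stand-in for the unsupervised evaluation: prefer a moderate edge density.
    return -abs(edge_map.mean() - 0.05)

def evolve_canny_params(gray, pop_size=20, generations=15, seed=0):
    rng = np.random.default_rng(seed)
    lo_bound = np.array([0.3, 0.01, 0.05])      # sigma, low_threshold, high_threshold
    hi_bound = np.array([5.0, 0.30, 0.60])
    pop = rng.uniform(lo_bound, hi_bound, size=(pop_size, 3))
    pop[:, 1:] = np.sort(pop[:, 1:], axis=1)    # keep low_threshold <= high_threshold
    for _ in range(generations):
        scores = np.array([fitness(canny(gray, sigma=s, low_threshold=lo, high_threshold=hi))
                           for s, lo, hi in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]            # keep the fitter half
        children = parents[rng.integers(0, len(parents), size=pop_size - len(parents))]
        children = children + rng.normal(0.0, [0.2, 0.01, 0.02], size=children.shape)
        children = np.clip(children, lo_bound, hi_bound)
        children[:, 1:] = np.sort(children[:, 1:], axis=1)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(canny(gray, sigma=s, low_threshold=lo, high_threshold=hi))
                       for s, lo, hi in pop])
    return pop[np.argmax(scores)]               # best (sigma, low, high) found
```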

  14. TuMore: generation of synthetic brain tumor MRI data for deep learning based segmentation approaches

    NASA Astrophysics Data System (ADS)

    Lindner, Lydia; Pfarrkirchner, Birgit; Gsaxner, Christina; Schmalstieg, Dieter; Egger, Jan

    2018-03-01

    Accurate segmentation and measurement of brain tumors plays an important role in clinical practice and research, as it is critical for treatment planning and monitoring of tumor growth. However, brain tumor segmentation is one of the most challenging tasks in medical image analysis. Since manual segmentations are subjective, time consuming and neither accurate nor reliable, there exists a need for objective, robust and fast automated segmentation methods that provide competitive performance. Therefore, deep learning based approaches are gaining interest in the field of medical image segmentation. When the training data set is large enough, deep learning approaches can be extremely effective, but in domains like medicine, only limited data is available in the majority of cases. Due to this reason, we propose a method that allows to create a large dataset of brain MRI (Magnetic Resonance Imaging) images containing synthetic brain tumors - glioblastomas more specifically - and the corresponding ground truth, that can be subsequently used to train deep neural networks.

  15. SEGMENTATION OF MITOCHONDRIA IN ELECTRON MICROSCOPY IMAGES USING ALGEBRAIC CURVES.

    PubMed

    Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga

    2013-01-01

    High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.

  16. A medical imaging analysis system for trigger finger using an adaptive texture-based active shape model (ATASM) in ultrasound images

    PubMed Central

    Chuang, Bo-I; Kuo, Li-Chieh; Yang, Tai-Hua; Su, Fong-Chin; Jou, I-Ming; Lin, Wei-Jr; Sun, Yung-Nien

    2017-01-01

    Trigger finger has become a prevalent disease that greatly affects occupational activity and daily life. Ultrasound imaging is commonly used for the clinical diagnosis of trigger finger severity. Due to image property variations, traditional methods cannot effectively segment the finger joint’s tendon structure. In this study, an adaptive texture-based active shape model method is used for segmenting the tendon and synovial sheath. Adapted weights are applied in the segmentation process to adjust the contribution of energy terms depending on image characteristics at different positions. The pathology is then determined according to the wavelet and co-occurrence texture features of the segmented tendon area. In the experiments, the segmentation results have fewer errors, with respect to the ground truth, than contours drawn by regular users. The mean values of the absolute segmentation difference of the tendon and synovial sheath are 3.14 and 4.54 pixels, respectively. The average accuracy of pathological determination is 87.14%. The segmentation results are all acceptable in data of both clear and fuzzy boundary cases in 74 images. And the symptom classifications of 42 cases are also a good reference for diagnosis according to the expert clinicians’ opinions. PMID:29077737

  17. SU-E-J-110: A Novel Level Set Active Contour Algorithm for Multimodality Joint Segmentation/Registration Using the Jensen-Rényi Divergence.

    PubMed

    Markel, D; Naqa, I El; Freeman, C; Vallières, M

    2012-06-01

    To present a novel joint segmentation/registration framework for multimodality image-guided and adaptive radiotherapy. A major challenge to this framework is the sensitivity of many segmentation or registration algorithms to noise. Presented is a level set active contour based on the Jensen-Rényi (JR) divergence to achieve improved noise robustness in a multi-modality imaging space. It was found that the JR divergence, when used for segmentation, has improved robustness to noise compared to mutual information or other entropy-based metrics. The MI metric failed at around 2/3 the noise power at which the JR divergence failed. The JR divergence metric is useful for the task of joint segmentation/registration of multimodality images and shows improved results compared to entropy-based metrics. The algorithm can be easily modified to incorporate non-intensity based images, which would allow applications in multi-modality and texture analysis. © 2012 American Association of Physicists in Medicine.
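
    For reference, the Jensen-Rényi divergence is commonly defined as follows (a standard definition with generic notation, not taken from the abstract): for distributions p_1, ..., p_n with weights ω_i ≥ 0, Σ_i ω_i = 1,

    \[
      JR_\alpha(p_1,\dots,p_n) \;=\; H_\alpha\!\Big(\sum_{i=1}^{n}\omega_i\,p_i\Big)\;-\;\sum_{i=1}^{n}\omega_i\,H_\alpha(p_i),
      \qquad
      H_\alpha(p) \;=\; \frac{1}{1-\alpha}\,\log\sum_{x} p(x)^{\alpha},
    \]

    with α > 0, α ≠ 1; it reduces to the Jensen-Shannon divergence as α → 1.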

  18. Dispersed Fringe Sensing Analysis - DFSA

    NASA Technical Reports Server (NTRS)

    Sigrist, Norbert; Shi, Fang; Redding, David C.; Basinger, Scott A.; Ohara, Catherine M.; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.; Spechler, Joshua A.

    2012-01-01

    Dispersed Fringe Sensing (DFS) is a technique for measuring and phasing segmented telescope mirrors using a dispersed broadband light image. DFS is capable of breaking the monochromatic light ambiguity, measuring absolute piston errors between segments of large segmented primary mirrors to tens of nanometers accuracy over a range of 100 micrometers or more. The DFSA software tool analyzes DFS images to extract DFS encoded segment piston errors, which can be used to measure piston distances between primary mirror segments of ground and space telescopes. This information is necessary to control mirror segments to establish a smooth, continuous primary figure needed to achieve high optical quality. The DFSA tool is versatile, allowing precise piston measurements from a variety of different optical configurations. DFSA technology may be used for measuring wavefront pistons from sub-apertures defined by adjacent segments (such as Keck Telescope), or from separated sub-apertures used for testing large optical systems (such as sub-aperture wavefront testing for large primary mirrors using auto-collimating flats). An experimental demonstration of the coarse-phasing technology with verification of DFSA was performed at the Keck Telescope. DFSA includes image processing, wavelength and source spectral calibration, fringe extraction line determination, dispersed fringe analysis, and wavefront piston sign determination. The code is robust against internal optical system aberrations and against spectral variations of the source. In addition to the DFSA tool, the software package contains a simple but sophisticated MATLAB model to generate dispersed fringe images of optical system configurations in order to quickly estimate the coarse phasing performance given the optical and operational design requirements. Combining MATLAB (a high-level language and interactive environment developed by MathWorks), MACOS (JPL s software package for Modeling and Analysis for Controlled Optical Systems), and DFSA provides a unique optical development, modeling and analysis package to study current and future approaches to coarse phasing controlled segmented optical systems.

  19. Comparison of parameter-adapted segmentation methods for fluorescence micrographs.

    PubMed

    Held, Christian; Palmisano, Ralf; Häberle, Lothar; Hensel, Michael; Wittenberg, Thomas

    2011-11-01

    Interpreting images from fluorescence microscopy is often a time-consuming task with poor reproducibility. Various image processing routines that can help investigators evaluate the images are therefore useful. The critical aspect for a reliable automatic image analysis system is a robust segmentation algorithm that can perform accurate segmentation for different cell types. In this study, several image segmentation methods were therefore compared and evaluated in order to identify the most appropriate segmentation schemes that are usable with little new parameterization and robustly with different types of fluorescence-stained cells for various biological and biomedical tasks. The study investigated, compared, and enhanced four different methods for segmentation of cultured epithelial cells. The maximum-intensity linking (MIL) method, an improved MIL, a watershed method, and an improved watershed method based on morphological reconstruction were used. Three manually annotated datasets consisting of 261, 817, and 1,333 HeLa or L929 cells were used to compare the different algorithms. The comparisons and evaluations showed that the segmentation performance of methods based on the watershed transform was significantly superior to the performance of the MIL method. The results also indicate that using morphological opening by reconstruction can improve the segmentation of cells stained with a marker that exhibits the dotted surface of cells. Copyright © 2011 International Society for Advancement of Cytometry.
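
    For reference, a short sketch of morphological opening by reconstruction, the enhancement credited above with improving segmentation of cells showing dotted surface staining; parameters and the scikit-image implementation are illustrative.

```python
from skimage.morphology import erosion, disk, reconstruction

def opening_by_reconstruction(image, radius=3):
    seed = erosion(image, disk(radius))                       # suppress small bright dots
    return reconstruction(seed, image, method='dilation')     # restore larger structures exactly
```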

  20. Combining deep learning with anatomical analysis for segmentation of the portal vein for liver SBRT planning

    NASA Astrophysics Data System (ADS)

    Ibragimov, Bulat; Toesca, Diego; Chang, Daniel; Koong, Albert; Xing, Lei

    2017-12-01

    Automated segmentation of the portal vein (PV) for liver radiotherapy planning is a challenging task due to potentially low vasculature contrast, complex PV anatomy and image artifacts originating from fiducial markers and vasculature stents. In this paper, we propose a novel framework for automated segmentation of the PV from computed tomography (CT) images. We apply convolutional neural networks (CNNs) to learn the consistent appearance patterns of the PV using a training set of CT images with reference annotations and then enhance the PV in previously unseen CT images. Markov random fields (MRFs) were further used to smooth the results of the CNN enhancement and remove isolated mis-segmented regions. Finally, the CNN-MRF-based enhancement was augmented with PV centerline detection that relied on PV anatomical properties such as tubularity and branch composition. The framework was validated on a clinical database with 72 CT images of patients scheduled for liver stereotactic body radiation therapy. The obtained accuracy of the segmentation was DSC = 0.83 and …

  1. Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.

    PubMed

    Stegmaier, Johannes; Otte, Jens C; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G Ulrich; Strähle, Uwe; Mikut, Ralf

    2014-01-01

    Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.

  2. A workflow for the automatic segmentation of organelles in electron microscopy image stacks

    PubMed Central

    Perez, Alex J.; Seyedhosseini, Mojtaba; Deerinck, Thomas J.; Bushong, Eric A.; Panda, Satchidananda; Tasdizen, Tolga; Ellisman, Mark H.

    2014-01-01

    Electron microscopy (EM) facilitates analysis of the form, distribution, and functional status of key organelle systems in various pathological processes, including those associated with neurodegenerative disease. Such EM data often provide important new insights into the underlying disease mechanisms. The development of more accurate and efficient methods to quantify changes in subcellular microanatomy has already proven key to understanding the pathogenesis of Parkinson's and Alzheimer's diseases, as well as glaucoma. While our ability to acquire large volumes of 3D EM data is progressing rapidly, more advanced analysis tools are needed to assist in measuring precise three-dimensional morphologies of organelles within data sets that can include hundreds to thousands of whole cells. Although new imaging instrument throughputs can exceed teravoxels of data per day, image segmentation and analysis remain significant bottlenecks to achieving quantitative descriptions of whole cell structural organellomes. Here, we present a novel method for the automatic segmentation of organelles in 3D EM image stacks. Segmentations are generated using only 2D image information, making the method suitable for anisotropic imaging techniques such as serial block-face scanning electron microscopy (SBEM). Additionally, no assumptions about 3D organelle morphology are made, ensuring the method can be easily expanded to any number of structurally and functionally diverse organelles. Following the presentation of our algorithm, we validate its performance by assessing the segmentation accuracy of different organelle targets in an example SBEM dataset and demonstrate that it can be efficiently parallelized on supercomputing resources, resulting in a dramatic reduction in runtime. PMID:25426032

  3. An approach to analyze the breast tissues in infrared images using nonlinear adaptive level sets and Riesz transform features.

    PubMed

    Prabha, S; Suganthi, S S; Sujatha, C M

    2015-01-01

    Breast thermography is a potential imaging method for the early detection of breast cancer. Pathological conditions can be determined by measuring temperature variations in the abnormal breast regions. Accurate delineation of breast tissues is reported as a challenging task due to inherent limitations of infrared images such as low contrast, low signal to noise ratio and the absence of clear edges. A segmentation technique is applied to delineate the breast tissues by detecting proper lower breast boundaries and inframammary folds. Characteristic features are extracted to analyze the asymmetrical thermal variations in normal and abnormal breast tissues. An automated analysis of thermal variations of breast tissues is attempted using nonlinear adaptive level sets and the Riesz transform. Breast thermal images are initially subjected to Stein's unbiased risk estimate based orthonormal wavelet denoising. These denoised images are enhanced using the contrast-limited adaptive histogram equalization method. The breast tissues are then segmented using a non-linear adaptive level set method. The phase map of the enhanced image is integrated into the level set framework for final boundary estimation. The segmented results are validated against the corresponding ground truth images using overlap and regional similarity metrics. The segmented images are further processed with the Riesz transform, and structural texture features are derived from the transformed coefficients to analyze pathological conditions of breast tissues. Results show that the estimated average signal to noise ratio of the denoised images and the average sharpness of the enhanced images are improved by 38% and 6%, respectively. The interscale consideration adopted in the denoising algorithm is able to improve the signal to noise ratio while preserving edges. The proposed segmentation framework could delineate the breast tissues with a high degree of correlation (97%) between the segmented and ground truth areas. Also, the average segmentation accuracy and sensitivity are found to be 98%. Similarly, the maximum regional overlap between segmented and ground truth images obtained using the volume similarity measure is observed to be 99%. Directionality as a feature showed a considerable difference between normal and abnormal tissues, which is found to be 11%. The proposed framework for breast thermal image analysis, aided with the necessary preprocessing, is found to be useful in assisting the early diagnosis of breast abnormalities.

  4. A new image segmentation method based on multifractal detrended moving average analysis

    NASA Astrophysics Data System (ADS)

    Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le

    2015-08-01

    In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and considered as the local feature of a surface. Then a multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is defined using the idea of the box-counting dimension method. Therefore, we call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely, backward (θ = 0), centered (θ = 0.5) and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely, the popular MFS-based and the latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two methods. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency is obtained with the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. An interesting finding is that D(h(-10)) outperforms other parameters both for the MF-DMS-based method in the centered case and for the MF-DFS-based algorithms. By comparing the multifractal nature of the nutrient-deficient and non-deficient areas determined by the segmentation results, an important finding is that the fluctuation of gray values in the nutrient-deficient area is much more severe than that in the non-deficient area.

  5. Vessel Enhancement and Segmentation of 4D CT Lung Image Using Stick Tensor Voting

    NASA Astrophysics Data System (ADS)

    Cong, Tan; Hao, Yang; Jingli, Shi; Xuan, Yang

    2016-12-01

    Vessel enhancement and segmentation play a significant role in medical image analysis. This paper proposes a novel vessel enhancement and segmentation method for 4D CT lung images using a stick tensor voting algorithm, which focuses on addressing the vessel distortion issue of the vessel enhancement diffusion (VED) method. Furthermore, the enhanced results are easily segmented using level-set segmentation. In our method, vessels are first filtered using Frangi's filter to reduce intrapulmonary noise and extract rough blood vessels. Secondly, the stick tensor voting algorithm is employed to estimate the correct direction along the vessel. This estimated direction is then used as the anisotropic diffusion direction in the VED algorithm, which makes the intensity diffusion of points located on the vessel wall consistent with the directions of the vessels and enhances their tubular features. Finally, vessels can be extracted from the enhanced image by applying a level-set segmentation method. A number of experimental results show that our method outperforms the traditional VED method in vessel enhancement and yields satisfactory segmented vessels.
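
    A sketch of the first stage only (Frangi filtering with scikit-image); the stick tensor voting, VED diffusion and level-set stages described above are not reproduced here.

```python
from skimage.filters import frangi

def enhance_vessels(ct_volume):
    # frangi() accepts 2-D or 3-D arrays; black_ridges=False keeps bright tubular structures
    return frangi(ct_volume, black_ridges=False)
```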

  6. Understanding the optics to aid microscopy image segmentation.

    PubMed

    Yin, Zhaozheng; Li, Kang; Kanade, Takeo; Chen, Mei

    2010-01-01

    Image segmentation is essential for many automated microscopy image analysis systems. Rather than treating microscopy images as general natural images and rushing into the image processing warehouse for solutions, we propose to study a microscope's optical properties to model its image formation process first using phase contrast microscopy as an exemplar. It turns out that the phase contrast imaging system can be relatively well explained by a linear imaging model. Using this model, we formulate a quadratic optimization function with sparseness and smoothness regularizations to restore the "authentic" phase contrast images that directly correspond to specimen's optical path length without phase contrast artifacts such as halo and shade-off. With artifacts removed, high quality segmentation can be achieved by simply thresholding the restored images. The imaging model and restoration method are quantitatively evaluated on two sequences with thousands of cells captured over several days.

  7. Contour Detection and Completion for Inpainting and Segmentation Based on Topological Gradient and Fast Marching Algorithms

    PubMed Central

    Auroux, Didier; Cohen, Laurent D.; Masmoudi, Mohamed

    2011-01-01

    We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We then present two numerical applications of this hybrid algorithm: image inpainting and segmentation. PMID:22194734
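    The idea of linking edge points by minimal paths through a gradient-based cost map can be sketched in a few lines of Python; here scikit-image's route_through_array and a Sobel edge map stand in for the fast marching algorithm and the topological gradient, and the image name and key points are hypothetical:

        from skimage import io, img_as_float, filters
        from skimage.graph import route_through_array

        image = img_as_float(io.imread("scene.png", as_gray=True))   # hypothetical image
        edges = filters.sobel(image)                                 # stand-in for the topological gradient

        # Strong edges get low cost, so the optimal path follows the contour.
        cost = 1.0 / (edges + 1e-3)
        start, end = (10, 10), (200, 180)                            # hypothetical edge points to connect
        path, total_cost = route_through_array(cost, start, end, fully_connected=True)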

  8. A comparative analysis of the dependences of the hemodynamic parameters on changes in ROI's position in perfusion CT scans

    NASA Astrophysics Data System (ADS)

    Choi, Yong-Seok; Cho, Jae-Hwan; Namgung, Jang-Sun; Kim, Hyo-Jin; Yoon, Dae-Young; Lee, Han-Joo

    2013-05-01

    This study performed a comparative analysis of cerebral blood volume (CBV), cerebral blood flow (CBF), mean transit time (MTT), and mean time-to-peak (TTP) obtained by changing the anatomical position of the region of interest (ROI) during CT brain perfusion. We acquired axial source images of perfusion CT from 20 patients undergoing CT perfusion exams due to brain trauma. Subsequently, the CBV, CBF, MTT, and TTP values were calculated through data-processing of the perfusion CT images. The color scales for the CBV, CBF, MTT, and TTP maps were obtained using the image data. The anterior cerebral artery (ACA) was taken as the standard ROI for the calculation of the perfusion values. Differences in the average hemodynamic values were compared in a quantitative analysis by placing ROIs and dividing the axial images anatomically into proximal, middle, and distal segments. In the qualitative analysis, performed as a blind test, we evaluated changes in the sensory characteristics using the color scales of the CBV, CBF, and MTT maps in the proximal, middle, and distal segments. According to the qualitative analysis, no differences were found in the CBV, CBF, MTT, and TTP values of the proximal, middle, and distal segments, and no changes were detected in the color scales of the CBV, CBF, MTT, and TTP maps in these segments. We anticipate that the results of the study will be useful in assessing brain trauma patients using perfusion imaging.

  9. Segmentation of vessels cluttered with cells using a physics based model.

    PubMed

    Schmugge, Stephen J; Keller, Steve; Nguyen, Nhat; Souvenir, Richard; Huynh, Toan; Clemens, Mark; Shin, Min C

    2008-01-01

    Segmentation of vessels in biomedical images is important as it can provide insight into analysis of vascular morphology, topology and is required for kinetic analysis of flow velocity and vessel permeability. Intravital microscopy is a powerful tool as it enables in vivo imaging of both vasculature and circulating cells. However, the analysis of vasculature in those images is difficult due to the presence of cells and their image gradient. In this paper, we provide a novel method of segmenting vessels with a high level of cell related clutter. A set of virtual point pairs ("vessel probes") are moved reacting to forces including Vessel Vector Flow (VVF) and Vessel Boundary Vector Flow (VBVF) forces. Incorporating the cell detection, the VVF force attracts the probes toward the vessel, while the VBVF force attracts the virtual points of the probes to localize the vessel boundary without being distracted by the image features of the cells. The vessel probes are moved according to Newtonian Physics reacting to the net of forces applied on them. We demonstrate the results on a set of five real in vivo images of liver vasculature cluttered by white blood cells. When compared against the ground truth prepared by the technician, the Root Mean Squared Error (RMSE) of segmentation with VVF and VBVF was 55% lower than the method without VVF and VBVF.

  10. Retina vascular network recognition

    NASA Astrophysics Data System (ADS)

    Tascini, Guido; Passerini, Giorgio; Puliti, Paolo; Zingaretti, Primo

    1993-09-01

    The analysis of morphological and structural modifications of the retinal vascular network is an interesting investigation method in the study of diabetes and hypertension. Normally this analysis is carried out by qualitative evaluations, according to standardized criteria, though medical research attaches great importance to quantitative analysis of vessel color, shape and dimensions. The paper describes a system which automatically segments and recognizes the ocular fundus circulation and micro-circulation network, and extracts a set of features related to morphometric aspects of the vessels. For this class of images the classical segmentation methods seem weak. We propose a computer vision system in which the segmentation and recognition phases are strictly connected. The system is hierarchically organized in four modules. Firstly, the Image Enhancement Module (IEM) performs a set of custom image enhancements to remove blur and to prepare data for the subsequent segmentation and recognition processes. Secondly, the Papilla Border Analysis Module (PBAM) automatically recognizes the number, position and local diameter of blood vessels departing from the optical papilla. Then the Vessel Tracking Module (VTM) analyses vessels by comparing the results of body and edge tracking and detects branches and crossings. Finally, the Feature Extraction Module evaluates PBAM and VTM output data and extracts a set of numerical indexes. The algorithms used appear to be robust and have been successfully tested on various ocular fundus images.

  11. Breast histopathology image segmentation using spatio-colour-texture based graph partition method.

    PubMed

    Belsare, A D; Mushrif, M M; Pangarkar, M A; Meshram, N

    2016-06-01

    This paper proposes a novel integrated spatio-colour-texture based graph partitioning method for segmentation of nuclear arrangement in tubules with a lumen or in solid islands without a lumen from digitized Hematoxylin-Eosin stained breast histology images, in order to automate the process of breast histology image analysis and assist pathologists. We propose a new similarity-based superpixel generation method and integrate it with texton representation to form a spatio-colour-texture map of the breast histology image. A new weighted-distance-based similarity measure is then used for generation of the graph, and the final segmentation is obtained using the normalized cuts method. The extensive experiments carried out show that the proposed algorithm can segment nuclear arrangement in normal as well as malignant ducts in breast histology tissue images. For evaluation of the proposed method, a ground-truth image database of 100 malignant and nonmalignant breast histology images was created with the help of two expert pathologists, and a quantitative evaluation of the proposed breast histology image segmentation was performed. It shows that the proposed method outperforms other methods. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  12. A Fast, Automatic Segmentation Algorithm for Locating and Delineating Touching Cell Boundaries in Imaged Histopathology

    PubMed Central

    Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin

    2013-01-01

    Summary Background Automated analysis of imaged histopathology specimens could potentially provide support for improved reliability in detection and classification in a range of investigative and clinical cancer applications. Automated segmentation of cells in digitized tissue microarrays (TMA) is often the prerequisite for quantitative analysis. However, overlapping cells usually bring significant challenges for traditional segmentation algorithms. Objectives In this paper, we propose a novel, automatic algorithm to separate overlapping cells in stained histology specimens acquired using bright-field RGB imaging. Methods It starts by systematically identifying salient regions of interest throughout the image based upon their underlying visual content. The segmentation algorithm subsequently performs a quick, voting-based seed detection. Finally, the contour of each cell is obtained using a repulsive level set deformable model with the seeds generated in the previous step. We compared the experimental results with the most current literature, and computed the pixel-wise accuracy between human experts' annotations and those generated using the automatic segmentation algorithm. Results The method was tested with 100 image patches which contain more than 1000 overlapping cells. The overall precision and recall of the developed algorithm are 90% and 78%, respectively. We also implemented the algorithm on GPU. The parallel implementation is 22 times faster than its C/C++ sequential implementation. Conclusion The proposed overlapping cell segmentation algorithm can accurately detect the center of each overlapping cell and effectively separate each of the overlapping cells. The GPU is proven to be an efficient parallel platform for overlapping cell segmentation. PMID:22526139
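    As a hedged, simplified stand-in for the seed detection and repulsive level set steps above, the following Python sketch separates touching cells with distance-transform maxima as per-cell seeds and a marker-controlled watershed; it illustrates the general idea rather than the authors' voting/level-set implementation, and the input file is hypothetical:

        import numpy as np
        from scipy import ndimage as ndi
        from skimage import io, filters, feature, segmentation

        rgb = io.imread("tma_patch.png")                        # hypothetical bright-field RGB patch
        gray = rgb.mean(axis=-1)
        cells = gray < filters.threshold_otsu(gray)             # stained cells are darker than background

        # One seed per cell from the maxima of the distance transform.
        distance = ndi.distance_transform_edt(cells)
        peaks = feature.peak_local_max(distance, min_distance=7, labels=ndi.label(cells)[0])
        markers = np.zeros(distance.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

        # Marker-controlled watershed separates the touching cells.
        labels = segmentation.watershed(-distance, markers, mask=cells)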

  13. Automated segmentation of pulmonary structures in thoracic computed tomography scans: a review

    NASA Astrophysics Data System (ADS)

    van Rikxoort, Eva M.; van Ginneken, Bram

    2013-09-01

    Computed tomography (CT) is the modality of choice for imaging the lungs in vivo. Sub-millimeter isotropic images of the lungs can be obtained within seconds, allowing the detection of small lesions and detailed analysis of disease processes. The high resolution of thoracic CT and the high prevalence of lung diseases require a high degree of automation in the analysis pipeline. The automated segmentation of pulmonary structures in thoracic CT has been an important research topic for over a decade now. This systematic review provides an overview of current literature. We discuss segmentation methods for the lungs, the pulmonary vasculature, the airways, including airway tree construction and airway wall segmentation, the fissures, the lobes and the pulmonary segments. For each topic, the current state of the art is summarized, and topics for future research are identified.

  14. Three-dimensional rendering of segmented object using matlab - biomed 2010.

    PubMed

    Anderson, Jeffrey R; Barrett, Steven F

    2010-01-01

    The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work has described a semi-automatic segmentation process for fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux. This basically limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and to merge these capabilities with the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.
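    The core reconstruction step described above (stacking binary slice masks into a volume and covering the object with a surface) can be sketched in Python with marching cubes; this is a generic substitute for the authors' Matlab routines, and the file names, slice count and voxel spacing are assumptions:

        import numpy as np
        from skimage import io, measure

        # Reassemble the per-slice binary masks into a (z, y, x) volume.
        slices = [io.imread(f"segmented_slice_{i:03d}.png") > 0 for i in range(40)]
        volume = np.stack(slices, axis=0).astype(np.uint8)

        # Extract a triangle mesh of the object's surface (anisotropic voxel spacing assumed).
        verts, faces, normals, values = measure.marching_cubes(volume, level=0.5, spacing=(1.0, 0.2, 0.2))
        print(f"{len(verts)} vertices, {len(faces)} faces")   # mesh can be rendered with matplotlib or mayavi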

  15. Cell segmentation in phase contrast microscopy images via semi-supervised classification over optics-related features.

    PubMed

    Su, Hang; Yin, Zhaozheng; Huh, Seungil; Kanade, Takeo

    2013-10-01

    Phase-contrast microscopy is one of the most common and convenient imaging modalities to observe long-term multi-cellular processes; it generates images by the interference of light passing through transparent specimens and the background medium with different retarded phases. Despite many years of study, computer-aided analysis of cell behavior in phase contrast microscopy is challenged by image quality and artifacts caused by phase contrast optics. Addressing these unsolved challenges, the authors propose (1) a phase contrast microscopy image restoration method that produces phase retardation features, which are intrinsic features of phase contrast microscopy, and (2) a semi-supervised learning based algorithm for cell segmentation, which is a fundamental task for various cell behavior analyses. Specifically, the image formation process of phase contrast microscopy images is first computationally modeled with a dictionary of diffraction patterns; as a result, each pixel of a phase contrast microscopy image is represented by a linear combination of the bases, which we call phase retardation features. Images are then partitioned into phase-homogeneous atoms by clustering neighboring pixels with similar phase retardation features. Consequently, cell segmentation is performed via a semi-supervised classification technique over the phase-homogeneous atoms. Experiments demonstrate that the proposed approach produces quality segmentation of individual cells and outperforms previous approaches. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Neutrosophic segmentation of breast lesions for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Lee, Juhun; Nishikawa, Robert M.; Reiser, Ingrid; Boone, John M.

    2017-03-01

    We propose a neutrosophic approach for segmenting breast lesions in dedicated breast computed tomography (bCT) images. The neutrosophic set (NS) considers the nature and properties of neutrality (or indeterminacy), which is neither true nor false. We considered image noise as the indeterminate component, while treating the breast lesion and other breast areas as the true and false components. We first transformed the image into the NS domain, where each voxel is described by its membership in the True, Indeterminate, and False sets. The operations α-mean, β-enhancement, and γ-plateau iteratively smooth and contrast-enhance the image to reduce the noise level of the true set. Once the true image no longer changes, we applied an existing algorithm for bCT images, RGI segmentation, to the resulting image to segment the breast lesions. We compared the segmentation performance of the proposed method (named NS-RGI) to that of the regular RGI segmentation. We used a total of 122 breast lesions (44 benign, 78 malignant) from 123 non-contrast bCT cases. We measured the segmentation performance of the NS-RGI and the RGI using the DICE coefficient. The average DICE value of the NS-RGI was 0.82 (STD: 0.09), while that of the RGI was 0.80 (STD: 0.12). The difference between the two DICE values was statistically significant (paired t-test, p-value = 0.0007). We conducted a subsequent feature analysis on the resulting segmentations. The classifier performance for the NS-RGI (AUC = 0.8) improved over that of the RGI (AUC = 0.69, p-value = 0.006).

  17. 3-D segmentation of articular cartilages by graph cuts using knee MR images from osteoarthritis initiative

    NASA Astrophysics Data System (ADS)

    Shim, Hackjoon; Lee, Soochan; Kim, Bohyeong; Tao, Cheng; Chang, Samuel; Yun, Il Dong; Lee, Sang Uk; Kwoh, Kent; Bae, Kyongtae

    2008-03-01

    Knee osteoarthritis is the most common debilitating health condition affecting the elderly population. MR imaging of the knee is highly sensitive for diagnosis and evaluation of the extent of knee osteoarthritis. Quantitative analysis of the progression of osteoarthritis is commonly based on segmentation and measurement of the articular cartilage from knee MR images. Segmentation of the knee articular cartilage, however, is extremely laborious and technically demanding, because the cartilage has a complex geometry and is thin and small in size. To improve the precision and efficiency of cartilage segmentation, we have applied a semi-automated segmentation method based on an s/t graph cut algorithm. The cost function was defined by integrating regional and boundary cues. While regional cues can encode any intensity distributions of the two regions, "object" (cartilage) and "background" (the rest), boundary cues are based on the intensity differences between neighboring pixels. For three-dimensional (3-D) segmentation, hard constraints are also specified in a 3-D way, facilitating user interaction. When our proposed semi-automated method was tested on clinical patients' MR images (160 slices, 0.7 mm slice thickness), a considerable amount of segmentation time was saved and efficiency improved, compared to a manual segmentation approach.
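    For orientation, an s/t graph-cut cost of the general form described above combines a regional term and a boundary term; the notation below is generic rather than taken from the paper:

        E(L) = \sum_{p} R_p(L_p) + \lambda \sum_{(p,q) \in \mathcal{N}} B_{p,q}\, [L_p \neq L_q]

    Here R_p(L_p) measures how well the intensity of pixel p fits the "object" (cartilage) or "background" distribution, B_{p,q} penalizes assigning different labels to neighboring pixels with similar intensities, [.] equals 1 when its argument holds and 0 otherwise, and λ balances the two cues; hard constraints correspond to fixing R_p for user-marked seed pixels.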

  18. Robust nuclei segmentation in cyto-histopathological images using statistical level set approach with topology preserving constraint

    NASA Astrophysics Data System (ADS)

    Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.

    2017-02-01

    Computerized assessments for diagnosis or malignancy grading of cyto-histopathological specimens have drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation is still a challenging task due to noise, nonuniform illumination, and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching aggregated nuclei or merging over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process. The proposed framework is a region-based segmentation method, which consists of three major modules: i) the image is passed through a color deconvolution step to extract the desired stains; ii) the generalized fast radial symmetry (GFRS) transform is then applied to the image, followed by non-maxima suppression to specify the initial seed points for nuclei and their corresponding GFRS ellipses, which are interpreted as the initial nuclei borders for segmentation; iii) finally, these initial nuclei border curves are evolved using a statistical level-set approach along with topology-preserving criteria for simultaneous segmentation and separation of nuclei. The proposed method is evaluated using Hematoxylin and Eosin, and fluorescent stained images, performing qualitative and quantitative analysis, showing that the method outperforms thresholding and watershed segmentation approaches.
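    Step (i) above, separating the Hematoxylin and Eosin stains, has a standard counterpart in scikit-image's color deconvolution; the following hedged sketch shows only that step (the radial-symmetry seeding and topology-preserving level set are specific to the paper), with a hypothetical input file:

        from skimage import io
        from skimage.color import rgb2hed

        rgb = io.imread("he_stained_patch.png")    # hypothetical H&E-stained RGB image
        hed = rgb2hed(rgb)                         # channels: Hematoxylin, Eosin, DAB
        hematoxylin = hed[..., 0]                  # nuclei are most prominent in this channel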

  19. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    PubMed

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means of this objective function contain a multiplicative factor that estimates the bias field in the transformed domain. Then, the bias field prior is fully used, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  20. A spatiotemporal-based scheme for efficient registration-based segmentation of thoracic 4-D MRI.

    PubMed

    Yang, Y; Van Reeth, E; Poh, C L; Tan, C H; Tham, I W K

    2014-05-01

    Dynamic three-dimensional (3-D) (four-dimensional, 4-D) magnetic resonance (MR) imaging is gaining importance in the study of pulmonary motion for respiratory diseases and of pulmonary tumor motion for radiotherapy. To perform quantitative analysis using 4-D MR images, segmentation of anatomical structures such as the lung and pulmonary tumor is required. Manual segmentation of entire thoracic 4-D MRI data, which typically contains many 3-D volumes acquired over several breathing cycles, is extremely tedious, time consuming, and suffers from high user variability. This requires the development of new automated segmentation schemes for 4-D MRI data. Registration-based segmentation, which uses automatic registration methods for segmentation, has been shown to be an accurate method to segment structures in 4-D data series. However, directly applying registration-based segmentation to 4-D MRI series lacks efficiency. Here we propose an automated 4-D registration-based segmentation scheme based on spatiotemporal information for the segmentation of thoracic 4-D MR lung images. The proposed scheme saved up to 95% of the computation while achieving segmentations comparable in accuracy to directly applying registration-based segmentation to the 4-D dataset. The scheme facilitates rapid 3-D/4-D visualization of lung and tumor motion and potentially the tracking of the tumor during radiation delivery.

  1. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.

  2. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines

    PubMed Central

    Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213
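    A toy illustration (not the authors' framework) of feedback-based parameter adaptation: a single segmentation parameter, here a global threshold, is nudged iteratively until an abstract ground-truth property, such as an expected object count, is approximately met. The file name, target count and update rule are assumptions:

        import numpy as np
        from scipy import ndimage as ndi
        from skimage import io, filters

        image = io.imread("cells.png", as_gray=True)     # hypothetical image with bright objects
        target_count = 120                               # abstract ground truth: expected object count

        threshold = filters.threshold_otsu(image)        # initial parameter value
        for _ in range(25):
            mask = image > threshold
            _, count = ndi.label(mask)
            error = count - target_count
            if abs(error) <= 2:
                break
            # Feedback step: too many detections -> raise the threshold, too few -> lower it.
            threshold += 0.01 * np.sign(error) * image.std()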

  3. Evidential Reasoning in Expert Systems for Image Analysis.

    DTIC Science & Technology

    1985-02-01

    techniques to image analysis (IA). There is growing evidence that these techniques offer significant improvements in image analysis, particularly in the... (2) to provide a common framework for analysis, (3) to structure the ER process for major expert-system tasks in image analysis, and (4) to identify... approaches to three important tasks for expert systems in the domain of image analysis. This segment concluded with an assessment of the strengths

  4. Automatic Nuclei Segmentation in H&E Stained Breast Cancer Histopathology Images

    PubMed Central

    Veta, Mitko; van Diest, Paul J.; Kornegoor, Robert; Huisman, André; Viergever, Max A.; Pluim, Josien P. W.

    2013-01-01

    The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8. PMID:23922958

  5. Automatic nuclei segmentation in H&E stained breast cancer histopathology images.

    PubMed

    Veta, Mitko; van Diest, Paul J; Kornegoor, Robert; Huisman, André; Viergever, Max A; Pluim, Josien P W

    2013-01-01

    The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8.

  6. CMEIAS color segmentation: an improved computing technology to process color images for quantitative microbial ecology studies at single-cell resolution.

    PubMed

    Gross, Colin A; Reddy, Chandan K; Dazzo, Frank B

    2010-02-01

    Quantitative microscopy and digital image analysis are underutilized in microbial ecology, largely because of the laborious task of segmenting foreground object pixels from the background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by utilizing algorithms that address the color and spatial relationships of user-selected foreground object pixels. The performance of the color segmentation algorithm, evaluated on 26 complex micrographs at single-pixel resolution, had an overall pixel classification accuracy of 99+%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation in order to produce images from which quantitative information can be accurately extracted, thereby gaining new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at http://cme.msu.edu/cmeias/ . This improved computing technology opens new opportunities for imaging applications where discriminating colors really matter most, thereby strengthening quantitative microscopy-based approaches to advance microbial ecology in situ at individual single-cell resolution.

  7. Comparison of anterior segment optical coherence tomography angiography and fluorescein angiography for iris vasculature analysis.

    PubMed

    Zett, Claudio; Stina, Deborah M Rosa; Kato, Renata Tiemi; Novais, Eduardo Amorim; Allemann, Norma

    2018-04-01

    The aim of this study is to perform imaging of irises of different colors using spectral domain anterior segment optical coherence tomography angiography (AS-OCTA) and iris fluorescein angiography (IFA) and to compare their effectiveness in examining iris vasculature. This is a cross-sectional observational clinical study. Patients with no vascular iris alterations and different pigmentation levels were recruited. Participants were imaged using OCTA adapted with an anterior segment lens and IFA with a confocal scanning laser ophthalmoscope (cSLO) adapted with an anterior segment lens. AS-OCTA and IFA images were then compared. Two blinded readers classified iris pigmentation and compared the percentage of visible vessels between OCTA and IFA images. Twenty eyes of 10 patients with different degrees of iris pigmentation were imaged using AS-OCTA and IFA. Significantly more visible iris vessels were observed using OCTA than using FA (W = 5.22; p < 0.001). Iris pigmentation was negatively correlated with the percentage of visible vessels in both imaging methods (OCTA, rho = - 0.73, p < 0.001; IFA, rho = - 0.77, p < 0.001). Unlike FA, AS-OCTA could not detect leakage of dye, delay, or impregnation. Nystagmus and inadequate fixation along with motion artifacts resulted in lower quality images in AS-OCTA than in IFA. AS-OCTA is a new imaging modality which allows analysis of the iris vasculature. In both AS-OCTA and IFA, iris pigmentation caused vasculature imaging blockage, but AS-OCTA provided more detailed iris vasculature images than IFA. Additional studies including different iris pathologies are needed to determine the optimal scanning parameters for OCTA of the anterior segment.

  8. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation.

    PubMed

    Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe; Kim, Tae-Il; Yi, Won-Jin

    2015-03-01

    We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to the micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). It is possible to quantify VBIC and VA for absorbable implants from micro-CT analysis using a region-based segmentation method.

  9. 3D Geometric Analysis of the Pediatric Aorta in 3D MRA Follow-Up Images with Application to Aortic Coarctation.

    PubMed

    Wörz, Stefan; Schenk, Jens-Peter; Alrajab, Abdulsattar; von Tengg-Kobligk, Hendrik; Rohr, Karl; Arnold, Raoul

    2016-10-17

    Coarctation of the aorta is one of the most common congenital heart diseases. Despite different treatment options, long-term outcome after surgical or interventional therapy is diverse. Serial morphologic follow-up of vessel growth is necessary, because vessel growth cannot be predicted from the primary morphology or the therapeutic option. For the analysis of the long-term outcome after therapy of congenital diseases such as aortic coarctation, accurate 3D geometric analysis of the aorta from follow-up 3D medical image data such as magnetic resonance angiography (MRA) is important. However, for an objective, fast, and accurate 3D geometric analysis, an automatic approach for 3D segmentation and quantification of the aorta from pediatric images is required. We introduce a new model-based approach for the segmentation of the thoracic aorta and its main branches from follow-up pediatric 3D MRA image data. For robust segmentation of vessels even in difficult cases (e.g., neighboring structures), we propose a new extended parametric cylinder model that requires only relatively few model parameters. Moreover, we include a novel adaptive background-masking scheme used for least-squares model fitting, we use a spatial normalization scheme to align the segmentation results from follow-up examinations, and we determine relevant 3D geometric parameters of the aortic arch. We have evaluated our proposed approach using different 3D synthetic images. Moreover, we have successfully applied the approach to follow-up pediatric 3D MRA image data, normalized the 3D segmentation results of follow-up images of individual patients, and combined the results of all patients. We also present a quantitative evaluation of our approach for four follow-up 3D MRA images of a patient, which confirms that our approach yields accurate 3D segmentation results. An experimental comparison with two previous approaches demonstrates that our approach yields superior results. From the results, we found that our approach is well suited for the quantification of the 3D geometry of the aortic arch from follow-up pediatric 3D MRA image data. In future work, this will enable investigation of the long-term outcome of different surgical and interventional therapies for aortic coarctation.

  10. Segmental hair analysis for differentiation of tilidine intake from external contamination using LC-ESI-MS/MS and MALDI-MS/MS imaging.

    PubMed

    Poetzsch, Michael; Baumgartner, Markus R; Steuer, Andrea E; Kraemer, Thomas

    2015-02-01

    Segmental hair analysis has been used for monitoring changes of consumption habit of drugs. Contamination from the environment or sweat might cause interpretative problems. For this reason, hair analysis results were compared in hair samples taken 24 h and 30 days after a single tilidine dose. The 24-h hair samples already showed high concentrations of tilidine and nortilidine. Analysis of wash water from sample preparation confirmed external contamination by sweat as reason. The 30-day hair samples were still positive for tilidine in all segments. Negative wash-water analysis proved incorporation from sweat into the hair matrix. Interpretation of a forensic case was requested where two children had been administered tilidine by their nanny and tilidine/nortilidine had been detected in all hair segments, possibly indicating multiple applications. Taking into consideration the results of the present study and of MALDI-MS imaging, a single application as cause for analytical results could no longer be excluded. Interpretation of consumption behaviour of tilidine based on segmental hair analysis has to be done with caution, even after typical wash procedures during sample preparation. External sweat contamination followed by incorporation into the hair matrix can mimic chronic intake. For assessment of external contamination, hair samples should not only be collected several weeks but also one to a few days after intake. MALDI-MS imaging of single hair can be a complementary tool for interpretation. Limitations for interpretation of segmental hair analysis shown here might also be applicable to drugs with comparable physicochemical and pharmacokinetic properties. Copyright © 2014 John Wiley & Sons, Ltd.

  11. A new Hessian - based approach for segmentation of CT porous media images

    NASA Astrophysics Data System (ADS)

    Timofey, Sizonenko; Marina, Karsanina; Dina, Gilyazetdinova; Kirill, Gerke

    2017-04-01

    Hessian matrix based methods are widely used in image analysis for feature detection, e.g., detection of blobs, corners and edges. The Hessian matrix of the image is the matrix of second-order derivatives around a selected voxel. The most significant features give the highest values of the Hessian transform, while the lowest values are located in smoother parts of the image. The majority of conventional segmentation techniques can segment out cracks, fractures and other inhomogeneities in soils and rocks only if the rest of the image is significantly "oversegmented". To avoid this disadvantage, we propose to enhance the greyscale values of voxels belonging to such specific inhomogeneities on X-ray microtomography scans. We have developed and implemented in code a two-step approach to attack the aforementioned problem. During the first step we apply a filter that enhances the image and makes outstanding features more sharply defined. During the second step we apply Hessian filter based segmentation. The values of voxels in the image to be segmented are calculated in conjunction with the values of other voxels within a prescribed region. The contribution from each voxel within such a region is computed by weighting according to the local Hessian matrix value. We call this approach Hessian windowed segmentation. Hessian windowed segmentation has been tested on different porous media X-ray microtomography images, including soil, sandstones, carbonates and shales. We also compared this new method against other widely used methods such as kriging, Markov random fields, converging active contours and region growing. We show that our approach is more accurate in regions containing special features such as small cracks, fractures, elongated inhomogeneities and other features with low contrast relative to the background solid phase. Moreover, Hessian windowed segmentation outperforms some of these methods in computational efficiency. We further test our segmentation technique by computing the permeability of segmented images and comparing it against laboratory-based measurements. This work was partially supported by RFBR grant 15-34-20989 (X-ray tomography and image fusion) and RSF grant 14-17-00658 (image segmentation and pore-scale modelling).
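    A small, hedged sketch of a per-pixel Hessian feature map of the kind discussed above, using generic scikit-image routines rather than the authors' windowed weighting scheme; the file name and scale are assumptions:

        from skimage import io, img_as_float
        from skimage.feature import hessian_matrix, hessian_matrix_eigvals

        scan = img_as_float(io.imread("microct_slice.png", as_gray=True))  # hypothetical tomography slice

        # Second-order derivatives at scale sigma, then the two eigenvalues per pixel
        # (returned in decreasing order).
        H = hessian_matrix(scan, sigma=2.0, order="rc")
        eigvals = hessian_matrix_eigvals(H)

        # Thin dark cracks produce strong positive curvature across the crack, so the
        # largest eigenvalue serves as a simple crack/fracture response map.
        crack_response = eigvals[0]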

  12. Modeling 4D Pathological Changes by Leveraging Normative Models

    PubMed Central

    Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Saha, Avishek; Liu, Wei; Goh, S.Y. Matthew; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido

    2016-01-01

    With the increasing use of efficient multimodal 3D imaging, clinicians are able to access longitudinal imaging to stage pathological diseases, to monitor the efficacy of therapeutic interventions, or to assess and quantify rehabilitation efforts. Analysis of such four-dimensional (4D) image data presenting pathologies, including disappearing and newly appearing lesions, represents a significant challenge due to the presence of complex spatio-temporal changes. Image analysis methods for such 4D image data have to include not only a concept for joint segmentation of 3D datasets, to account for inherent correlations of subject-specific repeated scans, but also a mechanism to account for large deformations and the destruction and formation of lesions (e.g., edema, bleeding) due to underlying physiological processes associated with damage, intervention, and recovery. In this paper, we propose a novel framework that provides a joint segmentation-registration approach to tackle the inherent problem of image registration in the presence of objects not present in all images of the time series. Our methodology models 4D changes in pathological anatomy across time and also provides an explicit mapping of a healthy normative template to a subject's image data with pathologies. Since atlas-moderated segmentation methods cannot explain the appearance and location of pathological structures that are not represented in the template atlas, the new framework provides different options for initialization via a supervised learning approach, iterative semi-supervised active learning, and also transfer learning, which results in a fully automatic 4D segmentation method. We demonstrate the effectiveness of our novel approach with synthetic experiments and a 4D multimodal MRI dataset of severe traumatic brain injury (TBI), including validation via comparison to expert segmentations. The proposed methodology is, however, generic with regard to different clinical applications requiring quantitative analysis of 4D imaging representing spatio-temporal changes of pathologies. PMID:27818606

  13. A Multi-center Milestone Study of Clinical Vertebral CT Segmentation

    PubMed Central

    Yao, Jianhua; Burns, Joseph E.; Forsberg, Daniel; Seitel, Alexander; Rasoulian, Abtin; Abolmaesumi, Purang; Hammernik, Kerstin; Urschler, Martin; Ibragimov, Bulat; Korez, Robert; Vrtovec, Tomaž; Castro-Mateos, Isaac; Pozo, Jose M.; Frangi, Alejandro F.; Summers, Ronald M.; Li, Shuo

    2017-01-01

    A multiple-center milestone study of clinical vertebra segmentation is presented in this paper. Vertebra segmentation is a fundamental step for spinal image analysis and intervention. The first half of the study was conducted as the spine segmentation challenge at the 2014 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) Workshop on Computational Spine Imaging (CSI 2014). The objective was to evaluate the performance of several state-of-the-art vertebra segmentation algorithms on computed tomography (CT) scans using ten training and five testing datasets, all healthy cases; the second half of the study was conducted after the challenge, where an additional 5 abnormal cases were used for testing to evaluate performance on abnormal cases. Dice coefficients and absolute surface distances were used as evaluation metrics. Segmentation of each vertebra as a single geometric unit, as well as separate segmentation of vertebra substructures, was evaluated. Five teams participated in the comparative study. The top performers in the study achieved Dice coefficients of 0.93 in the upper thoracic, 0.95 in the lower thoracic and 0.96 in the lumbar spine for healthy cases, and 0.88 in the upper thoracic, 0.89 in the lower thoracic and 0.92 in the lumbar spine for osteoporotic and fractured cases. The strengths and weaknesses of each method as well as future suggestions for improvement are discussed. This is the first multi-center comparative study of vertebra segmentation methods, and it provides an up-to-date performance milestone for the fast-growing field of spinal image analysis and intervention. PMID:26878138

  14. Automated segmentation of the lungs from high resolution CT images for quantitative study of chronic obstructive pulmonary diseases

    NASA Astrophysics Data System (ADS)

    Garg, Ishita; Karwoski, Ronald A.; Camp, Jon J.; Bartholmai, Brian J.; Robb, Richard A.

    2005-04-01

    Chronic obstructive pulmonary diseases (COPD) are debilitating conditions of the lung and are the fourth leading cause of death in the United States. Early diagnosis is critical for timely intervention and effective treatment. The ability to quantify particular imaging features of specific pathology and accurately assess progression or response to treatment with current imaging tools is relatively poor. The goal of this project was to develop automated segmentation techniques that would be clinically useful as computer-assisted diagnostic tools for COPD. The lungs were segmented using an optimized segmentation threshold and the trachea was segmented using a fixed threshold characteristic of air. The segmented images were smoothed by a morphological close operation using spherical elements of different sizes. The results were compared to other segmentation approaches that use an optimized threshold to segment the trachea. Comparison of the segmentation results from 10 datasets showed that the method of trachea segmentation using a fixed air threshold followed by morphological closing with a spherical element of size 23x23x5 yielded the best results. Inclusion of a greater number of pulmonary vessels in the lung volume is important for the development of computer-assisted diagnostic tools because the physiological changes of COPD can result in quantifiable anatomic changes in pulmonary vessels. Using a fixed threshold to segment the trachea removed airways from the lungs to a better extent than using an optimized threshold. Preliminary measurements gathered from patients' CT scans suggest that segmented images can be used for accurate analysis of total lung volume and volumes of regional lung parenchyma. Additionally, reproducible segmentation allows for quantification of specific pathologic features, such as lower-intensity pixels, which are characteristic of abnormal air spaces in diseases like emphysema.
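    A compact, hedged sketch of the thresholding plus morphological closing scheme described above, applied to a single CT slice with an illustrative air threshold (the study's optimized thresholds and 3D 23x23x5 spherical element are not reproduced, and the input file is hypothetical):

        import numpy as np
        from scipy import ndimage as ndi

        ct = np.load("ct_slice_hu.npy")                 # hypothetical slice in Hounsfield units

        air_like = ct < -400                            # rough lung/air threshold (illustrative value)
        closed = ndi.binary_closing(air_like, structure=np.ones((5, 5)))   # smooth the mask
        lungs = ndi.binary_fill_holes(closed)           # keep vessels enclosed by lung parenchyma
        lung_area_px = int(lungs.sum())                 # per-slice area; summing slices gives lung volume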

  15. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features. This transforms the image for better image analysis and evaluation. An important benefit of segmentation is the identification of the region of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. The function is based on the gray values and variance of the image's pixels; pixel levels above the threshold are converted into intensity values between 0 and 1, while the remaining values are set to zero. The proposed enhanced Fast Scanning algorithm is evaluated on images of public and private transportation in Iraq. Evaluation is made by comparing the images produced by the proposed algorithm and by the standard Fast Scanning algorithm. The results show that the proposed algorithm is faster than the standard Fast Scanning algorithm.

  16. A human visual based binarization technique for histological images

    NASA Astrophysics Data System (ADS)

    Shreyas, Kamath K. M.; Rajendran, Rahul; Panetta, Karen; Agaian, Sos

    2017-05-01

    In the field of vision-based systems for object detection and classification, thresholding is a key pre-processing step. Thresholding is a well-known technique for image segmentation. Segmentation of medical images, such as Computed Axial Tomography (CAT), Magnetic Resonance Imaging (MRI), X-Ray, Phase Contrast Microscopy, and Histological images, presents problems like high variability in terms of the human anatomy and variation in modalities. Recent advances made in computer-aided diagnosis of histological images help facilitate detection and classification of diseases. Since most pathology diagnosis depends on the expertise and ability of the pathologist, there is clearly a need for an automated assessment system. Histological images are stained to a specific color to differentiate each component in the tissue. Segmentation and analysis of such images is problematic, as they present high variability in terms of color and cell clusters. This paper presents an adaptive thresholding technique that aims at segmenting cell structures from Haematoxylin and Eosin stained images. The thresholded result can further be used by pathologists to perform effective diagnosis. The effectiveness of the proposed method is analyzed by visually comparing the results to those of state-of-the-art thresholding methods such as Otsu, Niblack, Sauvola, Bernsen, and Wolf. Computer simulations demonstrate the efficiency of the proposed method in segmenting critical information.
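    The classical baselines named above are available in scikit-image, so a hedged comparison sketch needs only a few lines (Bernsen and Wolf are not in the library and are omitted; the file name and window parameters are assumptions):

        from skimage import io, img_as_float
        from skimage.filters import threshold_otsu, threshold_niblack, threshold_sauvola

        gray = img_as_float(io.imread("he_patch.png", as_gray=True))   # hypothetical stained image

        # Global Otsu versus two local (window-based) methods; for dark nuclei on a
        # bright background, invert the comparisons as needed.
        otsu_mask = gray > threshold_otsu(gray)
        niblack_mask = gray > threshold_niblack(gray, window_size=25, k=0.2)
        sauvola_mask = gray > threshold_sauvola(gray, window_size=25)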

  17. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT

    PubMed Central

    Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, this remains a challenging task due to low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method which particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature between all the overlapping images, using a combination of principal component analysis and the discrete wavelet transform. For evaluation, a comparison has been made between the results of some well-known techniques and the proposed method, and different metrics are implemented to evaluate the performance of the proposed algorithm. It is concluded that the presented pixel-based method, based on the integration of PCA and DWT, gives the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics. PMID:26089965
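    A hedged, simplified sketch of pixel-level PCA + DWT fusion of two co-registered, overlapping echo views: approximation sub-bands are blended with PCA-derived weights and detail sub-bands by maximum selection. This only illustrates the general idea, not the paper's exact scheme; the array names are assumptions:

        import numpy as np
        import pywt

        view_a = np.load("echo_view_a.npy").astype(float)   # hypothetical co-registered views
        view_b = np.load("echo_view_b.npy").astype(float)

        # PCA weights from the 2x2 covariance of the two flattened views.
        data = np.stack([view_a.ravel(), view_b.ravel()])
        eigvals, eigvecs = np.linalg.eigh(np.cov(data))
        w = np.abs(eigvecs[:, -1]); w = w / w.sum()          # principal-component weights

        cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(view_a, "db2")
        cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(view_b, "db2")

        fuse_max = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
        fused_cA = w[0] * cA_a + w[1] * cA_b                 # weighted approximation band
        fused = pywt.idwt2((fused_cA, (fuse_max(cH_a, cH_b), fuse_max(cV_a, cV_b), fuse_max(cD_a, cD_b))), "db2")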

  18. Combinational pixel-by-pixel and object-level classifying, segmenting, and agglomerating in performing quantitative image analysis that distinguishes between healthy non-cancerous and cancerous cell nuclei and delineates nuclear, cytoplasm, and stromal material objects from stained biological tissue materials

    DOEpatents

    Boucheron, Laura E

    2013-07-16

    Quantitative object and spatial arrangement-level analysis of tissue is detailed using expert (pathologist) input to guide the classification process. A two-step method is disclosed for imaging tissue, by classifying one or more biological materials, e.g. nuclei, cytoplasm, and stroma, in the tissue into one or more identified classes on a pixel-by-pixel basis, and segmenting the identified classes to agglomerate one or more sets of identified pixels into segmented regions. Typically, the one or more biological materials comprise nuclear material, cytoplasm material, and stromal material. The method further allows a user to mark up the image subsequent to the classification to re-classify said materials. The markup is performed via a graphical user interface to edit designated regions in the image.

  19. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhbardeh, Alireza; Jacobs, Michael A.; Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205

    2012-04-15

    Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment both synthetic and clinical data. In the synthetic data, the authors demonstrated the performance of the NLDR method compared with conventional linear DR methods. The NLDR approach enabled successful segmentation of the structures, whereas, in most cases, PCA and MDS failed. The NLDR approach was able to segment different breast tissue types with a high accuracy and the embedded image of the breast MRI data demonstrated fuzzy boundaries between the different types of breast tissue, i.e., fatty, glandular, and tissue with lesions (>86%). Conclusions: The proposed hybrid NLDR methods were able to segment clinical breast data with a high accuracy and construct an embedded image that visualized the contribution of different radiological parameters.
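    A small, hedged sketch of the dimensionality-reduction step described above: each pixel's multiparametric MRI intensities form a feature vector, and Isomap or LLE maps them to a low-dimensional "embedded image". Array names, shapes and neighbor counts are assumptions, and no preprocessing is shown:

        import numpy as np
        from sklearn.manifold import Isomap, LocallyLinearEmbedding

        # Hypothetical stack of co-registered MRI parameters, shape (n_params, H, W).
        params = np.load("registered_breast_mri_params.npy")
        n_params, H, W = params.shape
        X = params.reshape(n_params, -1).T                    # one feature vector per pixel

        embedding = Isomap(n_components=1, n_neighbors=10)    # or LocallyLinearEmbedding(...)
        embedded_image = embedding.fit_transform(X).reshape(H, W)   # single integrated image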

  20. Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images.

    PubMed

    Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina

    2016-05-01

    Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.

  1. The method of segmentation of leukocytes in information-measuring systems on the basis of light microscopy

    NASA Astrophysics Data System (ADS)

    Nikitaev, V. G.; Pronichev, A. N.; Polyakov, E. V.; Zaharenko, Yu V.

    2018-01-01

    The paper considers the problem of leukocyte segmentation in microscopic images of bone marrow smears for automated diagnosis of diseases of the blood system. A method is proposed to solve the problem of segmenting contacting (touching) leukocytes in images of bone marrow smears. The method is based on analysis of object structure using a separation-and-distance filter in combination with the watershed method and the distance transform.
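
    The separation filter itself is specific to the paper, but the distance-transform and watershed ingredients it is combined with are standard; a minimal sketch, assuming a binary foreground mask of touching leukocytes, might look as follows.

    ```python
    # Marker-controlled watershed on a distance transform: a common way to
    # split touching cells in a binary mask. Not the authors' exact filter,
    # only the watershed / distance-transform part. `binary` is a 2D bool array.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def split_touching_cells(binary, min_distance=7):
        dist = ndi.distance_transform_edt(binary)
        peaks = peak_local_max(dist, min_distance=min_distance, labels=binary)
        markers = np.zeros(dist.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        return watershed(-dist, markers, mask=binary)   # labeled cells
    ```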

  2. Cerebrovascular plaque segmentation using object class uncertainty snake in MR images

    NASA Astrophysics Data System (ADS)

    Das, Bipul; Saha, Punam K.; Wolf, Ronald; Song, Hee Kwon; Wright, Alexander C.; Wehrli, Felix W.

    2005-04-01

    Atherosclerotic cerebrovascular disease leads to the formation of lipid-laden plaques that can form emboli when ruptured, causing blockage of cerebral vessels. The clinical manifestation of this event sequence is stroke, a leading cause of disability and death. In vivo MR imaging provides detailed images of the vascular architecture of the carotid artery, making it suitable for analysis of morphological features. Assessing the status of the carotid arteries that supply blood to the brain is of primary interest to such investigations. Reproducible quantification of carotid artery dimensions in MR images is essential for plaque analysis. Manual segmentation, currently the only available method, is time-consuming and sensitive to inter- and intra-observer variability. This paper presents a deformable model for lumen and vessel wall segmentation of the carotid artery from MR images. The major challenges of carotid artery segmentation are (a) low signal-to-noise ratio, (b) background intensity inhomogeneity and (c) indistinct inner and/or outer vessel wall. We propose a new, effective object-class uncertainty based deformable model with additional features tailored toward this specific application. Object-class uncertainty optimally utilizes MR intensity characteristics of various anatomic entities, enabling the snake to avert leakage through fuzzy boundaries. To strengthen the deformable model for this application, further properties are attributed to it in the form of (1) fully arc-based deformation using a Gaussian model to maximally exploit vessel wall smoothness, (2) construction of a forbidden region for outer-wall segmentation to reduce interference from prominent lumen features and (3) arc-based landmarks for efficient user interaction. The algorithm has been tested on T1- and PD-weighted images. Measures of lumen area and vessel wall area are computed from segmented data of 10 patient MR images, and their accuracy and reproducibility are examined. These results correspond exceptionally well with manual segmentation completed by radiology experts. Reproducibility of the proposed method is estimated in both intra- and inter-operator studies.

  3. Improving left ventricular segmentation in four-dimensional flow MRI using intramodality image registration for cardiac blood flow analysis.

    PubMed

    Gupta, Vikas; Bustamante, Mariana; Fredriksson, Alexandru; Carlhäll, Carl-Johan; Ebbers, Tino

    2018-01-01

    Assessment of blood flow in the left ventricle using four-dimensional flow MRI requires accurate left ventricle segmentation that is often hampered by the low contrast between blood and the myocardium. The purpose of this work is to improve left-ventricular segmentation in four-dimensional flow MRI for reliable blood flow analysis. The left ventricle segmentations are first obtained using morphological cine-MRI with better in-plane resolution and contrast, and then aligned to four-dimensional flow MRI data. This alignment is, however, not trivial due to inter-slice misalignment errors caused by patient motion and respiratory drift during breath-hold based cine-MRI acquisition. A robust image registration based framework is proposed to mitigate such errors automatically. Data from 20 subjects, including healthy volunteers and patients, was used to evaluate its geometric accuracy and impact on blood flow analysis. High spatial correspondence was observed between manually and automatically aligned segmentations, and the improvements in alignment compared to uncorrected segmentations were significant (P < 0.01). Blood flow analysis from manual and automatically corrected segmentations did not differ significantly (P > 0.05). Our results demonstrate the efficacy of the proposed approach in improving left-ventricular segmentation in four-dimensional flow MRI, and its potential for reliable blood flow analysis. Magn Reson Med 79:554-560, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
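
    As a rough illustration of intensity-based inter-slice alignment (a stand-in for the paper's full registration framework), the sketch below estimates and corrects a purely translational in-plane misalignment between two corresponding slices; the translation-only assumption and parameter values are ours.

    ```python
    # Estimate the in-plane shift between a cine-MRI slice and the matching
    # 4D flow magnitude slice, then resample the moving slice accordingly.
    from scipy import ndimage as ndi
    from skimage.registration import phase_cross_correlation

    def align_slice(reference, moving):
        shift, _, _ = phase_cross_correlation(reference, moving,
                                              upsample_factor=10)
        aligned = ndi.shift(moving, shift, order=1)   # subpixel translation
        return aligned, shift
    ```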

  4. Hybrid region merging method for segmentation of high-resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi; Wang, Jiangeng; Wang, Zuo

    2014-12-01

    Image segmentation remains a challenging problem for object-based image analysis. In this paper, a hybrid region merging (HRM) method is proposed to segment high-resolution remote sensing images. HRM integrates the advantages of global-oriented and local-oriented region merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region, which provides an elegant way to avoid the problem of starting point assignment and to enhance the optimization ability for local-oriented region merging. During the region growing procedure, the merging iterations are constrained within the local vicinity, so that the segmentation is accelerated and can reflect the local context, as compared with the global-oriented method. A set of high-resolution remote sensing images is used to test the effectiveness of the HRM method, and three region-based remote sensing image segmentation methods are adopted for comparison, including the hierarchical stepwise optimization (HSWO) method, the local-mutual best region merging (LMM) method, and the multiresolution segmentation (MRS) method embedded in eCognition Developer software. Both the supervised evaluation and visual assessment show that HRM performs better than HSWO and LMM by combining both their advantages. The segmentation results of HRM and MRS are visually comparable, but HRM can describe objects as single regions better than MRS, and the supervised and unsupervised evaluation results further prove the superiority of HRM.

  5. Patellar segmentation from 3D magnetic resonance images using guided recursive ray-tracing for edge pattern detection

    NASA Astrophysics Data System (ADS)

    Cheng, Ruida; Jackson, Jennifer N.; McCreedy, Evan S.; Gandler, William; Eijkenboom, J. J. F. A.; van Middelkoop, M.; McAuliffe, Matthew J.; Sheehan, Frances T.

    2016-03-01

    The paper presents an automatic segmentation methodology for the patellar bone, based on 3D gradient recalled echo and gradient recalled echo with fat suppression magnetic resonance images. Constricted search-space outlines are incorporated into recursive ray-tracing to segment the outer cortical bone. A statistical analysis based on the dependence of information in adjacent slices is used to limit the search in each image to between an outer and inner search region. A section-based recursive ray-tracing mechanism is used to skip inner noise regions and detect the edge boundary. The proposed method achieves higher segmentation accuracy (0.23 mm) than the current state-of-the-art methods, with an average Dice similarity coefficient of 96.0% (SD 1.3%) agreement between the auto-segmentation and ground-truth surfaces.

  6. Automatic MRI 2D brain segmentation using graph searching technique.

    PubMed

    Pedoia, Valentina; Binaghi, Elisabetta

    2013-09-01

    Accurate and efficient segmentation of the whole brain in magnetic resonance (MR) images is a key task in many neuroscience and medical studies, either because the whole brain is the final anatomical structure of interest or because the automatic extraction facilitates further analysis. The problem of segmenting brain MRI images has been extensively addressed by many researchers. Despite the relevant achievements obtained, automated segmentation of brain MRI imagery is still a challenging problem whose solution has to cope with critical aspects such as anatomical variability and pathological deformation. In the present paper, we describe and experimentally evaluate a method for segmenting the brain from MRI images based on two-dimensional graph-searching principles for border detection. The segmentation of the whole brain over the entire volume is accomplished slice by slice, automatically detecting frames including the eyes. The method is fully automatic and easily reproducible, computing the internal main parameters directly from the image data. The segmentation procedure is conceived as a tool of general applicability, although design requirements are especially commensurate with the accuracy required in clinical tasks such as surgical planning and post-surgical assessment. Several experiments were performed to assess the performance of the algorithm on a varied set of MRI images, obtaining good results in terms of accuracy and stability. Copyright © 2012 John Wiley & Sons, Ltd.
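
    A minimal sketch of border detection by two-dimensional graph searching (not the authors' exact formulation): a minimum-cost path between two boundary points is computed over a cost image that is cheap along strong edges.

    ```python
    # Minimum-cost path along edges between two chosen boundary points.
    import numpy as np
    from skimage.filters import sobel
    from skimage.graph import route_through_array

    def edge_path(image, start_rc, end_rc):
        grad = sobel(image.astype(float))
        cost = 1.0 / (grad / grad.max() + 1e-3)    # cheap to travel along edges
        path, total_cost = route_through_array(cost, start_rc, end_rc,
                                               fully_connected=True,
                                               geometric=True)
        return np.array(path), total_cost
    ```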

  7. A Semiautomatic Method for Multiple Sclerosis Lesion Segmentation on Dual-Echo MR Imaging: Application in a Multicenter Context.

    PubMed

    Storelli, L; Pagani, E; Rocca, M A; Horsfield, M A; Gallo, A; Bisecco, A; Battaglini, M; De Stefano, N; Vrenken, H; Thomas, D L; Mancini, L; Ropele, S; Enzinger, C; Preziosa, P; Filippi, M

    2016-07-21

    The automatic segmentation of MS lesions could reduce the time required for image processing, together with inter- and intraoperator variability, for research and clinical trials. A multicenter validation of a proposed semiautomatic method for hyperintense MS lesion segmentation on dual-echo MR imaging is presented. The classification technique used is based on a region-growing approach starting from manual lesion identification by an expert observer, with a final segmentation-refinement step. The method was validated in a cohort of 52 patients with relapsing-remitting MS, with dual-echo images acquired in 6 different European centers. We found a mathematic expression that made the optimization of the method independent of the need for a training dataset. The automatic segmentation was in good agreement with the manual segmentation (Dice similarity coefficient = 0.62 and root mean square error = 2 mL). Assessment of the segmentation errors showed no significant differences in algorithm performance between the different MR scanner manufacturers (P > .05). The method proved to be robust, and no center-specific training of the algorithm was required, offering the possibility for application in a clinical setting. Adoption of the method should lead to improved reliability and less operator time required for image analysis in research and clinical trials in MS. © 2016 American Society of Neuroradiology.
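
    The region-growing core of such a semiautomatic scheme can be sketched as a tolerance-based flood fill from the expert's seed click; this omits the paper's refinement step, and the image and parameter names are assumptions.

    ```python
    # Grow a lesion mask from a manually identified seed pixel by including
    # connected pixels whose intensity lies within `tolerance` of the seed.
    from skimage.segmentation import flood

    def grow_lesion(echo_image, seed_rc, tolerance=50):
        # seed_rc: (row, col) of the observer's click inside the lesion
        return flood(echo_image, seed_rc, tolerance=tolerance)   # boolean mask
    ```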

  8. Myocardial Iron Loading Assessment by Automatic Left Ventricle Segmentation with Morphological Operations and Geodesic Active Contour on T2* images

    NASA Astrophysics Data System (ADS)

    Luo, Yun-Gang; Ko, Jacky Kl; Shi, Lin; Guan, Yuefeng; Li, Linong; Qin, Jing; Heng, Pheng-Ann; Chu, Winnie Cw; Wang, Defeng

    2015-07-01

    Myocardial iron loading in thalassemia patients can be identified using T2* magnetic resonance imaging (MRI). To quantitatively assess cardiac iron loading, we proposed an effective algorithm to segment aligned free induction decay sequential myocardium images based on morphological operations and the geodesic active contour (GAC). Nine patients with thalassemia major were recruited (10 male and 16 female) to undergo a thoracic MRI scan in the short-axis view. Free induction decay images were registered for T2* mapping. The GAC was utilized to segment the aligned MR images with a robust initialization. Segmented myocardium regions were divided into sectors for a region-based quantification of cardiac iron loading. Our proposed automatic segmentation approach achieved a true positive rate of 84.6% and a false positive rate of 53.8%. The area difference between manual and automatic segmentation was 25.5% after 1000 iterations. Results from the T2* analysis indicated that regions with T2* values lower than 20 ms suffered from heavy iron loading in thalassemia major patients. The proposed method benefited from the abundant edge information of the free induction decay sequential MRI. Experimental results demonstrated that the proposed method is feasible for myocardium segmentation and clinically applicable for measuring myocardial iron loading.
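
    The T2* quantification underlying the iron-loading assessment can be sketched as a per-pixel log-linear fit of the multi-echo signal decay; only the mapping step is shown (the morphological/GAC segmentation is not reproduced), and the array shapes are assumptions.

    ```python
    # Per-pixel T2* from multi-echo gradient-echo data via a log-linear fit
    # of S(TE) = S0 * exp(-TE / T2*). `echoes` has shape (n_echoes, H, W),
    # `tes` holds the echo times in ms.
    import numpy as np

    def t2star_map(echoes, tes, eps=1e-6):
        tes = np.asarray(tes, dtype=float)
        logs = np.log(np.clip(echoes, eps, None)).reshape(len(tes), -1)
        A = np.vstack([tes, np.ones_like(tes)]).T        # (n_echoes, 2)
        coef, *_ = np.linalg.lstsq(A, logs, rcond=None)  # slope, intercept per pixel
        slope = coef[0].reshape(echoes.shape[1:])
        return -1.0 / np.minimum(slope, -eps)            # ms; clamps non-decaying fits

    # e.g., flag pixels with T2* below 20 ms as heavily iron loaded:
    # heavy = t2star_map(echoes, tes) < 20
    ```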

  9. Quantitative Analysis of Mouse Retinal Layers Using Automated Segmentation of Spectral Domain Optical Coherence Tomography Images

    PubMed Central

    Dysli, Chantal; Enzmann, Volker; Sznitman, Raphael; Zinkernagel, Martin S.

    2015-01-01

    Purpose Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with data from manual segmentation in mice using the Spectralis OCT. Methods Spectral domain OCT images from 55 mice from three different mouse strains were analyzed in total. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b+Prph2Rd2/J mice were automatically segmented using three commercially available automated retinal segmentation algorithms and compared to manual segmentation. Results Fully automated segmentation performed well in mice and showed coefficients of variation (CV) of below 5% for the total retinal volume. However, all three automated segmentation algorithms yielded much thicker total retinal thickness values compared to manual segmentation data (P < 0.0001) due to segmentation errors in the basement membrane. Conclusions Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigmentation epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. Translational Relevance The introduction of spectral domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important to study layers of interest under various pathological conditions. PMID:26336634

  10. Segmentation of dermatoscopic images by frequency domain filtering and k-means clustering algorithms.

    PubMed

    Rajab, Maher I

    2011-11-01

    Since the introduction of epiluminescence microscopy (ELM), image analysis tools have been extended to the field of dermatology, in an attempt to algorithmically reproduce clinical evaluation. Accurate image segmentation of skin lesions is one of the key steps for useful, early and non-invasive diagnosis of cutaneous melanomas. This paper proposes two image segmentation algorithms based on frequency domain processing and k-means clustering/fuzzy k-means clustering. The two methods are capable of segmenting and extracting the true border that reveals the global structure irregularity (indentations and protrusions), which may suggest excessive cell growth or regression of a melanoma. As a pre-processing step, Fourier low-pass filtering is applied to reduce the surrounding noise in a skin lesion image. A quantitative comparison of the techniques is enabled by the use of synthetic skin lesion images that model lesions covered with hair to which Gaussian noise is added. The proposed techniques are also compared with an established optimal-based thresholding skin-segmentation method. It is demonstrated that for lesions with a range of different border irregularity properties, the k-means clustering and fuzzy k-means clustering segmentation methods provide the best performance over a range of signal-to-noise ratios. The proposed segmentation techniques are also demonstrated to have similar performance when tested on real skin lesions representing high-resolution ELM images. This study suggests that the segmentation results obtained using a combination of low-pass frequency filtering and k-means or fuzzy k-means clustering are superior to the result that would be obtained by using k-means or fuzzy k-means clustering segmentation methods alone. © 2011 John Wiley & Sons A/S.
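
    A minimal sketch of the two ingredients named above, an ideal Fourier low-pass filter followed by k-means clustering of the smoothed intensities; the cutoff and number of clusters are illustrative, and the fuzzy k-means variant and border extraction are omitted.

    ```python
    # Fourier low-pass smoothing of a grayscale lesion image, then k-means
    # clustering of pixel intensities into lesion vs. surrounding skin.
    import numpy as np
    from sklearn.cluster import KMeans

    def lowpass(gray, cutoff=0.08):
        F = np.fft.fftshift(np.fft.fft2(gray))
        h, w = gray.shape
        yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
        mask = (yy / h) ** 2 + (xx / w) ** 2 <= cutoff ** 2   # circular low-pass
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

    def kmeans_segment(gray, k=2):
        smoothed = lowpass(gray.astype(float))
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
            smoothed.reshape(-1, 1))
        return labels.reshape(gray.shape)
    ```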

  11. Nucleus and cytoplasm segmentation in microscopic images using K-means clustering and region growing.

    PubMed

    Sarrafzadeh, Omid; Dehnavi, Alireza Mehri

    2015-01-01

    Segmentation of leukocytes acts as the foundation for all automated image-based hematological disease recognition systems. Most of the time, hematologists are interested in evaluation of white blood cells only. Digital image processing techniques can help them in their analysis and diagnosis. The main objective of this paper is to detect leukocytes from a blood smear microscopic image and segment them into their two dominant elements, nucleus and cytoplasm. The segmentation is conducted using two stages of applying K-means clustering. First, the nuclei are segmented using K-means clustering. Then, a proposed method based on region growing is applied to separate the connected nuclei. Next, the nuclei are subtracted from the original image. Finally, the cytoplasm is segmented using the second stage of K-means clustering. The results indicate that the proposed method is able to extract the nucleus and cytoplasm regions accurately and works well even though there is no significant contrast between the components in the image. In this paper, a method based on K-means clustering and region growing is proposed in order to detect leukocytes from a blood smear microscopic image and segment its components, the nucleus and the cytoplasm. As the region-growing step of the algorithm relies on edge information, it will not be able to separate connected nuclei accurately where edges are poor, and it requires at least a weak edge to exist between the nuclei. The nucleus and cytoplasm segments of a leukocyte can be used for feature extraction and classification which leads to automated leukemia detection.

  12. Nucleus and cytoplasm segmentation in microscopic images using K-means clustering and region growing

    PubMed Central

    Sarrafzadeh, Omid; Dehnavi, Alireza Mehri

    2015-01-01

    Background: Segmentation of leukocytes acts as the foundation for all automated image-based hematological disease recognition systems. Most of the time, hematologists are interested in evaluation of white blood cells only. Digital image processing techniques can help them in their analysis and diagnosis. Materials and Methods: The main objective of this paper is to detect leukocytes from a blood smear microscopic image and segment them into their two dominant elements, nucleus and cytoplasm. The segmentation is conducted using two stages of applying K-means clustering. First, the nuclei are segmented using K-means clustering. Then, a proposed method based on region growing is applied to separate the connected nuclei. Next, the nuclei are subtracted from the original image. Finally, the cytoplasm is segmented using the second stage of K-means clustering. Results: The results indicate that the proposed method is able to extract the nucleus and cytoplasm regions accurately and works well even though there is no significant contrast between the components in the image. Conclusions: In this paper, a method based on K-means clustering and region growing is proposed in order to detect leukocytes from a blood smear microscopic image and segment its components, the nucleus and the cytoplasm. As the region-growing step of the algorithm relies on edge information, it will not be able to separate connected nuclei accurately where edges are poor, and it requires at least a weak edge to exist between the nuclei. The nucleus and cytoplasm segments of a leukocyte can be used for feature extraction and classification which leads to automated leukemia detection. PMID:26605213

  13. A JOINT FRAMEWORK FOR 4D SEGMENTATION AND ESTIMATION OF SMOOTH TEMPORAL APPEARANCE CHANGES.

    PubMed

    Gao, Yang; Prastawa, Marcel; Styner, Martin; Piven, Joseph; Gerig, Guido

    2014-04-01

    Medical imaging studies increasingly use longitudinal images of individual subjects in order to follow up changes due to development, degeneration, disease progression or efficacy of therapeutic intervention. Repeated image data of individuals are highly correlated, and the strong causality of information over time leads to the development of procedures for joint segmentation of the series of scans, called 4D segmentation. A main aim was improved consistency of quantitative analysis, most often solved via patient-specific atlases. Challenging open problems are contrast changes and occurrence of subclasses within tissue as observed in multimodal MRI of infant development, neurodegeneration and disease. This paper proposes a new 4D segmentation framework that enforces continuous dynamic changes of tissue contrast patterns over time as observed in such data. Moreover, our model includes the capability to segment different contrast patterns within a specific tissue class, for example as seen in myelinated and unmyelinated white matter regions in early brain development. Proof of concept is shown with validation on synthetic image data and with 4D segmentation of longitudinal, multimodal pediatric MRI taken at 6, 12 and 24 months of age, but the methodology is generic w.r.t. different application domains using serial imaging.

  14. Computer-aided pulmonary image analysis in small animal models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Ziyue; Mansoor, Awais; Mollura, Daniel J.

    Purpose: To develop an automated pulmonary image analysis framework for infectious lung diseases in small animal models. Methods: The authors describe a novel pathological lung and airway segmentation method for small animals. The proposed framework includes identification of abnormal imaging patterns pertaining to infectious lung diseases. First, the authors’ system estimates an expected lung volume by utilizing a regression function between total lung capacity and approximated rib cage volume. A significant difference between the expected lung volume and the initial lung segmentation indicates the presence of severe pathology, and invokes a machine learning based abnormal imaging pattern detection system next. The final stage of the proposed framework is the automatic extraction of the airway tree, for which new affinity relationships within the fuzzy connectedness image segmentation framework are proposed by combining Hessian and gray-scale morphological reconstruction filters. Results: 133 CT scans were collected from four different studies encompassing a wide spectrum of pulmonary abnormalities pertaining to two commonly used small animal models (ferret and rabbit). Sensitivity and specificity were greater than 90% for pathological lung segmentation (average Dice similarity coefficient > 0.9). While qualitative visual assessments of airway tree extraction were performed by the participating expert radiologists, for quantitative evaluation the authors validated the proposed airway extraction method by using the publicly available EXACT’09 data set. Conclusions: The authors developed a comprehensive computer-aided pulmonary image analysis framework for preclinical research applications. The proposed framework consists of automatic pathological lung segmentation and accurate airway tree extraction. The framework has high sensitivity and specificity; therefore, it can contribute advances in preclinical research in pulmonary diseases.
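
    The Hessian-based enhancement mentioned above can be illustrated with an off-the-shelf tubularness (Frangi) filter applied to a single CT slice; this sketches only that one ingredient, and the assumption that airway lumens appear as dark ridges is ours, not the authors'.

    ```python
    # Enhance tubular (airway-like) structures in a 2D CT slice with the
    # Frangi vesselness filter; black_ridges=True targets dark lumens.
    from skimage.filters import frangi

    def airway_enhance(ct_slice, sigmas=(1, 2, 3, 4)):
        return frangi(ct_slice.astype(float), sigmas=sigmas, black_ridges=True)
    ```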

  15. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.

  16. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  17. Northeast Artificial Intelligence Consortium Annual Report for 1987. Volume 4. Research in Automated Photointerpretation

    DTIC Science & Technology

    1989-03-01

    [OCR fragments; recoverable figure captions: "Automated Photointerpretation Testbed" and "An Initial Segmentation of an Image".] Markov random field (MRF) theory provides a powerful alternative texture model and has resulted in intensive research activity in MRF model-based texture analysis ... interpretation process. Additional, and perhaps more powerful, features have to be incorporated into the image segmentation procedure. Object detection ...

  18. Local multifractal detrended fluctuation analysis for non-stationary image's texture segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Zong-shou; Li, Jin-wei

    2014-12-01

    Feature extraction plays a very important role in image processing and pattern recognition. As a powerful tool, multifractal theory has recently been employed for this job. However, traditional multifractal methods are designed to analyze objects with a stationary measure and cannot handle a non-stationary measure. The work of this paper is twofold. First, the definition of a stationary image and 2D image feature detection methods are proposed. Second, a novel feature extraction scheme for non-stationary images is proposed using local multifractal detrended fluctuation analysis (Local MF-DFA), which is based on 2D MF-DFA. A set of new multifractal descriptors, called the local generalized Hurst exponent (Lhq), is defined to characterize the local scaling properties of textures. To test the proposed method, both the novel texture descriptor and two other multifractal indicators, namely, local Hölder coefficients based on a capacity measure and the multifractal dimension Dq based on the multifractal differential box-counting (MDBC) method, are compared in segmentation experiments. The first experiment indicates that the segmentation results obtained by the proposed Lhq are slightly better than those of the MDBC-based Dq and significantly superior to those of the local Hölder coefficients. The results of the second experiment demonstrate that the Lhq can distinguish texture images more effectively and provide significantly more robust segmentations than the MDBC-based Dq.
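
    For orientation, a plain (global, monofractal) 2D detrended fluctuation analysis can be sketched as follows: integrate the image, detrend square patches with a least-squares plane, and fit the RMS fluctuation against scale on log-log axes; the local, q-dependent multifractal extension of the paper is not reproduced, and the scale choices are assumptions.

    ```python
    # Rough 2D DFA sketch: the slope of log F(s) vs log s is a Hurst-type
    # exponent. Assumes the image is larger than the largest scale.
    import numpy as np

    def dfa2d_hurst(image, scales=(8, 16, 32, 64)):
        surface = np.cumsum(np.cumsum(image - image.mean(), axis=0), axis=1)
        fluct = []
        for s in scales:
            ny, nx = surface.shape[0] // s, surface.shape[1] // s
            yy, xx = np.mgrid[0:s, 0:s]
            A = np.column_stack([yy.ravel(), xx.ravel(), np.ones(s * s)])
            residuals = []
            for i in range(ny):
                for j in range(nx):
                    patch = surface[i*s:(i+1)*s, j*s:(j+1)*s].ravel()
                    coef, *_ = np.linalg.lstsq(A, patch, rcond=None)  # plane fit
                    residuals.append(np.mean((patch - A @ coef) ** 2))
            fluct.append(np.sqrt(np.mean(residuals)))
        hurst, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
        return hurst
    ```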

  19. Automatic lesion boundary detection in dermoscopy images using gradient vector flow snakes

    PubMed Central

    Erkol, Bulent; Moss, Randy H.; Stanley, R. Joe; Stoecker, William V.; Hvatum, Erik

    2011-01-01

    Background Malignant melanoma has a good prognosis if treated early. Dermoscopy images of pigmented lesions are most commonly taken at × 10 magnification under lighting at a low angle of incidence while the skin is immersed in oil under a glass plate. Accurate skin lesion segmentation from the background skin is important because some of the features anticipated to be used for diagnosis deal with shape of the lesion and others deal with the color of the lesion compared with the color of the surrounding skin. Methods In this research, gradient vector flow (GVF) snakes are investigated to find the border of skin lesions in dermoscopy images. An automatic initialization method is introduced to make the skin lesion border determination process fully automated. Results Skin lesion segmentation results are presented for 70 benign and 30 melanoma skin lesion images for the GVF-based method and a color histogram analysis technique. The average errors obtained by the GVF-based method are lower for both the benign and melanoma image sets than for the color histogram analysis technique based on comparison with manually segmented lesions determined by a dermatologist. Conclusions The experimental results for the GVF-based method demonstrate promise as an automated technique for skin lesion segmentation in dermoscopy images. PMID:15691255
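
    A minimal sketch of snake-based border refinement using the classic parametric active contour available in scikit-image, initialized on a circle around the lesion; note this is not the gradient vector flow variant used in the paper, and the parameters are illustrative.

    ```python
    # Refine a lesion border with a classic snake on a smoothed image.
    import numpy as np
    from skimage.filters import gaussian
    from skimage.segmentation import active_contour

    def refine_border(gray, center_rc, radius, n_points=200):
        t = np.linspace(0, 2 * np.pi, n_points)
        init = np.column_stack([center_rc[0] + radius * np.sin(t),
                                center_rc[1] + radius * np.cos(t)])  # (row, col)
        return active_contour(gaussian(gray, 3, preserve_range=True), init,
                              alpha=0.015, beta=10.0, gamma=0.001)
    ```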

  20. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of a shot. The music video is first segmented into shots using an illumination-invariant chromaticity histogram in the independent component (IC) analysis feature space. Then we present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show the framework is effective and has good performance.
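
    The shot-boundary idea can be sketched with per-frame chromaticity histograms compared between consecutive frames; the ICA feature space and the image-complexity keyframe metric of the paper are not reproduced, and the threshold is illustrative.

    ```python
    # Declare a shot boundary where the L1 distance between consecutive
    # illumination-normalized (r, g) chromaticity histograms is large.
    import numpy as np

    def chroma_hist(frame, bins=16, eps=1e-6):
        rgb = frame.astype(float)
        s = rgb.sum(axis=2) + eps
        r, g = rgb[..., 0] / s, rgb[..., 1] / s
        hist, _, _ = np.histogram2d(r.ravel(), g.ravel(), bins=bins,
                                    range=[[0, 1], [0, 1]])
        return hist / hist.sum()

    def shot_boundaries(frames, threshold=0.35):
        hists = [chroma_hist(f) for f in frames]
        return [i for i in range(1, len(hists))
                if 0.5 * np.abs(hists[i] - hists[i - 1]).sum() > threshold]
    ```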

  1. Magnetic resonance T1 gradient-echo imaging in hepatolithiasis.

    PubMed

    Safar, F; Kamura, T; Okamuto, K; Sasai, K; Gejyo, F

    2005-01-01

    We examined the role of magnetic resonance T1-weighted gradient-echo (MRT1-GE) imaging in hepatolithiasis. MRT1-GE, precontrast computed tomography (CT), and magnetic resonance cholangiopancreatography (MRCP) of 10 patients with hepatolithiasis were compared for their diagnostic accuracies in the detection and localization of intrahepatic calculi. The diagnosis of hepatolithiasis was confirmed by surgery. For localization of the stone, we divided the bile ducts into six areas: right and left hepatic ducts and bile ducts of the lateral, medial, right anterior, and right posterior segments of the liver. Chemical analysis of the stones was performed in eight patients. The total number of segments proved by surgery to contain stones was 18. Although not significantly different, the sensitivity of MRT1-GE was 77.8% (14 of 18 segments), higher than that of MRCP (66.7%, 12 of 18 segments) and that of CT (50%, nine of 18 segments). The sensitivity of magnetic resonance imaging (MRCP + MRT1) was significantly higher than that of CT (p < 0.01). Multiple logistic regression analysis showed that the result of surgery was significantly affected only by the result of magnetic resonance imaging. On MRT1-GE, all the depicted stones appeared as high-intensity signal areas within the low-intensity bile duct irrespective of their chemical composition. MRT1-GE imaging provides complementary information concerning hepatolithiasis.

  2. NeuroSeg: automated cell detection and segmentation for in vivo two-photon Ca2+ imaging data.

    PubMed

    Guan, Jiangheng; Li, Jingcheng; Liang, Shanshan; Li, Ruijie; Li, Xingyi; Shi, Xiaozhe; Huang, Ciyu; Zhang, Jianxiong; Pan, Junxia; Jia, Hongbo; Zhang, Le; Chen, Xiaowei; Liao, Xiang

    2018-01-01

    Two-photon Ca2+ imaging has become a popular approach for monitoring neuronal population activity with cellular or subcellular resolution in vivo. This approach allows for the recording of hundreds to thousands of neurons per animal and thus leads to a large amount of data to be processed. In particular, manually drawing regions of interest is the most time-consuming aspect of data analysis. However, the development of automated image analysis pipelines, which will be essential for dealing with the likely future deluge of imaging data, remains a major challenge. To address this issue, we developed NeuroSeg, an open-source MATLAB program that can facilitate the accurate and efficient segmentation of neurons in two-photon Ca2+ imaging data. We proposed an approach using a generalized Laplacian of Gaussian filter to detect cells and weighting-based segmentation to separate individual cells from the background. We tested this approach on an in vivo two-photon Ca2+ imaging dataset obtained from mouse cortical neurons with differently sized view fields. We show that this approach exhibits superior performance for cell detection and segmentation compared with the existing published tools. In addition, we integrated the previously reported, activity-based segmentation into our approach and found that this combined method was even more promising. The NeuroSeg software, including source code and graphical user interface, is freely available and will be a useful tool for in vivo brain activity mapping.
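
    The cell-detection step can be illustrated with a Laplacian-of-Gaussian response followed by local-maximum picking; this is a simplified stand-in for NeuroSeg's detector (the weighting-based segmentation is omitted), and the sigma and threshold values are assumptions.

    ```python
    # Detect bright, roughly circular somata in a time-averaged two-photon
    # image: bright blobs give strong negative LoG responses.
    from scipy.ndimage import gaussian_laplace
    from skimage.feature import peak_local_max

    def detect_cells(mean_image, sigma=4.0, min_distance=6, rel_threshold=0.2):
        response = -gaussian_laplace(mean_image.astype(float), sigma)
        response -= response.min()
        return peak_local_max(response, min_distance=min_distance,
                              threshold_rel=rel_threshold)  # (N, 2) soma centers
    ```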

  3. Range image segmentation using Zernike moment-based generalized edge detector

    NASA Technical Reports Server (NTRS)

    Ghosal, S.; Mehrotra, R.

    1992-01-01

    The authors proposed a novel Zernike moment-based generalized step edge detection method which can be used for segmenting range and intensity images. A generalized step edge detector is developed to identify different kinds of edges in range images. These edge maps are thinned and linked to provide the final segmentation. A generalized edge is modeled in terms of five parameters: orientation, two slopes, one step jump at the location of the edge, and the background gray level. Two complex and two real Zernike moment-based masks are required to determine all these parameters of the edge model. Theoretical noise analysis is performed to show that these operators are quite noise tolerant. Experimental results are included to demonstrate the edge-based segmentation technique.

  4. Using simulated fluorescence cell micrographs for the evaluation of cell image segmentation algorithms.

    PubMed

    Wiesmann, Veit; Bergler, Matthias; Palmisano, Ralf; Prinzen, Martin; Franz, Daniela; Wittenberg, Thomas

    2017-03-18

    Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability which influence the validation results of automated cell segmentation pipelines. We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated twofold: (1) An expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations. (2) An automated segmentation pipeline on the simulated fluorescent cell micrographs reproduces segmentation performances of that pipeline on real fluorescent cell micrographs. The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data is suited to evaluate image segmentation pipelines more efficiently and reproducibly than it is possible on manually annotated real micrographs.

  5. Multi-object segmentation using coupled nonparametric shape and relative pose priors

    NASA Astrophysics Data System (ADS)

    Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep

    2009-02-01

    We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.

  6. Semi-automated Neuron Boundary Detection and Nonbranching Process Segmentation in Electron Microscopy Images

    PubMed Central

    Jurrus, Elizabeth; Watanabe, Shigeki; Giuly, Richard J.; Paiva, Antonio R. C.; Ellisman, Mark H.; Jorgensen, Erik M.; Tasdizen, Tolga

    2013-01-01

    Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of this data makes human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images and visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes. PMID:22644867

  7. Semi-Automated Neuron Boundary Detection and Nonbranching Process Segmentation in Electron Microscopy Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jurrus, Elizabeth R.; Watanabe, Shigeki; Giuly, Richard J.

    2013-01-01

    Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of this data makes human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images and visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes.

  8. Image segmentation and dynamic lineage analysis in single-cell fluorescence microscopy.

    PubMed

    Wang, Quanli; Niemi, Jarad; Tan, Chee-Meng; You, Lingchong; West, Mike

    2010-01-01

    An increasingly common component of studies in synthetic and systems biology is analysis of dynamics of gene expression at the single-cell level, a context that is heavily dependent on the use of time-lapse movies. Extracting quantitative data on the single-cell temporal dynamics from such movies remains a major challenge. Here, we describe novel methods for automating key steps in the analysis of single-cell fluorescent images, namely segmentation and lineage reconstruction, to recognize and track individual cells over time. The automated analysis iteratively combines a set of extended morphological methods for segmentation, and uses a neighborhood-based scoring method for frame-to-frame lineage linking. Our studies with bacteria, budding yeast and human cells demonstrate the portability and usability of these methods, whether using phase, bright field or fluorescent images. These examples also demonstrate the utility of our integrated approach in facilitating analyses of engineered and natural cellular networks in diverse settings. The automated methods are implemented in freely available, open-source software.
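
    Frame-to-frame lineage linking can be sketched as an assignment problem on cell centroids; the neighborhood-based scoring of the paper is replaced here by plain Euclidean distance, so this is an illustrative simplification rather than the authors' method.

    ```python
    # Match cells between consecutive frames by minimizing total centroid
    # distance (Hungarian algorithm), dropping implausibly long links.
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def link_frames(centroids_t, centroids_t1, max_dist=20.0):
        cost = cdist(centroids_t, centroids_t1)
        rows, cols = linear_sum_assignment(cost)
        # unmatched or distant cells correspond to divisions, deaths or entries
        return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]
    ```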

  9. Prostate segmentation in MR images using discriminant boundary features.

    PubMed

    Yang, Meijuan; Li, Xuelong; Turkbey, Baris; Choyke, Peter L; Yan, Pingkun

    2013-02-01

    Segmentation of the prostate in magnetic resonance images is increasingly needed to assist diagnosis and surgical planning of prostate carcinoma. Due to the natural variability of anatomical structures, statistical shape models have been widely applied in medical image segmentation. Robust and distinctive local features are critical for a statistical shape model to achieve accurate segmentation results. The scale-invariant feature transform (SIFT) has been employed to capture the information of the local patch surrounding the boundary. However, when SIFT features are used for segmentation, the scale and variance are not specified with respect to the location of the point of interest. To deal with this, discriminant analysis from machine learning is introduced to measure the distinctiveness of the learned SIFT features for each landmark directly and to make the scale and variance adaptive to the locations. As the gray values and gradients vary significantly over the boundary of the prostate, separate appearance descriptors are built for each landmark and then optimized. After that, a two-stage coarse-to-fine segmentation approach is carried out by incorporating the local shape variations. Finally, experiments on prostate segmentation from MR images are conducted to verify the efficiency of the proposed algorithms.

  10. Computer-Aided Diagnosis of Anterior Segment Eye Abnormalities using Visible Wavelength Image Analysis Based Machine Learning.

    PubMed

    S V, Mahesh Kumar; R, Gunasundari

    2018-06-02

    Eye disease is a major health problem among elderly people. Cataract and corneal arcus are the major abnormalities that exist in the anterior segment eye region of aged people. Hence, computer-aided diagnosis of anterior segment eye abnormalities will be helpful for mass screening and grading in ophthalmology. In this paper, we propose a multiclass computer-aided diagnosis (CAD) system using visible wavelength (VW) eye images to diagnose anterior segment eye abnormalities. In the proposed method, the input VW eye images are pre-processed for specular reflection removal and the iris circle region is segmented using a circular Hough transform (CHT)-based approach. First-order statistical features and wavelet-based features are extracted from the segmented iris circle and used for classification. A support vector machine (SVM) trained with the sequential minimal optimization (SMO) algorithm was used for the classification. In experiments, we used 228 VW eye images belonging to three different classes of anterior segment eye abnormalities. The proposed method achieved a predictive accuracy of 96.96% with 97% sensitivity and 99% specificity. The experimental results show that the proposed method has significant potential for use in clinical applications.
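
    The iris localization step can be sketched with an edge map plus a circular Hough transform; the radius range and Canny settings are illustrative, and the feature extraction and SVM classification stages are omitted.

    ```python
    # Locate the most salient circle (iris/limbus boundary) in a grayscale
    # anterior-segment eye image.
    import numpy as np
    from skimage.feature import canny
    from skimage.transform import hough_circle, hough_circle_peaks

    def find_iris(gray, radii=np.arange(60, 140, 2)):
        edges = canny(gray, sigma=2.0)
        accum = hough_circle(edges, radii)
        _, cx, cy, r = hough_circle_peaks(accum, radii, total_num_peaks=1)
        return int(cx[0]), int(cy[0]), int(r[0])   # center (x, y) and radius
    ```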

  11. Approach for scene reconstruction from the analysis of a triplet of still images

    NASA Astrophysics Data System (ADS)

    Lechat, Patrick; Le Mestre, Gwenaelle; Pele, Danielle

    1997-03-01

    Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a big challenge for future interactive audiovisual services based on 3D content manipulation such as virtual vests, 3D teleconferencing and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referring to a given view is built by using a fusion criterion taking into account depth coherency, visibility constraints and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve unreliable depth values, a two-step segmentation algorithm using both the depth map and the gray-level image is applied to extract the object masks. First an edge detection segments the luminance image into regions and a multimodal thresholding method selects depth classes from the depth map. Then the regions are merged and labelled with the different depth class numbers by using a coherence test on depth values according to the rate of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML 1.0 tools.

  12. Brachial artery vasomotion and transducer pressure effect on measurements by active contour segmentation on ultrasound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cary, Theodore W.; Sultan, Laith R.; Sehgal, Chandra M., E-mail: sehgalc@uphs.upenn.edu

    Purpose: To use feed-forward active contours (snakes) to track and measure brachial artery vasomotion on ultrasound images recorded in both transverse and longitudinal views; and to compare the algorithm's performance in each view. Methods: Longitudinal and transverse view ultrasound image sequences of 45 brachial arteries were segmented by feed-forward active contour (FFAC). The segmented regions were used to measure vasomotion artery diameter, cross-sectional area, and distention both as peak-to-peak diameter and as area. ECG waveforms were also simultaneously extracted frame-by-frame by thresholding a running finite-difference image between consecutive images. The arterial and ECG waveforms were compared as they traced each phase of the cardiac cycle. Results: FFAC successfully segmented arteries in longitudinal and transverse views in all 45 cases. The automated analysis took significantly less time than manual tracing, but produced superior, well-behaved arterial waveforms. Automated arterial measurements also had lower interobserver variability as measured by correlation, difference in mean values, and coefficient of variation. Although FFAC successfully segmented both the longitudinal and transverse images, transverse measurements were less variable. The cross-sectional area computed from the longitudinal images was 27% lower than the area measured from transverse images, possibly due to the compression of the artery along the image depth by transducer pressure. Conclusions: FFAC is a robust and sensitive vasomotion segmentation algorithm in both transverse and longitudinal views. Transverse imaging may offer advantages over longitudinal imaging: transverse measurements are more consistent, possibly because the method is less sensitive to variations in transducer pressure during imaging.

  13. Brachial artery vasomotion and transducer pressure effect on measurements by active contour segmentation on ultrasound.

    PubMed

    Cary, Theodore W; Reamer, Courtney B; Sultan, Laith R; Mohler, Emile R; Sehgal, Chandra M

    2014-02-01

    To use feed-forward active contours (snakes) to track and measure brachial artery vasomotion on ultrasound images recorded in both transverse and longitudinal views; and to compare the algorithm's performance in each view. Longitudinal and transverse view ultrasound image sequences of 45 brachial arteries were segmented by feed-forward active contour (FFAC). The segmented regions were used to measure vasomotion artery diameter, cross-sectional area, and distention both as peak-to-peak diameter and as area. ECG waveforms were also simultaneously extracted frame-by-frame by thresholding a running finite-difference image between consecutive images. The arterial and ECG waveforms were compared as they traced each phase of the cardiac cycle. FFAC successfully segmented arteries in longitudinal and transverse views in all 45 cases. The automated analysis took significantly less time than manual tracing, but produced superior, well-behaved arterial waveforms. Automated arterial measurements also had lower interobserver variability as measured by correlation, difference in mean values, and coefficient of variation. Although FFAC successfully segmented both the longitudinal and transverse images, transverse measurements were less variable. The cross-sectional area computed from the longitudinal images was 27% lower than the area measured from transverse images, possibly due to the compression of the artery along the image depth by transducer pressure. FFAC is a robust and sensitive vasomotion segmentation algorithm in both transverse and longitudinal views. Transverse imaging may offer advantages over longitudinal imaging: transverse measurements are more consistent, possibly because the method is less sensitive to variations in transducer pressure during imaging.

  14. Brachial artery vasomotion and transducer pressure effect on measurements by active contour segmentation on ultrasound

    PubMed Central

    Cary, Theodore W.; Reamer, Courtney B.; Sultan, Laith R.; Mohler, Emile R.; Sehgal, Chandra M.

    2014-01-01

    Purpose: To use feed-forward active contours (snakes) to track and measure brachial artery vasomotion on ultrasound images recorded in both transverse and longitudinal views; and to compare the algorithm's performance in each view. Methods: Longitudinal and transverse view ultrasound image sequences of 45 brachial arteries were segmented by feed-forward active contour (FFAC). The segmented regions were used to measure vasomotion artery diameter, cross-sectional area, and distention both as peak-to-peak diameter and as area. ECG waveforms were also simultaneously extracted frame-by-frame by thresholding a running finite-difference image between consecutive images. The arterial and ECG waveforms were compared as they traced each phase of the cardiac cycle. Results: FFAC successfully segmented arteries in longitudinal and transverse views in all 45 cases. The automated analysis took significantly less time than manual tracing, but produced superior, well-behaved arterial waveforms. Automated arterial measurements also had lower interobserver variability as measured by correlation, difference in mean values, and coefficient of variation. Although FFAC successfully segmented both the longitudinal and transverse images, transverse measurements were less variable. The cross-sectional area computed from the longitudinal images was 27% lower than the area measured from transverse images, possibly due to the compression of the artery along the image depth by transducer pressure. Conclusions: FFAC is a robust and sensitive vasomotion segmentation algorithm in both transverse and longitudinal views. Transverse imaging may offer advantages over longitudinal imaging: transverse measurements are more consistent, possibly because the method is less sensitive to variations in transducer pressure during imaging. PMID:24506648

  15. Survey of contemporary trends in color image segmentation

    NASA Astrophysics Data System (ADS)

    Vantaram, Sreenath Rao; Saber, Eli

    2012-10-01

    In recent years, the acquisition of image and video information for processing, analysis, understanding, and exploitation of the underlying content in various applications, ranging from remote sensing to biomedical imaging, has grown at an unprecedented rate. Analysis by human observers is quite laborious, tiresome, and time consuming, if not infeasible, given the large and continuously rising volume of data. Hence the need for systems capable of automatically and effectively analyzing the aforementioned imagery for a variety of uses that span the spectrum from homeland security to elderly care. In order to achieve the above, tools such as image segmentation provide the appropriate foundation for expediting and improving the effectiveness of subsequent high-level tasks by providing a condensed and pertinent representation of image information. We provide a comprehensive survey of color image segmentation strategies adopted over the last decade, though notable contributions in the gray scale domain will also be discussed. Our taxonomy of segmentation techniques is sampled from a wide spectrum of spatially blind (or feature-based) approaches such as clustering and histogram thresholding as well as spatially guided (or spatial domain-based) methods such as region growing/splitting/merging, energy-driven parametric/geometric active contours, supervised/unsupervised graph cuts, and watersheds, to name a few. In addition, qualitative and quantitative results of prominent algorithms on several images from the Berkeley segmentation dataset are shown in order to furnish a fair indication of the current quality of the state of the art. Finally, we provide a brief discussion on our current perspective of the field as well as its associated future trends.

  16. Classification of Normal and Apoptotic Cells from Fluorescence Microscopy Images Using Generalized Polynomial Chaos and Level Set Function.

    PubMed

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2016-06-01

    Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images in combination with the trained SVM classifier is more efficient in terms of differentiation accuracy as compared with the original deterministic level set method.

  17. Fast Segmentation of Stained Nuclei in Terabyte-Scale, Time Resolved 3D Microscopy Image Stacks

    PubMed Central

    Stegmaier, Johannes; Otte, Jens C.; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G. Ulrich; Strähle, Uwe; Mikut, Ralf

    2014-01-01

    Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu’s method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm’s superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results. PMID:24587204

  18. Liver CT image processing: a short introduction of the technical elements.

    PubMed

    Masutani, Y; Uozumi, K; Akahane, Masaaki; Ohtomo, Kuni

    2006-05-01

    In this paper, we describe the technical aspects of image analysis for liver diagnosis and treatment, including the state-of-the-art of liver image analysis and its applications. After discussion on modalities for liver image analysis, various technical elements for liver image analysis such as registration, segmentation, modeling, and computer-assisted detection are covered with examples performed with clinical data sets. Perspective in the imaging technologies is also reviewed and discussed.

  19. Automated segmentation and isolation of touching cell nuclei in cytopathology smear images of pleural effusion using distance transform watershed method

    NASA Astrophysics Data System (ADS)

    Win, Khin Yadanar; Choomchuay, Somsak; Hamamoto, Kazuhiko

    2017-06-01

    The automated segmentation of cell nuclei is an essential stage in the quantitative image analysis of cell nuclei extracted from smear cytology images of pleural fluid. Cell nuclei can indicate cancer, as their characteristics are associated with cell proliferation and malignancy in terms of size, shape, and staining color. Nevertheless, automatic nuclei segmentation has remained challenging due to artifacts caused by slide preparation and nuclei heterogeneity, such as poor contrast, inconsistent staining color, cell variation, and cell overlap. In this paper, we propose a watershed-based method that is capable of segmenting the nuclei of a variety of cells from cytology pleural fluid smear images. First, the original image is preprocessed by converting it to grayscale and enhancing it using histogram equalization. Next, the cell nuclei are segmented as a binary image using Otsu thresholding. Undesirable artifacts are then eliminated using morphological operations. Finally, a distance-transform-based watershed method is applied to isolate touching and overlapping cell nuclei. The proposed method was tested on 25 Papanicolaou (Pap) stained pleural fluid images and achieved an accuracy of 92%. The method is relatively simple, and the results are very promising.
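
    The pipeline above maps naturally onto standard image-processing primitives. The sketch below is an illustrative reimplementation under assumed parameters (the file name, disk radius, minimum object size and peak distance are placeholders, not values from the paper), using scikit-image and SciPy rather than the authors' own code.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage import io, color, exposure, filters, morphology, segmentation, feature

        img = io.imread("pleural_fluid_smear.png")           # hypothetical Pap-stained smear image
        gray = exposure.equalize_hist(color.rgb2gray(img))   # grayscale conversion + histogram equalization

        # Otsu threshold; nuclei are assumed darker than the background
        binary = gray < filters.threshold_otsu(gray)
        binary = morphology.remove_small_objects(binary, min_size=64)   # morphological clean-up
        binary = morphology.binary_opening(binary, morphology.disk(2))

        # Distance transform, one marker per local maximum, then watershed
        distance = ndi.distance_transform_edt(binary)
        components, _ = ndi.label(binary)
        peaks = feature.peak_local_max(distance, min_distance=7, labels=components)
        markers = np.zeros_like(distance, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        labels = segmentation.watershed(-distance, markers, mask=binary)  # splits touching nuclei
        print("nuclei found:", labels.max())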

  20. A scale-based connected coherence tree algorithm for image segmentation.

    PubMed

    Ding, Jundi; Ma, Runing; Chen, Songcan

    2008-02-01

    This paper presents a connected coherence tree algorithm (CCTA) for image segmentation with no prior knowledge. It aims to find regions of semantic coherence based on the proposed epsilon-neighbor coherence segmentation criterion. More specifically, with an adaptive spatial scale and an appropriate intensity-difference scale, CCTA often produces several sets of coherent neighboring pixels which maximize the probability of representing a single image content (including various kinds of complex backgrounds). In practice, each set of coherent neighboring pixels corresponds to a coherence class (CC). The fact that each CC just contains a single equivalence class (EC) ensures the separability of an arbitrary image theoretically. In addition, the resultant CCs are represented by tree-based data structures, named connected coherence trees (CCTs). In this sense, CCTA is a graph-based image analysis algorithm, which offers three advantages: 1) its fundamental idea, the epsilon-neighbor coherence segmentation criterion, is easy to interpret and comprehend; 2) it is efficient due to a linear computational complexity in the number of image pixels; 3) both subjective comparisons and objective evaluation have shown that it is effective for the tasks of semantic object segmentation and figure-ground separation in a wide variety of images. Those images either contain tiny, long and thin objects or are severely degraded by noise, uneven lighting, occlusion, poor illumination, and shadow.

  1. Segmentation of breast ultrasound images based on active contours using neutrosophic theory.

    PubMed

    Lotfollahi, Mahsa; Gity, Masoumeh; Ye, Jing Yong; Mahlooji Far, A

    2018-04-01

    Ultrasound imaging is an effective approach for diagnosing breast cancer, but it is highly operator-dependent. Recent advances in computer-aided diagnosis have suggested that it can assist physicians in diagnosis. Definition of the region of interest before computer analysis is still needed. Since manual outlining of the tumor contour is tedious and time-consuming for a physician, developing an automatic segmentation method is important for clinical application. The present paper presents a novel method to segment breast ultrasound images. It utilizes a combination of a region-based active contour and neutrosophic theory to overcome the natural properties of ultrasound images, including speckle noise and tissue-related textures. First, due to the inherent speckle noise and low contrast of these images, we have utilized a non-local means filter and a fuzzy logic method for denoising and image enhancement, respectively. This paper presents an improved weighted region-scalable active contour to segment breast ultrasound images using a new feature derived from neutrosophic theory. The method has been applied to 36 breast ultrasound images, yielding a true-positive rate, false-positive rate, and similarity of 95%, 6%, and 90%, respectively. The proposed method shows clear advantages over other conventional active contour segmentation methods, i.e., region-scalable fitting energy and weighted region-scalable fitting energy.
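
    As an illustration of the general recipe (speckle suppression followed by region-driven contour evolution), the sketch below uses scikit-image's non-local means filter and morphological Chan-Vese as a generic stand-in; the paper's fuzzy-logic enhancement, neutrosophic feature and weighted region-scalable energy are not reproduced, and the file name and parameters are assumptions.

        from skimage import io, img_as_float
        from skimage.restoration import denoise_nl_means, estimate_sigma
        from skimage.segmentation import morphological_chan_vese, checkerboard_level_set

        us = img_as_float(io.imread("breast_ultrasound.png", as_gray=True))  # hypothetical B-mode image

        # Non-local means denoising to suppress speckle
        sigma = estimate_sigma(us)
        den = denoise_nl_means(us, h=1.15 * sigma, fast_mode=True,
                               patch_size=5, patch_distance=6)

        # Region-based active contour evolved from a coarse checkerboard initialization
        init = checkerboard_level_set(den.shape, square_size=25)
        lesion_mask = morphological_chan_vese(den, 200, init_level_set=init, smoothing=3)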

  2. Computer-assisted segmentation of white matter lesions in 3D MR images using support vector machine.

    PubMed

    Lao, Zhiqiang; Shen, Dinggang; Liu, Dengfeng; Jawad, Abbas F; Melhem, Elias R; Launer, Lenore J; Bryan, R Nick; Davatzikos, Christos

    2008-03-01

    Brain lesions, especially white matter lesions (WMLs), are associated with cardiac and vascular disease, but also with normal aging. Quantitative analysis of WML in large clinical trials is becoming more and more important. In this article, we present a computer-assisted WML segmentation method, based on local features extracted from multiparametric magnetic resonance imaging (MRI) sequences (i.e., T1-weighted, T2-weighted, proton density-weighted, and fluid attenuation inversion recovery MRI scans). A support vector machine classifier is first trained on expert-defined WMLs, and is then used to classify new scans. Postprocessing analysis further reduces false positives by using anatomic knowledge and measures of distance from the training set. Cross-validation on a population of 35 patients from three different imaging sites with WMLs of varying sizes, shapes, and locations tests the robustness and accuracy of the proposed segmentation method, compared with the manual segmentation results from two experienced neuroradiologists.
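
    The classification step can be pictured as a standard supervised voxel-wise SVM. The sketch below is schematic only: the feature arrays and file names are placeholders, and the paper's specific features, kernel and false-positive post-processing are not reproduced.

        import numpy as np
        from sklearn.svm import SVC

        # Placeholder arrays: rows are voxels, columns are local multiparametric MRI features
        # (e.g., T1, T2, PD and FLAIR intensities in a small neighborhood).
        X_train = np.load("wml_features_train.npy")
        y_train = np.load("wml_labels_train.npy")      # 1 = expert-marked WML voxel, 0 = normal tissue

        clf = SVC(kernel="rbf", C=1.0, gamma="scale", probability=True)
        clf.fit(X_train, y_train)

        # Classify every voxel of a new scan; anatomical post-processing would prune false positives
        X_new = np.load("wml_features_new_scan.npy")
        wml_probability = clf.predict_proba(X_new)[:, 1]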

  3. Use of Mechanical Turk as a MapReduce Framework for Macular OCT Segmentation.

    PubMed

    Lee, Aaron Y; Lee, Cecilia S; Keane, Pearse A; Tufail, Adnan

    2016-01-01

    Purpose. To evaluate the feasibility of using Mechanical Turk as a massively parallel platform to perform manual segmentations of macular spectral domain optical coherence tomography (SD-OCT) images using a MapReduce framework. Methods. A macular SD-OCT volume of 61 slice images was map-distributed to Amazon Mechanical Turk. Each Human Intelligence Task was set to $0.01 and required the user to draw five lines to outline the sublayers of the retinal OCT image after being shown example images. Each image was submitted twice for segmentation, and interrater reliability was calculated. The interface was created using custom HTML5 and JavaScript code, and data analysis was performed using R. An automated pipeline was developed to handle the map and reduce steps of the framework. Results. More than 93,500 data points were collected using this framework for the 61 images submitted. Pearson's correlation of interrater reliability was 0.995 (p < 0.0001) and coefficient of determination was 0.991. The cost of segmenting the macular volume was $1.21. A total of 22 individual Mechanical Turk users provided segmentations, each completing an average of 5.5 HITs. Each HIT was completed in an average of 4.43 minutes. Conclusions. Amazon Mechanical Turk provides a cost-effective, scalable, high-availability infrastructure for manual segmentation of OCT images.
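
    The interrater-reliability figures quoted above reduce to a simple computation once the two segmentation passes over each image are paired up. The study performed its analysis in R; the snippet below is an equivalent sketch in Python with placeholder arrays standing in for the paired Mechanical Turk boundary annotations.

        import numpy as np
        from scipy.stats import pearsonr

        pass1 = np.load("turk_pass1_boundaries.npy")   # hypothetical per-point boundary positions
        pass2 = np.load("turk_pass2_boundaries.npy")   # same points, second Mechanical Turk pass

        r, p_value = pearsonr(pass1, pass2)
        print(f"Pearson r = {r:.3f} (p = {p_value:.2g}), R^2 = {r ** 2:.3f}")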

  5. Gated-SPECT myocardial perfusion imaging as a complementary technique to magnetic resonance imaging in chronic myocardial infarction patients.

    PubMed

    Cuberas-Borrós, Gemma; Pineda, Victor; Aguadé-Bruix, Santiago; Romero-Farina, Guillermo; Pizzi, M Nazarena; de León, Gustavo; Castell-Conesa, Joan; García-Dorado, David; Candell-Riera, Jaume

    2013-09-01

    The aim of this study was to compare magnetic resonance and gated-SPECT myocardial perfusion imaging in patients with chronic myocardial infarction. Magnetic resonance imaging and gated-SPECT were performed in 104 patients (mean age, 61 [12] years; 87.5% male) with a previous infarction. Left ventricular volumes and ejection fraction and classic late gadolinium enhancement viability criteria (<75% transmurality) were correlated with those of gated-SPECT (uptake >50%) in the 17 segments of the left ventricle. Motion, thickening, and ischemia on SPECT were analyzed in segments showing nonviable tissue or equivocal enhancement features (50%-75% transmurality). A good correlation was observed between the 2 techniques for volumes, ejection fraction (P<.05), and estimated necrotic mass (P<.01). In total, 82 of 264 segments (31%) with >75% enhancement had >50% uptake on SPECT. Of the 106 equivocal segments on magnetic resonance imaging, 68 (64%) had >50% uptake, 41 (38.7%) had normal motion, 46 (43.4%) had normal thickening, and 17 (16%) had ischemic criteria on SPECT. A third of nonviable segments on magnetic resonance imaging showed >50% uptake on SPECT. Gated-SPECT can be useful in the analysis of motion, thickening, and ischemic criteria in segments with questionable viability on magnetic resonance imaging. Copyright © 2013 Sociedad Española de Cardiología. Published by Elsevier España. All rights reserved.

  6. Repeatability of Non-Contrast-Enhanced Lower-Extremity Angiography Using the Flow-Spoiled Fresh Blood Imaging.

    PubMed

    Zhang, Yuyang; Xing, Zhen; She, Dejun; Huang, Nan; Cao, Dairong

    The aim of this study was to prospectively evaluate the repeatability of non-contrast-enhanced lower-extremity magnetic resonance angiography using flow-spoiled fresh blood imaging (FS-FBI). Forty-three healthy volunteers and 15 patients with lower-extremity arterial stenosis were recruited in this study and were examined by FS-FBI. Digital subtraction angiography was performed within a week after the FS-FBI in the patient group. Repeatability was assessed by the following parameters: grading of image quality, diameter and area of major arteries, and grading of stenosis of lower-extremity arteries. Two experienced radiologists blinded to patient data independently evaluated the FS-FBI and digital subtraction angiography images. Intraclass correlation coefficients (ICCs), sensitivity, and specificity were used for statistical analysis. The grading of image quality of most data was satisfactory. The ICCs for the first and second measures were 0.792 and 0.884 in the femoral segment and 0.803 and 0.796 in the tibiofibular segment for the healthy volunteer group, and 0.873 and 1.000 in the femoral segment and 0.737 and 0.737 in the tibiofibular segment for the patient group. Intraobserver and interobserver agreements on the diameter and area of arteries were excellent, with ICCs mostly greater than 0.75 in the volunteer group. For stenosis grading analysis, intraobserver ICCs ranged from 0.784 to 0.862 and from 0.778 to 0.854, respectively. Flow-spoiled fresh blood imaging yielded a mean sensitivity and specificity for detecting arterial stenosis or occlusion of at least 90% and 80% for the femoral segment and 86.7% and 93.3% for the tibiofibular segment. Lower-extremity angiography with FS-FBI is a reliable and reproducible screening tool for lower-extremity atherosclerotic disease, especially for patients with impaired renal function.

  7. Segmentation Of Polarimetric SAR Data

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Chellappa, Rama

    1994-01-01

    Report presents one in continuing series of studies of segmentation of polarimetric synthetic-aperture-radar, SAR, image data into regions. Studies directed toward refinement of method of automated analysis of SAR data.

  8. Automatic segmentation of the bone and extraction of the bone cartilage interface from magnetic resonance images of the knee

    NASA Astrophysics Data System (ADS)

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien

    2007-03-01

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.

  9. Representation learning: a unified deep learning framework for automatic prostate MR segmentation.

    PubMed

    Liao, Shu; Gao, Yaozong; Oto, Aytekin; Shen, Dinggang

    2013-01-01

    Image representation plays an important role in medical image analysis. The success of different medical image analysis algorithms depends heavily on how we represent the input data, namely the features used to characterize the input image. In the literature, feature engineering remains an active research topic, and many novel hand-crafted features have been designed, such as Haar wavelets, histograms of oriented gradients, and local binary patterns. However, such features are not designed with the guidance of the underlying dataset at hand. To this end, we argue that the most effective features should be designed in a learning-based manner, namely representation learning, which can be adapted to different patient datasets at hand. In this paper, we introduce a deep learning framework to achieve this goal. Specifically, a stacked independent subspace analysis (ISA) network is adopted to learn the most effective features in a hierarchical and unsupervised manner. The learnt features are adapted to the dataset at hand and encode high-level semantic anatomical information. The proposed method is evaluated on the application of automatic prostate MR segmentation. Experimental results show that significant segmentation accuracy improvement can be achieved by the proposed deep learning method compared to other state-of-the-art segmentation approaches.

  10. Automated analysis and classification of melanocytic tumor on skin whole slide images.

    PubMed

    Xu, Hongming; Lu, Cheng; Berendt, Richard; Jha, Naresh; Mandal, Mrinal

    2018-06-01

    This paper presents a computer-aided technique for automated analysis and classification of melanocytic tumor on skin whole slide biopsy images. The proposed technique consists of four main modules. First, skin epidermis and dermis regions are segmented by a multi-resolution framework. Next, epidermis analysis is performed, where a set of epidermis features reflecting nuclear morphologies and spatial distributions is computed. In parallel with epidermis analysis, dermis analysis is also performed, where dermal cell nuclei are segmented and a set of textural and cytological features are computed. Finally, the skin melanocytic image is classified into different categories such as melanoma, nevus or normal tissue by using a multi-class support vector machine (mSVM) with extracted epidermis and dermis features. Experimental results on 66 skin whole slide images indicate that the proposed technique achieves more than 95% classification accuracy, which suggests that the technique has the potential to be used for assisting pathologists on skin biopsy image analysis and classification. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Development and Evaluation of a Semi-automated Segmentation Tool and a Modified Ellipsoid Formula for Volumetric Analysis of the Kidney in Non-contrast T2-Weighted MR Images.

    PubMed

    Seuss, Hannes; Janka, Rolf; Prümmer, Marcus; Cavallaro, Alexander; Hammon, Rebecca; Theis, Ragnar; Sandmair, Martin; Amann, Kerstin; Bäuerle, Tobias; Uder, Michael; Hammon, Matthias

    2017-04-01

    Volumetric analysis of the kidney parenchyma provides additional information for the detection and monitoring of various renal diseases. Therefore the purposes of the study were to develop and evaluate a semi-automated segmentation tool and a modified ellipsoid formula for volumetric analysis of the kidney in non-contrast T2-weighted magnetic resonance (MR)-images. Three readers performed semi-automated segmentation of the total kidney volume (TKV) in axial, non-contrast-enhanced T2-weighted MR-images of 24 healthy volunteers (48 kidneys) twice. A semi-automated threshold-based segmentation tool was developed to segment the kidney parenchyma. Furthermore, the three readers measured renal dimensions (length, width, depth) and applied different formulas to calculate the TKV. Manual segmentation served as a reference volume. Volumes of the different methods were compared and time required was recorded. There was no significant difference between the semi-automatically and manually segmented TKV (p = 0.31). The difference in mean volumes was 0.3 ml (95% confidence interval (CI), -10.1 to 10.7 ml). Semi-automated segmentation was significantly faster than manual segmentation, with a mean difference = 188 s (220 vs. 408 s); p < 0.05. Volumes did not differ significantly comparing the results of different readers. Calculation of TKV with a modified ellipsoid formula (ellipsoid volume × 0.85) did not differ significantly from the reference volume; however, the mean error was three times higher (difference of mean volumes -0.1 ml; CI -31.1 to 30.9 ml; p = 0.95). Applying the modified ellipsoid formula was the fastest way to get an estimation of the renal volume (41 s). Semi-automated segmentation and volumetric analysis of the kidney in native T2-weighted MR data delivers accurate and reproducible results and was significantly faster than manual segmentation. Applying a modified ellipsoid formula quickly provides an accurate kidney volume.
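
    The modified ellipsoid estimate can be written down directly. The sketch below assumes the standard ellipsoid approximation V = (π/6) · length · width · depth as the base formula (the abstract only states that the ellipsoid volume is multiplied by 0.85); the example dimensions are illustrative.

        import math

        def modified_ellipsoid_tkv(length_mm, width_mm, depth_mm, correction=0.85):
            """Estimated kidney volume in ml from three orthogonal diameters in mm."""
            ellipsoid_mm3 = (math.pi / 6.0) * length_mm * width_mm * depth_mm
            return correction * ellipsoid_mm3 / 1000.0   # mm^3 -> ml

        print(modified_ellipsoid_tkv(110, 55, 45))        # illustrative dimensions, roughly 121 ml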

  12. Bacterial cell identification in differential interference contrast microscopy images.

    PubMed

    Obara, Boguslaw; Roberts, Mark A J; Armitage, Judith P; Grau, Vicente

    2013-04-23

    Microscopy image segmentation lays the foundation for shape analysis, motion tracking, and classification of biological objects. Despite its importance, automated segmentation remains challenging for several widely used non-fluorescence, interference-based microscopy imaging modalities, for example differential interference contrast microscopy, which plays an important role in modern bacterial cell biology. Therefore, further advances in the field require the development of tools, technologies and work-flows to extract and exploit information from interference-based imaging data so as to achieve new fundamental biological insights and understanding. We have developed and evaluated a high-throughput image analysis and processing approach to detect and characterize bacterial cells and chemotaxis proteins. Its performance was evaluated using differential interference contrast and fluorescence microscopy images of Rhodobacter sphaeroides. Results demonstrate that the proposed approach provides a fast and robust method for detection and analysis of spatial relationships between bacterial cells and their chemotaxis proteins.

  13. Probabilistic atlas and geometric variability estimation to drive tissue segmentation.

    PubMed

    Xu, Hao; Thirion, Bertrand; Allassonnière, Stéphanie

    2014-09-10

    Computerized anatomical atlases play an important role in medical image analysis. While an atlas usually refers to a standard or mean image also called template, which presumably represents well a given population, it is not enough to characterize the observed population in detail. A template image should be learned jointly with the geometric variability of the shapes represented in the observations. These two quantities will in the sequel form the atlas of the corresponding population. The geometric variability is modeled as deformations of the template image so that it fits the observations. In this paper, we provide a detailed analysis of a new generative statistical model based on dense deformable templates that represents several tissue types observed in medical images. Our atlas contains both an estimation of probability maps of each tissue (called class) and the deformation metric. We use a stochastic algorithm for the estimation of the probabilistic atlas given a dataset. This atlas is then used for atlas-based segmentation method to segment the new images. Experiments are shown on brain T1 MRI datasets. Copyright © 2014 John Wiley & Sons, Ltd.

  14. An Integrative Object-Based Image Analysis Workflow for Uav Images

    NASA Astrophysics Data System (ADS)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm after the geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing the post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of our proposed method.
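
    The over-segmentation that seeds the Binary Partition Tree can be reproduced with an off-the-shelf SLIC implementation; the sketch below shows only that initial step, with an assumed file name and illustrative parameters (the BPT construction and hierarchy filtering are not shown).

        from skimage import io
        from skimage.segmentation import slic, mark_boundaries

        mosaic = io.imread("uav_panorama.png")            # hypothetical mosaicked panorama
        superpixels = slic(mosaic, n_segments=2000, compactness=10, start_label=1)
        preview = mark_boundaries(mosaic, superpixels)    # visual check of the initial partition
        io.imsave("uav_superpixels.png", (preview * 255).astype("uint8"))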

  15. Massively Multithreaded Maxflow for Image Segmentation on the Cray XMT-2

    PubMed Central

    Bokhari, Shahid H.; Çatalyürek, Ümit V.; Gurcan, Metin N.

    2014-01-01

    Image segmentation is a very important step in the computerized analysis of digital images. The maxflow/mincut approach has been successfully used to obtain minimum-energy segmentations of images in many fields. Classical algorithms for maxflow in networks do not directly lend themselves to efficient parallel implementations on contemporary parallel processors. We present the results of an implementation of the Goldberg-Tarjan preflow-push algorithm on the Cray XMT-2 massively multithreaded supercomputer. This machine has hardware support for 128 threads in each physical processor, a uniformly accessible shared memory of up to 4 TB and hardware synchronization for each 64-bit word. It is thus well-suited to the parallelization of graph theoretic algorithms, such as preflow-push. We describe the implementation of the preflow-push code on the XMT-2 and present the results of timing experiments on a series of synthetically generated as well as real images. Our results indicate very good performance on large images and pave the way for practical applications of this machine architecture for image analysis in a production setting. The largest images we have run are 32,000 × 32,000 pixels in size, which is well beyond the largest previously reported in the literature. PMID:25598745
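
    For readers unfamiliar with the formulation, the toy sketch below sets up a tiny two-terminal graph for one image and solves it with the (single-threaded) preflow-push implementation in NetworkX; the massively multithreaded XMT-2 implementation that is the subject of the paper is not reproduced, and the data term and smoothness weight are illustrative.

        import numpy as np
        import networkx as nx
        from networkx.algorithms.flow import preflow_push

        img = np.array([[0.9, 0.8, 0.2],
                        [0.7, 0.6, 0.1],
                        [0.2, 0.1, 0.0]])                 # toy image, bright = foreground

        G = nx.DiGraph()
        lam = 0.3                                          # smoothness weight (illustrative)
        for (r, c), v in np.ndenumerate(img):
            G.add_edge("s", (r, c), capacity=float(v))         # data term toward foreground
            G.add_edge((r, c), "t", capacity=float(1.0 - v))   # data term toward background
            for dr, dc in ((0, 1), (1, 0)):                    # 4-connected smoothness edges
                rr, cc = r + dr, c + dc
                if rr < img.shape[0] and cc < img.shape[1]:
                    G.add_edge((r, c), (rr, cc), capacity=lam)
                    G.add_edge((rr, cc), (r, c), capacity=lam)

        cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t", flow_func=preflow_push)
        seg = np.zeros(img.shape, dtype=bool)
        for node in source_side:
            if node != "s":
                seg[node] = True                           # pixels on the source side are foreground
        print(seg)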

  16. Rough Sets and Stomped Normal Distribution for Simultaneous Segmentation and Bias Field Correction in Brain MR Images.

    PubMed

    Banerjee, Abhirup; Maji, Pradipta

    2015-12-01

    The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis technique, particularly due to the presence of intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It integrates judiciously the concept of rough sets and the merit of a novel probability distribution, called stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of brain MR image is modeled as a mixture of finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.

  17. A novel content-based active contour model for brain tumor segmentation.

    PubMed

    Sachdeva, Jainy; Kumar, Vinod; Gupta, Indra; Khandelwal, Niranjan; Ahuja, Chirag Kamal

    2012-06-01

    Brain tumor segmentation is a crucial step in surgical and treatment planning. Intensity-based active contour models such as gradient vector flow (GVF), magnetostatic active contour (MAC) and fluid vector flow (FVF) have been proposed to segment homogeneous objects/tumors in medical images. In this study, extensive experiments are done to analyze the performance of intensity-based techniques for homogeneous tumors on brain magnetic resonance (MR) images. The analysis shows that the state-of-the-art methods fail to segment homogeneous tumors against a similar background or when these tumors show partial diversity toward the background. They also suffer from a preconvergence problem in the case of false edges/saddle points. Furthermore, the presence of weak edges and diffused edges (due to edema around the tumor) leads to oversegmentation by intensity-based techniques. Therefore, the proposed content-based active contour (CBAC) method uses both intensity and texture information present within the active contour to overcome the above-stated problems while capturing a large range in an image. It also proposes a novel use of the Gray-Level Co-occurrence Matrix to define a texture space for tumor segmentation. The effectiveness of this method is tested on two different real data sets (55 patients - more than 600 images) containing five different types of homogeneous, heterogeneous, and diffused tumors, and on synthetic images (non-MR benchmark images). Remarkable results are obtained in segmenting homogeneous tumors of uniform intensity, heterogeneous tumors of complex content, and diffused tumors on MR images (T1-weighted, postcontrast T1-weighted and T2-weighted) and on synthetic images (non-MR benchmark images of varying intensity, texture, noise content and false edges). Further, tumor volume is efficiently extracted from 2-dimensional slices, an approach termed 2.5-dimensional segmentation. Copyright © 2012 Elsevier Inc. All rights reserved.
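
    The texture component rests on Gray-Level Co-occurrence Matrix (GLCM) statistics. The sketch below shows how such descriptors can be computed for a patch with scikit-image (spelled greycomatrix/greycoprops in older releases); the window size, offsets and chosen properties are illustrative, and the CBAC contour evolution itself is not shown.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(patch_uint8):
            """Texture descriptors for one patch inside the evolving contour."""
            glcm = graycomatrix(patch_uint8, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            return {prop: graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "homogeneity", "energy", "correlation")}

        patch = (np.random.rand(32, 32) * 255).astype(np.uint8)   # placeholder MR patch
        print(glcm_features(patch))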

  18. Coupled dictionary learning for joint MR image restoration and segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Xuesong; Fan, Yong

    2018-03-01

    To achieve better segmentation of MR images, image restoration is typically used as a preprocessing step, especially for low-quality MR images. Recent studies have demonstrated that dictionary learning methods could achieve promising performance for both image restoration and image segmentation. These methods typically learn paired dictionaries of image patches from different sources and use a common sparse representation to characterize paired image patches, such as low-quality image patches and their corresponding high-quality counterparts for image restoration, and image patches and their corresponding segmentation labels for image segmentation. Since learning these dictionaries jointly in a unified framework may improve the image restoration and segmentation simultaneously, we propose a coupled dictionary learning method to concurrently learn dictionaries for joint image restoration and image segmentation based on sparse representations in a multi-atlas image segmentation framework. In particular, three dictionaries, including a dictionary of low-quality image patches, a dictionary of high-quality image patches, and a dictionary of segmentation label patches, are learned in a unified framework so that the learned dictionaries for image restoration and segmentation can benefit each other. Our method has been evaluated for segmenting the hippocampus in MR T1 images collected with scanners of different magnetic field strengths. The experimental results have demonstrated that our method achieved better image restoration and segmentation performance than state-of-the-art dictionary-learning- and sparse-representation-based image restoration and segmentation methods.

  19. Robust Nucleus/Cell Detection and Segmentation in Digital Pathology and Microscopy Images: A Comprehensive Review.

    PubMed

    Xing, Fuyong; Yang, Lin

    2016-01-01

    Digital pathology and microscopy image analysis is widely used for comprehensive studies of cell morphology or tissue structure. Manual assessment is labor intensive and prone to interobserver variations. Computer-aided methods, which can significantly improve the objectivity and reproducibility, have attracted a great deal of interest in recent literature. Among the pipeline of building a computer-aided diagnosis system, nucleus or cell detection and segmentation play a very important role to describe the molecular morphological information. In the past few decades, many efforts have been devoted to automated nucleus/cell detection and segmentation. In this review, we provide a comprehensive summary of the recent state-of-the-art nucleus/cell segmentation approaches on different types of microscopy images including bright-field, phase-contrast, differential interference contrast, fluorescence, and electron microscopies. In addition, we discuss the challenges for the current methods and the potential future work of nucleus/cell detection and segmentation.

  20. Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Kalinovsky, A.; Liauchuk, V.; Tarasau, A.

    2017-05-01

    In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of Deep Learning. For training and testing of the algorithms, a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. The algorithms, which are based on deep convolutional networks, were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding-window technique, and straightforward detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.

  1. Automatic segmentation and supervised learning-based selection of nuclei in cancer tissue images.

    PubMed

    Nandy, Kaustav; Gudla, Prabhakar R; Amundsen, Ryan; Meaburn, Karen J; Misteli, Tom; Lockett, Stephen J

    2012-09-01

    Analysis of preferential localization of certain genes within the cell nuclei is emerging as a new technique for the diagnosis of breast cancer. Quantitation requires accurate segmentation of 100-200 cell nuclei in each tissue section to draw a statistically significant result. Thus, for large-scale analysis, manual processing is too time consuming and subjective. Fortuitously, acquired images generally contain many more nuclei than are needed for analysis. Therefore, we developed an integrated workflow that selects, following automatic segmentation, a subpopulation of accurately delineated nuclei for positioning of fluorescence in situ hybridization-labeled genes of interest. Segmentation was performed by a multistage watershed-based algorithm and screening by an artificial neural network-based pattern recognition engine. The performance of the workflow was quantified in terms of the fraction of automatically selected nuclei that were visually confirmed as well segmented and by the boundary accuracy of the well-segmented nuclei relative to a 2D dynamic programming-based reference segmentation method. Application of the method was demonstrated for discriminating normal and cancerous breast tissue sections based on the differential positioning of the HES5 gene. Automatic results agreed with manual analysis in 11 out of 14 cancers, all four normal cases, and all five noncancerous breast disease cases, thus showing the accuracy and robustness of the proposed approach. Published 2012 Wiley Periodicals, Inc.

  2. A top-down manner-based DCNN architecture for semantic image segmentation.

    PubMed

    Qiao, Kai; Chen, Jian; Wang, Linyuan; Zeng, Lei; Yan, Bin

    2017-01-01

    Given their powerful feature representation for recognition, deep convolutional neural networks (DCNNs) have been driving rapid advances in high-level computer vision tasks. However, their performance in semantic image segmentation is still not satisfactory. Based on an analysis of the visual mechanism, we conclude that DCNNs operating in a purely bottom-up manner are not sufficient, because the semantic image segmentation task requires not only recognition but also visual attention capability. In this study, superpixels containing visual attention information are introduced in a top-down manner, and an extensible architecture is proposed to improve the segmentation results of current DCNN-based methods. We employ the current state-of-the-art fully convolutional network (FCN) and FCN with conditional random field (DeepLab-CRF) as baselines to validate our architecture. Experimental results on the PASCAL VOC segmentation task qualitatively show that coarse edges and erroneous segmentation results are well improved. We also quantitatively obtain about 2%-3% intersection over union (IOU) accuracy improvement on the PASCAL VOC 2011 and 2012 test sets.

  3. A hybrid 3D region growing and 4D curvature analysis-based automatic abdominal blood vessel segmentation through contrast enhanced CT

    NASA Astrophysics Data System (ADS)

    Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen

    2017-03-01

    In abdominal disease diagnosis and the planning of various abdominal surgeries, segmentation of abdominal blood vessels (ABVs) is an imperative task. Automatic segmentation enables fast and accurate processing of ABVs. We propose a fully automatic approach for segmenting ABVs from contrast-enhanced CT images by a hybrid of 3D region growing and 4D curvature analysis. The proposed method comprises three stages. First, candidates for bone, kidneys, ABVs and heart are segmented by an auto-adapted threshold. Second, bone is auto-segmented and classified into spine, ribs and pelvis. Third, ABVs are automatically segmented in two sub-steps: (1) kidneys and the abdominal part of the heart are segmented, (2) ABVs are segmented by a hybrid approach that integrates 3D region growing and 4D curvature analysis. Results are compared with two conventional methods. Results show that the proposed method is very promising in segmenting and classifying bone and segmenting whole ABVs, and it may have utility in clinical use.
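
    As a rough illustration of the seeded region-growing idea (the auto-adapted thresholds, bone classification and 4D curvature analysis of the paper are not reproduced), the sketch below grows a 3D connected region around a seed voxel within an intensity tolerance; the volume file, seed coordinates and tolerance are assumptions.

        import numpy as np
        from skimage.segmentation import flood

        ct = np.load("contrast_ct_volume.npy")       # hypothetical HU volume, shape (z, y, x)
        seed = (120, 256, 260)                        # hypothetical seed voxel inside the aorta
        vessel_mask = flood(ct, seed, tolerance=80)   # connected region within +/-80 HU of the seed
        print("vessel voxels:", int(vessel_mask.sum()))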

  4. Remote sensing image segmentation based on Hadoop cloud platform

    NASA Astrophysics Data System (ADS)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

    To address the slow speed and poor real-time performance of remote sensing image segmentation, this paper studies remote sensing image segmentation based on the Hadoop platform. Based on an analysis of the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, this paper proposes an image segmentation method that combines OpenCV with the Hadoop cloud platform. First, the MapReduce image processing model of the Hadoop cloud platform is designed, the image input and output are customized, and the method for splitting the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, a segmentation experiment is performed on a remote sensing image, and the Mean Shift segmentation algorithm is also implemented in MATLAB on the same image for comparison. The experimental results show that, while maintaining good segmentation quality, remote sensing image segmentation based on the Hadoop cloud platform is much faster than single-node MATLAB segmentation, and the overall effectiveness of image segmentation is greatly improved.
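
    The per-tile work that each map task performs can be pictured as a plain OpenCV Mean Shift filtering call; the sketch below is a single-node stand-in with illustrative radii and file names, without the Hadoop input-format and MapReduce plumbing described in the paper.

        import cv2

        tile = cv2.imread("remote_sensing_tile.png")           # hypothetical image tile from one split
        shifted = cv2.pyrMeanShiftFiltering(tile, 21, 35)      # spatial radius, color radius
        gray = cv2.cvtColor(shifted, cv2.COLOR_BGR2GRAY)
        _, regions = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        cv2.imwrite("tile_segments.png", regions)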

  5. Myocardial multilayer strain does not provide additional value for detection of myocardial viability assessed by SPECT imaging over and beyond standard strain.

    PubMed

    Orloff, Elisabeth; Fournier, Pauline; Bouisset, Frédéric; Moine, Thomas; Cournot, Maxime; Elbaz, Meyer; Carrié, Didier; Galinier, Michel; Lairez, Olivier; Cognet, Thomas

    2018-05-14

    The aim of this study was to evaluate the value of multilayer strain analysis to the assessment of myocardial viability (MV) through the comparison of both speckle tracking echocardiography and single-photon emission computed tomography (SPECT) imaging. We also intended to determine which segmental longitudinal strain (LS) cutoff value would be optimal to discriminate viable myocardium. We included 47 patients (average age: 61 ± 11 years) referred to our cardiac imaging center for MV evaluation. All patients underwent transthoracic echocardiography with measures of LS, SPECT, and coronary angiography. In all, 799 segments were analyzed. We correlated myocardial tracer uptake by SPECT with sub-endocardial, sub-epicardial, and mid-segmental LS values with r = .514 P < .0001, r = .501 P < .0001, and r = .520 P < .0001, respectively. The measurements of each layer strain (sub-endocardial, sub-epicardial, and mid) had the same performance in predicting MV as defined by SPECT, with areas under the curve of 0.819 [0.778-0.861, P < .0001], 0.809 [0.764-0.854, P < .0001], and 0.817 [0.773-0.860, P < .0001], respectively. The receiver-operating characteristic analysis yielded a cutoff value of -6.5% for mid-segmental LS with a sensitivity of 76% and specificity of 76% to predict segmental MV as defined by SPECT. Multilayer strain analysis does not evaluate MV with more accuracy than standard segmental LS analysis. © 2018 Wiley Periodicals, Inc.

  6. Model-Based Segmentation of Cortical Regions of Interest for Multi-subject Analysis of fMRI Data

    NASA Astrophysics Data System (ADS)

    Engel, Karin; Brechmann, André; Toennies, Klaus

    The high inter-subject variability of human neuroanatomy complicates the analysis of functional imaging data across subjects. We propose a method for the correct segmentation of cortical regions of interest based on the cortical surface. First results on the segmentation of Heschl's gyrus indicate the capability of our approach for correct comparison of functional activations in relation to individual cortical patterns.

  7. Computer Aided Solution for Automatic Segmenting and Measurements of Blood Leucocytes Using Static Microscope Images.

    PubMed

    Abdulhay, Enas; Mohammed, Mazin Abed; Ibrahim, Dheyaa Ahmed; Arunkumar, N; Venkatraman, V

    2018-02-17

    Segmentation of blood leucocytes in medical images is regarded as a difficult process because of the variability of blood cells in shape and size and the difficulty of determining the location of leucocytes. Manual analysis of blood tests to recognize leukocytes is tedious, time-consuming and liable to error because of the varied morphological components of the cells. Segmentation of such imagery is further complicated by the complexity of the images and by the lack of leucocyte models that fully capture the probable shapes of each structure while also accounting for cell overlap, the wide variety of blood cells in shape and size, the various factors influencing the outer appearance of blood leucocytes, and the low contrast of static microscope images combined with noise. We propose a strategy for segmenting blood leucocytes in static microscope images that combines three established computer vision techniques: image enhancement, a support vector machine for segmenting the image, and filtering out non-ROI (region of interest) objects on the basis of local binary patterns (LBP) and texture features. Each of these techniques is adapted to the blood leucocyte segmentation problem, so the resulting pipeline is considerably more robust than its individual components. Finally, we assess the framework by comparing its output with manual segmentation. The findings of this study demonstrate a new approach that automatically segments blood leucocytes and identifies them in static microscope images. Initially, the method uses a trainable segmentation procedure and a trained support vector machine classifier to accurately identify the position of the ROI. After that, non-ROI objects are filtered out based on histogram analysis so that the correct objects are retained. Finally, the leucocyte type is identified using texture features. The performance of the proposed approach was evaluated by comparing the system's output against manual examination by a gynaecologist using diverse scales. A total of 100 microscope images were used for the comparison, and the results showed that the proposed solution is a viable alternative to manual segmentation for accurately determining the ROI. We evaluated blood leucocyte identification using the ROI texture (LBP features); the identification accuracy of the technique is about 95.3%, with 100% sensitivity and 91.66% specificity.
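
    The non-ROI filtering stage relies on local binary pattern (LBP) texture descriptors. The sketch below shows one way such a descriptor could be computed per candidate region with scikit-image; the radius, number of sampling points and the downstream classifier are assumptions, and the trainable SVM segmentation stage is not shown.

        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_histogram(gray_patch, points=8, radius=1.0):
            """Normalized uniform-LBP histogram of one candidate region."""
            lbp = local_binary_pattern(gray_patch, points, radius, method="uniform")
            hist, _ = np.histogram(lbp, bins=np.arange(points + 3), density=True)
            return hist

        # A classifier trained on labelled leucocyte/non-leucocyte patches (e.g., an SVM)
        # would consume these histograms to reject non-ROI candidates and type the cells.
        demo_patch = (np.random.rand(40, 40) * 255).astype(np.uint8)   # placeholder region
        print(lbp_histogram(demo_patch))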

  8. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation

    PubMed Central

    Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe

    2015-01-01

    Purpose We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). Materials and Methods The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. Results VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). Conclusion It is possible to quantify VBIC and VA for absorbable implants using micro-CT analysis with a region-based segmentation method. PMID:25793178
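
    Once implant and bone have been segmented into binary volumes, the volumetric contact measurement reduces to simple voxel logic. The sketch below is schematic only: the masks are placeholders, contact is approximated as bone voxels inside a one-voxel dilation shell of the implant, and the paper's exact region-labelling and morphology settings are not reproduced.

        import numpy as np
        from scipy import ndimage as ndi

        implant = np.load("implant_mask.npy").astype(bool)    # hypothetical segmented micro-CT masks
        bone = np.load("bone_mask.npy").astype(bool)

        shell = ndi.binary_dilation(implant) & ~implant       # one-voxel shell around the implant surface
        contact_voxels = int((shell & bone).sum())
        vbic_percent = 100.0 * contact_voxels / shell.sum()   # share of the implant surface in contact
        print("VBIC ~ %.1f%%" % vbic_percent)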

  9. Automated analysis of time-lapse fluorescence microscopy images: from live cell images to intracellular foci.

    PubMed

    Dzyubachyk, Oleh; Essers, Jeroen; van Cappellen, Wiggert A; Baldeyron, Céline; Inagaki, Akiko; Niessen, Wiro J; Meijering, Erik

    2010-10-01

    Complete, accurate and reproducible analysis of intracellular foci from fluorescence microscopy image sequences of live cells requires full automation of all processing steps involved: cell segmentation and tracking followed by foci segmentation and pattern analysis. Integrated systems for this purpose are lacking. Extending our previous work in cell segmentation and tracking, we developed a new system for performing fully automated analysis of fluorescent foci in single cells. The system was validated by applying it to two common tasks: intracellular foci counting (in DNA damage repair experiments) and cell-phase identification based on foci pattern analysis (in DNA replication experiments). Experimental results show that the system performs comparably to expert human observers. Thus, it may replace tedious manual analyses for the considered tasks, and enables high-content screening. The described system was implemented in MATLAB (The MathWorks, Inc., USA) and compiled to run within the MATLAB environment. The routines together with four sample datasets are available at http://celmia.bigr.nl/. The software is planned for public release, free of charge for non-commercial use, after publication of this article.
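
    The foci-counting task itself can be approximated with a standard blob detector once a cell has been segmented and cropped; the sketch below uses a Laplacian-of-Gaussian detector from scikit-image with placeholder sizes and thresholds, and does not reproduce the published MATLAB system.

        import numpy as np
        from skimage import io
        from skimage.feature import blob_log

        cell = io.imread("nucleus_crop.tif").astype(float)   # hypothetical single-cell crop
        cell /= cell.max()                                    # normalize intensities to [0, 1]
        foci = blob_log(cell, min_sigma=1, max_sigma=4, num_sigma=8, threshold=0.1)
        print("foci in this cell:", len(foci))                # each row is (y, x, sigma)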

  10. NiftyNet: a deep-learning platform for medical imaging.

    PubMed

    Gibson, Eli; Li, Wenqi; Sudre, Carole; Fidon, Lucas; Shakir, Dzhoshkun I; Wang, Guotai; Eaton-Rosen, Zach; Gray, Robert; Doel, Tom; Hu, Yipeng; Whyntie, Tom; Nachev, Parashkev; Modat, Marc; Barratt, Dean C; Ourselin, Sébastien; Cardoso, M Jorge; Vercauteren, Tom

    2018-05-01

    Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning applications. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  11. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context, and still comes at a vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
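
    The Dice similarity coefficient used above is DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks A and B; a minimal sketch, with placeholder mask files:

        import numpy as np

        def dice(a, b):
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        auto = np.load("auto_kidney_mask.npy")       # hypothetical automated segmentation volume
        manual = np.load("manual_kidney_mask.npy")   # hypothetical hand-segmented gold standard
        print("Dice = %.3f" % dice(auto, manual))    # the study reports an average around 0.90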

  12. Automatic initialization and quality control of large-scale cardiac MRI segmentations.

    PubMed

    Albà, Xènia; Lekadir, Karim; Pereañez, Marco; Medrano-Gracia, Pau; Young, Alistair A; Frangi, Alejandro F

    2018-01-01

    Continuous advances in imaging technologies enable ever more comprehensive phenotyping of human anatomy and physiology. Concomitant reduction of imaging costs has resulted in widespread use of imaging in large clinical trials and population imaging studies. Magnetic Resonance Imaging (MRI), in particular, offers one-stop-shop multidimensional biomarkers of cardiovascular physiology and pathology. A wide range of analysis methods offer sophisticated cardiac image assessment and quantification for clinical and research studies. However, most methods have only been evaluated on relatively small databases often not accessible for open and fair benchmarking. Consequently, published performance indices are not directly comparable across studies and their translation and scalability to large clinical trials or population imaging cohorts is uncertain. Most existing techniques still rely on considerable manual intervention for the initialization and quality control of the segmentation process, becoming prohibitive when dealing with thousands of images. The contributions of this paper are three-fold. First, we propose a fully automatic method for initializing cardiac MRI segmentation, by using image features and random forests regression to predict an initial position of the heart and key anatomical landmarks in an MRI volume. In processing a full imaging database, the technique predicts the optimal corrective displacements and positions in relation to the initial rough intersections of the long and short axis images. Second, we introduce for the first time a quality control measure capable of identifying incorrect cardiac segmentations with no visual assessment. The method uses statistical, pattern and fractal descriptors in a random forest classifier to detect failures to be corrected or removed from subsequent statistical analysis. Finally, we validate these new techniques within a full pipeline for cardiac segmentation applicable to large-scale cardiac MRI databases. The results obtained based on over 1200 cases from the Cardiac Atlas Project show the promise of fully automatic initialization and quality control for population studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Statistical model of laminar structure for atlas-based segmentation of the fetal brain from in utero MR images

    NASA Astrophysics Data System (ADS)

    Habas, Piotr A.; Kim, Kio; Chandramohan, Dharshan; Rousseau, Francois; Glenn, Orit A.; Studholme, Colin

    2009-02-01

    Recent advances in MR and image analysis allow for reconstruction of high-resolution 3D images from clinical in utero scans of the human fetal brain. Automated segmentation of tissue types from MR images (MRI) is a key step in the quantitative analysis of brain development. Conventional atlas-based methods for adult brain segmentation are limited in their ability to accurately delineate complex structures of developing tissues from fetal MRI. In this paper, we formulate a novel geometric representation of the fetal brain aimed at capturing the laminar structure of developing anatomy. The proposed model uses a depth-based encoding of tissue occurrence within the fetal brain and provides an additional anatomical constraint in a form of a laminar prior that can be incorporated into conventional atlas-based EM segmentation. Validation experiments are performed using clinical in utero scans of 5 fetal subjects at gestational ages ranging from 20.5 to 22.5 weeks. Experimental results are evaluated against reference manual segmentations and quantified in terms of Dice similarity coefficient (DSC). The study demonstrates that the use of laminar depth-encoded tissue priors improves both the overall accuracy and precision of fetal brain segmentation. Particular refinement is observed in regions of the parietal and occipital lobes where the DSC index is improved from 0.81 to 0.82 for cortical grey matter, from 0.71 to 0.73 for the germinal matrix, and from 0.81 to 0.87 for white matter.

  14. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.
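
    The mean absolute surface distance quoted above can be approximated from two binary masks using distance transforms. The snippet below is a generic symmetric implementation that assumes isotropic voxels scaled by `spacing`; it illustrates the metric rather than the authors' exact evaluation code.

```python
import numpy as np
from scipy import ndimage

def surface_voxels(mask: np.ndarray) -> np.ndarray:
    """Boolean array marking voxels on the boundary of a binary mask."""
    eroded = ndimage.binary_erosion(mask)
    return mask & ~eroded

def mean_absolute_surface_distance(a: np.ndarray, b: np.ndarray, spacing=1.0) -> float:
    """Symmetric mean distance (in mm if spacing is in mm) between two segmentation surfaces."""
    sa, sb = surface_voxels(a.astype(bool)), surface_voxels(b.astype(bool))
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    return 0.5 * (dist_to_b[sa].mean() + dist_to_a[sb].mean())
```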

  15. Fast and robust segmentation of white blood cell images by self-supervised learning.

    PubMed

    Zheng, Xin; Wang, Yong; Wang, Guoyou; Liu, Jianguo

    2018-04-01

    A fast and accurate white blood cell (WBC) segmentation remains a challenging task, as different WBCs vary significantly in color and shape due to cell type differences, staining technique variations and the adhesion between WBCs and red blood cells. In this paper, a self-supervised learning approach, consisting of unsupervised initial segmentation and supervised segmentation refinement, is presented. The first module extracts the overall foreground region from the cell image by K-means clustering, and then generates a coarse WBC region by touching-cell splitting based on concavity analysis. The second module further uses the coarse segmentation result of the first module as automatic labels to actively train a support vector machine (SVM) classifier. Then, the trained SVM classifier is used to classify each pixel of the image and achieve a more accurate segmentation result. To improve segmentation accuracy, median color features representing the topological structure and a new weak edge enhancement operator (WEEO) handling fuzzy boundaries are introduced. To further reduce the time cost, an efficient cluster sampling strategy is also proposed. We tested the proposed approach with two blood cell image datasets obtained under various imaging and staining conditions. The experimental results show that our approach achieves superior accuracy and time cost on both datasets. Copyright © 2018 Elsevier Ltd. All rights reserved.
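
    The self-supervised scheme above can be illustrated in a few lines: an unsupervised clustering produces coarse labels that then train a pixel-wise SVM. The sketch below uses scikit-learn on per-pixel color features and omits the paper's touching-cell splitting, weak edge enhancement, and cluster-sampling refinements; the foreground-cluster heuristic is an assumption made only for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def self_supervised_segmentation(image: np.ndarray) -> np.ndarray:
    """image: (H, W, 3) float RGB array. Returns a boolean WBC mask (coarse-to-refined)."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3)

    # Stage 1: unsupervised coarse segmentation by K-means clustering.
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
    # Assumption: the darkest cluster (stained WBC region) is the foreground.
    fg_cluster = np.argmin(km.cluster_centers_.sum(axis=1))
    coarse_labels = (km.labels_ == fg_cluster).astype(int)

    # Stage 2: use the coarse result as automatic labels to train an SVM,
    # then re-classify every pixel for a refined segmentation.
    rng = np.random.default_rng(0)
    sample = rng.choice(len(pixels), size=min(5000, len(pixels)), replace=False)
    svm = SVC(kernel="rbf", gamma="scale").fit(pixels[sample], coarse_labels[sample])
    refined = svm.predict(pixels).reshape(h, w).astype(bool)
    return refined
```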

  16. Pomegranate MR images analysis using ACM and FCM algorithms

    NASA Astrophysics Data System (ADS)

    Morad, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.

    2011-10-01

    Segmentation of an image plays an important role in image processing applications. In this paper, segmentation of pomegranate magnetic resonance (MR) images is explored. Pomegranate has valuable nutritional and medicinal properties, and the maturity indices and quality of its internal tissues play an important role in the sorting process; an accurate determination of these features cannot easily be achieved by a human operator. Seeds and soft tissues are the main internal components of pomegranate. For research purposes, such as non-destructive investigation aimed at determining the ripening index and the percentage of seeds during the growth period, segmentation of the internal structures should be performed as accurately as possible. In this paper, we present an automatic algorithm to segment the internal structure of pomegranate. Since the intensity of the stem and calyx is close to that of the internal tissues, stem and calyx pixels are usually mislabeled as internal tissue by the segmentation algorithm. To solve this problem, the fruit shape is first extracted from its background using an active contour model (ACM). Then the stem and calyx are removed using morphological filters. Finally, the image is segmented by fuzzy c-means (FCM). The experimental results show an accuracy of 95.91% in the presence of the stem and calyx, while the segmentation accuracy increases to 97.53% when the stem and calyx are first removed by morphological filters.

  17. Whole vertebral bone segmentation method with a statistical intensity-shape model based approach

    NASA Astrophysics Data System (ADS)

    Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer

    2011-03-01

    An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing 4 different statistical intensity-shape combined models for the cervical, upper / lower thoracic and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as a pre-processing step to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After principal component analysis (PCA) of the shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image through maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral region). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777 and 0.939 mm for the cervical, upper thoracic, lower thoracic and lumbar vertebrae, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for cervical, thoracic and lumbar vertebrae.
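
    The statistical model described above represents each vertebra as a mean shape-intensity vector plus a linear combination of principal component vectors. The sketch below shows that construction with scikit-learn's PCA on generic training vectors; the training file and variable names are illustrative placeholders, not the authors' data or fitting code.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical training matrix: one row per training vertebra, each row a
# concatenated shape-intensity expression (e.g., stacked landmark coordinates
# and sampled intensities).
training_vectors = np.load("vertebra_training_vectors.npy")   # placeholder file

pca = PCA(n_components=0.95)          # keep components explaining 95% of the variance
pca.fit(training_vectors)

def synthesize_instance(b: np.ndarray) -> np.ndarray:
    """Generate a model instance x = mean + P @ b from shape parameters b."""
    return pca.mean_ + pca.components_.T @ b

# During segmentation, the parameters b (plus a pose transform) would be adjusted
# to maximize the posterior fit of this parametric model to the target image.
b = np.zeros(pca.n_components_)       # b = 0 reproduces the mean vertebra
mean_instance = synthesize_instance(b)
```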

  18. Fully automatic segmentation of femurs with medullary canal definition in high and in low resolution CT scans.

    PubMed

    Almeida, Diogo F; Ruben, Rui B; Folgado, João; Fernandes, Paulo R; Audenaert, Emmanuel; Verhegghe, Benedict; De Beule, Matthieu

    2016-12-01

    Femur segmentation can be an important tool in orthopedic surgical planning. However, in order to overcome the need for an experienced user with extensive knowledge of the techniques, segmentation should be fully automatic. In this paper a new fully automatic femur segmentation method for CT images is presented. This method is also able to define the medullary canal automatically and performs well even in low resolution CT scans. Fully automatic femoral segmentation was performed by adapting a template mesh of the femoral volume to the medical images. In order to achieve this, an adaptation of the active shape model (ASM) technique based on the statistical shape model (SSM) and local appearance model (LAM) of the femur, with a novel initialization method, was used to drive the template mesh deformation to fit the in-image femoral shape in a time-effective approach. With the proposed method a 98% convergence rate was achieved. For the high resolution CT image group, the average error is less than 1 mm. For the low resolution image group the results are also accurate, and the average error is less than 1.5 mm. The proposed segmentation pipeline is accurate, robust and completely user free. The method is robust to patient orientation, image artifacts and poorly defined edges. The results excelled even in CT images with a significant slice thickness, i.e., above 5 mm. Medullary canal segmentation increases the geometric information that can be used in orthopedic surgical planning or in finite element analysis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  19. Quantitative mouse brain phenotyping based on single and multispectral MR protocols

    PubMed Central

    Badea, Alexandra; Gewalt, Sally; Avants, Brian B.; Cook, James J.; Johnson, G. Allan

    2013-01-01

    Sophisticated image analysis methods have been developed for the human brain, but such tools still need to be adapted and optimized for quantitative small animal imaging. We propose a framework for quantitative anatomical phenotyping in mouse models of neurological and psychiatric conditions. The framework encompasses an atlas space, image acquisition protocols, and software tools to register images into this space. We show that a suite of segmentation tools (Avants, Epstein et al., 2008) designed for human neuroimaging can be incorporated into a pipeline for segmenting mouse brain images acquired with multispectral magnetic resonance imaging (MR) protocols. We present a flexible approach for segmenting such hyperimages, optimizing registration, and identifying optimal combinations of image channels for particular structures. Brain imaging with T1, T2* and T2 contrasts yielded accuracy in the range of 83% for hippocampus and caudate putamen (Hc and CPu), but only 54% in white matter tracts, and 44% for the ventricles. The addition of diffusion tensor parameter images improved accuracy for large gray matter structures (by >5%), white matter (10%), and ventricles (15%). The use of Markov random field segmentation further improved overall accuracy in the C57BL/6 strain by 6%; so Dice coefficients for Hc and CPu reached 93%, for white matter 79%, for ventricles 68%, and for substantia nigra 80%. We demonstrate the segmentation pipeline for the widely used C57BL/6 strain, and two test strains (BXD29, APP/TTA). This approach appears promising for characterizing temporal changes in mouse models of human neurological and psychiatric conditions, and may provide anatomical constraints for other preclinical imaging, e.g. fMRI and molecular imaging. This is the first demonstration that multiple MR imaging modalities combined with multivariate segmentation methods lead to significant improvements in anatomical segmentation in the mouse brain. PMID:22836174

  20. Segmentation of touching mycobacterium tuberculosis from Ziehl-Neelsen stained sputum smear images

    NASA Astrophysics Data System (ADS)

    Xu, Chao; Zhou, Dongxiang; Liu, Yunhui

    2015-12-01

    Touching Mycobacterium tuberculosis objects in Ziehl-Neelsen stained sputum smear images present varied shapes and invisible boundaries in the adhesion areas, which increases the difficulty of object recognition and counting. In this paper, we present a segmentation method combining hierarchy tree analysis with a gradient vector flow snake to address this problem. The skeletons of the objects are used for structure analysis based on the hierarchy tree. The gradient vector flow snake is used to estimate the object edges. Experimental results show that the single objects composing the touching objects are successfully segmented by the proposed method. This work will improve the accuracy and practicability of the computer-aided diagnosis of tuberculosis.

  1. Image analysis for the automated estimation of clonal growth and its application to the growth of smooth muscle cells.

    PubMed

    Gavino, V C; Milo, G E; Cornwell, D G

    1982-03-01

    Image analysis was used for the automated measurement of colony frequency (f) and colony diameter (d) in cultures of smooth muscle cells. Initial studies with the inverted microscope showed that the number of cells (N) in a colony varied directly with d: log N = 1.98 log d - 3.469. Image analysis generated the complement of a cumulative distribution for f as a function of d. The number of cells in each segment of the distribution function was calculated by multiplying f and the average N for the segment. These data were displayed as a cumulative distribution function. The total number of colonies (fT) and the total number of cells (NT) were used to calculate the average colony size (NA). Population doublings (PD) were then expressed as log2 NA. Image analysis confirmed previous studies in which colonies were sized and counted with an inverted microscope. Thus, image analysis is a rapid and automated technique for the measurement of clonal growth.
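
    The relation between colony diameter and cell number, together with the definition of population doublings, reduces to simple arithmetic. The short sketch below reproduces that calculation for an illustrative list of (diameter, frequency) bins; the bin values are hypothetical, and diameters are expressed in the same (unstated) units as the original fit.

```python
import math

def cells_per_colony(d: float) -> float:
    """Cell number from colony diameter using log N = 1.98 log d - 3.469."""
    return 10 ** (1.98 * math.log10(d) - 3.469)

# Hypothetical distribution: (average colony diameter, number of colonies) per bin.
bins = [(200.0, 40), (400.0, 25), (800.0, 10)]

f_total = sum(f for _, f in bins)                          # total number of colonies (fT)
n_total = sum(f * cells_per_colony(d) for d, f in bins)    # total number of cells (NT)
n_average = n_total / f_total                              # average colony size (NA)
population_doublings = math.log2(n_average)                # PD = log2 NA
print(f"NA = {n_average:.1f} cells/colony, PD = {population_doublings:.2f}")
```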

  2. SuperSegger: robust image segmentation, analysis and lineage tracking of bacterial cells.

    PubMed

    Stylianidou, Stella; Brennan, Connor; Nissen, Silas B; Kuwada, Nathan J; Wiggins, Paul A

    2016-11-01

    Many quantitative cell biology questions require fast yet reliable automated image segmentation to identify and link cells from frame-to-frame, and characterize the cell morphology and fluorescence. We present SuperSegger, an automated MATLAB-based image processing package well-suited to quantitative analysis of high-throughput live-cell fluorescence microscopy of bacterial cells. SuperSegger incorporates machine-learning algorithms to optimize cellular boundaries and automated error resolution to reliably link cells from frame-to-frame. Unlike existing packages, it can reliably segment microcolonies with many cells, facilitating the analysis of cell-cycle dynamics in bacteria as well as cell-contact mediated phenomena. This package has a range of built-in capabilities for characterizing bacterial cells, including the identification of cell division events, mother, daughter and neighbouring cells, and computing statistics on cellular fluorescence, the location and intensity of fluorescent foci. SuperSegger provides a variety of postprocessing data visualization tools for single cell and population level analysis, such as histograms, kymographs, frame mosaics, movies and consensus images. Finally, we demonstrate the power of the package by analyzing lag phase growth with single cell resolution. © 2016 John Wiley & Sons Ltd.

  3. Real-Time Ultrasound Segmentation, Analysis and Visualisation of Deep Cervical Muscle Structure.

    PubMed

    Cunningham, Ryan J; Harding, Peter J; Loram, Ian D

    2017-02-01

    Despite widespread availability of ultrasound and a need for personalised muscle diagnosis (neck/back pain-injury, work related disorder, myopathies, neuropathies), robust, online segmentation of muscles within complex groups remains unsolved by existing methods. For example, Cervical Dystonia (CD) is a prevalent neurological condition causing painful spasticity in one or multiple muscles in the cervical muscle system. Clinicians currently have no method for targeting/monitoring treatment of deep muscles. Automated methods of muscle segmentation would enable clinicians to study, target, and monitor the deep cervical muscles via ultrasound. We have developed a method for segmenting five bilateral cervical muscles and the spine via ultrasound alone, in real-time. Magnetic Resonance Imaging (MRI) and ultrasound data were collected from 22 participants (age: 29.0±6.6, male: 12). To acquire ultrasound muscle segment labels, a novel multimodal registration method was developed, involving MRI image annotation, and shape registration to MRI-matched ultrasound images, via approximation of the tissue deformation. We then applied polynomial regression to transform our annotations and textures into a mean space, before using shape statistics to generate a texture-to-shape dictionary. For segmentation, test images were compared to dictionary textures giving an initial segmentation, and then we used a customized Active Shape Model to refine the fit. Using ultrasound alone, on unseen participants, our technique currently segments a single image in [Formula: see text] to over 86% accuracy (Jaccard index). We propose this approach is applicable generally to segment, extrapolate and visualise deep muscle structure, and analyse statistical features online.

  4. Emergence of Convolutional Neural Network in Future Medicine: Why and How. A Review on Brain Tumor Segmentation

    NASA Astrophysics Data System (ADS)

    Alizadeh Savareh, Behrouz; Emami, Hassan; Hajiabadi, Mohamadreza; Ghafoori, Mahyar; Majid Azimi, Seyed

    2018-03-01

    Manual analysis of brain tumor magnetic resonance images is laborious and error-prone, and several techniques have been proposed for brain tumor segmentation. This study focuses on searching popular databases for related studies and surveys the theoretical and practical aspects of Convolutional Neural Networks in brain tumor segmentation. Based on our findings, details of the related studies, including the datasets used, evaluation parameters, preferred architectures and complementary processing steps, are analyzed. Deep learning, as a revolutionary idea in image processing, has achieved excellent results in brain tumor segmentation as well, and this trend is likely to continue until the next revolutionary idea emerges.

  5. Fetal brain volumetry through MRI volumetric reconstruction and segmentation

    PubMed Central

    Estroff, Judy A.; Barnewolt, Carol E.; Connolly, Susan A.; Warfield, Simon K.

    2013-01-01

    Purpose Fetal MRI volumetry is a useful technique but it is limited by a dependency upon motion-free scans, tedious manual segmentation, and spatial inaccuracy due to thick-slice scans. An image processing pipeline that addresses these limitations was developed and tested. Materials and methods The principal sequences acquired in fetal MRI clinical practice are multiple orthogonal single-shot fast spin echo scans. State-of-the-art image processing techniques were used for inter-slice motion correction and super-resolution reconstruction of high-resolution volumetric images from these scans. The reconstructed volume images were processed with intensity non-uniformity correction and the fetal brain extracted by using supervised automated segmentation. Results Reconstruction, segmentation and volumetry of the fetal brains for a cohort of twenty-five clinically acquired fetal MRI scans were performed. Performance metrics for volume reconstruction, segmentation and volumetry were determined by comparing to manual tracings in five randomly chosen cases. Finally, analysis of the fetal brain and parenchymal volumes was performed based on the gestational age of the fetuses. Conclusion The image processing pipeline developed in this study enables volume rendering and accurate fetal brain volumetry by addressing the limitations of current volumetry techniques, which include dependency on motion-free scans, manual segmentation, and inaccurate thick-slice interpolation. PMID:20625848

  6. Segmentation of pomegranate MR images using spatial fuzzy c-means (SFCM) algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.

    2011-10-01

    Segmentation is one of the fundamental issues of image processing and machine vision and plays a prominent role in a variety of image processing applications. In this paper, one of the most important applications of image processing, MR image segmentation of pomegranate, is explored. Pomegranate is a fruit with pharmacological properties such as being anti-viral and anti-cancer. Having a high quality product in hand is a critical factor in its marketing, and the internal quality of the product is of comprehensive importance in the sorting process. The determination of these qualitative features cannot be made manually; therefore, the segmentation of the internal structures of the fruit needs to be performed as accurately as possible in the presence of noise. The fuzzy c-means (FCM) algorithm is sensitive to noise, and noisy pixels are misclassified. As a solution, this paper proposes the spatial FCM (SFCM) algorithm for the segmentation of pomegranate MR images. The algorithm incorporates spatial neighborhood information into FCM by modifying the fuzzy membership function for each class. The segmentation results on the original pomegranate MR images and on images corrupted by Gaussian, salt-and-pepper and speckle noise show that the SFCM algorithm performs substantially better than the standard FCM algorithm. Also, after several steps of qualitative and quantitative analysis, we conclude that the SFCM algorithm with a 5×5 window size performs better than the other window sizes.
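
    A minimal sketch of the spatial-FCM idea is given below: after each standard FCM membership update, every membership map is averaged over a local window and folded back into the memberships, which suppresses isolated noisy pixels. This is a simplified NumPy illustration of the general SFCM concept, not the authors' exact formulation or parameter choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_fcm(image: np.ndarray, n_clusters=3, m=2.0, window=5, n_iter=30):
    """Minimal spatial fuzzy c-means clustering of a 2D grayscale image."""
    x = image.astype(float)
    rng = np.random.default_rng(0)
    centers = rng.uniform(x.min(), x.max(), n_clusters)

    for _ in range(n_iter):
        # Distance of every pixel to every cluster centre, shape (n_clusters, H, W).
        dist = np.abs(x[None, :, :] - centers[:, None, None]) + 1e-9
        # Standard FCM memberships: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        u = 1.0 / np.sum((dist[:, None] / dist[None, :]) ** (2.0 / (m - 1.0)), axis=1)

        # Spatial regularization: average each membership map over a local window
        # and renormalize so memberships still sum to one at every pixel.
        h_spatial = np.stack([uniform_filter(u[i], size=window) for i in range(n_clusters)])
        u = u * h_spatial
        u = u / u.sum(axis=0, keepdims=True)

        # Update the cluster centres with the fuzzified memberships.
        um = u ** m
        centers = (um * x).sum(axis=(1, 2)) / um.sum(axis=(1, 2))

    return np.argmax(u, axis=0), centers   # hard labels and final centres
```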

  7. Exploiting spectral content for image segmentation in GPR data

    NASA Astrophysics Data System (ADS)

    Wang, Patrick K.; Morton, Kenneth D., Jr.; Collins, Leslie M.; Torrione, Peter A.

    2011-06-01

    Ground-penetrating radar (GPR) sensors provide an effective means for detecting changes in the sub-surface electrical properties of soils, such as changes indicative of landmines or other buried threats. However, most GPR-based pre-screening algorithms only localize target responses along the surface of the earth, and do not provide information regarding an object's position in depth. As a result, feature extraction algorithms are forced to process data from entire cubes of data around pre-screener alarms, which can reduce feature fidelity and hamper performance. In this work, spectral analysis is investigated as a method for locating subsurface anomalies in GPR data. In particular, a 2-D spatial/frequency decomposition is applied to pre-screener flagged GPR B-scans. Analysis of these spatial/frequency regions suggests that aspects (e.g. moments, maxima, mode) of the frequency distribution of GPR energy can be indicative of the presence of target responses. After translating a GPR image to a function of the spatial/frequency distributions at each pixel, several image segmentation approaches can be applied to perform segmentation in this new transformed feature space. To illustrate the efficacy of the approach, a performance comparison between feature processing with and without the image segmentation algorithm is provided.

  8. Automatic segmentation of the left ventricle in a cardiac MR short axis image using blind morphological operation

    NASA Astrophysics Data System (ADS)

    Irshad, Mehreen; Muhammad, Nazeer; Sharif, Muhammad; Yasmeen, Mussarat

    2018-04-01

    Conventionally, cardiac MR image analysis is done manually. Automatic image analysis can replace this monotonous task over massive amounts of data when assessing the global and regional function of the cardiac left ventricle (LV). This task uses MR images to calculate analytic cardiac parameters such as end-systolic volume, end-diastolic volume, ejection fraction, and myocardial mass. These analytic parameters depend upon accurate delineation of the epicardial, endocardial, papillary muscle, and trabeculation contours. In this paper, we propose an automatic segmentation method using the sum of absolute differences technique to localize the left ventricle. Blind morphological operations are proposed to segment and detect the LV contours of the epicardium and endocardium automatically. We evaluate the proposed work on the benchmark Sunnybrook dataset. Contours of the epicardium and endocardium are compared quantitatively to determine the contour accuracy, and high matching values are observed. The overlap between the automatic segmentation and the ground truth analysis provided by an expert is high, with an index value of 91.30%. The proposed method for automatic segmentation gives better performance than existing techniques in terms of accuracy.
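
    Localization by the sum of absolute differences is essentially template matching. The sketch below scans a search image with a cropped LV template and returns the offset with the smallest SAD; the template and search image are illustrative inputs, and the blind morphological contour extraction that follows in the paper is not reproduced.

```python
import numpy as np

def sad_localize(search_img: np.ndarray, template: np.ndarray):
    """Return the (row, col) offset where the template best matches the image
    according to the sum of absolute differences (SAD)."""
    H, W = search_img.shape
    th, tw = template.shape
    best_score, best_pos = np.inf, (0, 0)
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            window = search_img[r:r + th, c:c + tw]
            score = np.abs(window.astype(float) - template.astype(float)).sum()
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```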

  9. Three-dimensional segmentation of the tumor mass in computed tomographic images of neuroblastoma

    NASA Astrophysics Data System (ADS)

    Deglint, Hanford J.; Rangayyan, Rangaraj M.; Boag, Graham S.

    2004-05-01

    Tumor definition and diagnosis require the analysis of the spatial distribution and Hounsfield unit (HU) values of voxels in computed tomography (CT) images, coupled with a knowledge of normal anatomy. Segmentation of the tumor in neuroblastoma is complicated by the fact that the mass is almost always heterogeneous in nature; furthermore, viable tumor, necrosis, fibrosis, and normal tissue are often intermixed. Rather than attempt to separate these tissue types into distinct regions, we propose to explore methods to delineate the normal structures expected in abdominal CT images, remove them from further consideration, and examine the remaining parts of the images for the tumor mass. We explore the use of fuzzy connectivity for this purpose. Expert knowledge provided by the radiologist in the form of the expected structures and their shapes, HU values, and radiological characteristics are also incorporated in the segmentation algorithm. Segmentation and analysis of the tissue composition of the tumor can assist in quantitative assessment of the response to chemotherapy and in the planning of delayed surgery for resection of the tumor. The performance of the algorithm is evaluated using cases acquired from the Alberta Children's Hospital.

  10. Background fluorescence estimation and vesicle segmentation in live cell imaging with conditional random fields.

    PubMed

    Pécot, Thierry; Bouthemy, Patrick; Boulanger, Jérôme; Chessel, Anatole; Bardin, Sabine; Salamero, Jean; Kervrann, Charles

    2015-02-01

    Image analysis applied to fluorescence live cell microscopy has become a key tool in molecular biology since it enables to characterize biological processes in space and time at the subcellular level. In fluorescence microscopy imaging, the moving tagged structures of interest, such as vesicles, appear as bright spots over a static or nonstatic background. In this paper, we consider the problem of vesicle segmentation and time-varying background estimation at the cellular scale. The main idea is to formulate the joint segmentation-estimation problem in the general conditional random field framework. Furthermore, segmentation of vesicles and background estimation are alternatively performed by energy minimization using a min cut-max flow algorithm. The proposed approach relies on a detection measure computed from intensity contrasts between neighboring blocks in fluorescence microscopy images. This approach permits analysis of either 2D + time or 3D + time data. We demonstrate the performance of the so-called C-CRAFT through an experimental comparison with the state-of-the-art methods in fluorescence video-microscopy. We also use this method to characterize the spatial and temporal distribution of Rab6 transport carriers at the cell periphery for two different specific adhesion geometries.

  11. Segment fusion of ToF-SIMS images.

    PubMed

    Milillo, Tammy M; Miller, Mary E; Fischione, Remo; Montes, Angelina; Gardella, Joseph A

    2016-06-08

    The imaging capabilities of time-of-flight secondary ion mass spectrometry (ToF-SIMS) have not been used to their full potential in the analysis of polymer and biological samples. Imaging has been limited by the size of the dataset and the chemical complexity of the sample being imaged. Pixel- and segment-based image fusion algorithms commonly used in remote sensing, ecology, geography, and geology provide a way to improve spatial resolution and classification of biological images. In this study, a sample of Arabidopsis thaliana was treated with silver nanoparticles and imaged with ToF-SIMS. These images provide insight into the uptake mechanism for the silver nanoparticles into the plant tissue, giving new understanding of the mechanism of uptake of heavy metals in the environment. The Munechika algorithm was programmed in-house and applied to achieve pixel-based fusion, which improved the spatial resolution of the image obtained. Multispectral and quadtree segment- or region-based fusion algorithms were performed using the eCognition software, a commercially available remote sensing software suite, and used to classify the images. The Munechika fusion improved the spatial resolution for the images containing silver nanoparticles, while the segment fusion allowed classification and fusion based on the tissue types in the sample, suggesting potential pathways for the uptake of the silver nanoparticles.

  12. Automatic neuron segmentation and neural network analysis method for phase contrast microscopy images.

    PubMed

    Pang, Jincheng; Özkucur, Nurdan; Ren, Michael; Kaplan, David L; Levin, Michael; Miller, Eric L

    2015-11-01

    Phase Contrast Microscopy (PCM) is an important tool for the long term study of living cells. Unlike fluorescence methods which suffer from photobleaching of fluorophore or dye molecules, PCM image contrast is generated by the natural variations in optical index of refraction. Unfortunately, the same physical principles which allow for these studies give rise to complex artifacts in the raw PCM imagery. Of particular interest in this paper are neuron images where these image imperfections manifest in very different ways for the two structures of specific interest: cell bodies (somas) and dendrites. To address these challenges, we introduce a novel parametric image model using the level set framework and an associated variational approach which simultaneously restores and segments this class of images. Using this technique as the basis for an automated image analysis pipeline, results for both the synthetic and real images validate and demonstrate the advantages of our approach.
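
    As a point of reference for the level set framework mentioned above, the sketch below runs a generic region-based level-set segmentation (morphological Chan-Vese, as implemented in scikit-image) on a normalized grayscale image. It is only a stand-in for the variational machinery; the paper's joint restoration-segmentation model and its soma/dendrite-specific terms are not reproduced, and the iteration count and smoothing value are arbitrary.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_regions(image: np.ndarray) -> np.ndarray:
    """Region-based level-set segmentation of a grayscale microscopy image."""
    # Normalize to [0, 1] so the default parameters behave sensibly.
    img = (image.astype(float) - image.min()) / (image.max() - image.min() + 1e-9)
    # 200 iterations of the morphological Chan-Vese evolution from the default
    # checkerboard level set; the iteration count is passed positionally so the
    # call works across scikit-image versions that renamed that parameter.
    mask = morphological_chan_vese(img, 200, smoothing=2)
    return mask.astype(bool)
```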

  13. Spectral imaging toolbox: segmentation, hyperstack reconstruction, and batch processing of spectral images for the determination of cell and model membrane lipid order.

    PubMed

    Aron, Miles; Browning, Richard; Carugo, Dario; Sezgin, Erdinc; Bernardino de la Serna, Jorge; Eggeling, Christian; Stride, Eleanor

    2017-05-12

    Spectral imaging with polarity-sensitive fluorescent probes enables the quantification of cell and model membrane physical properties, including local hydration, fluidity, and lateral lipid packing, usually characterized by the generalized polarization (GP) parameter. With the development of commercial microscopes equipped with spectral detectors, spectral imaging has become a convenient and powerful technique for measuring GP and other membrane properties. The existing tools for spectral image processing, however, are insufficient for processing the large data sets afforded by this technological advancement, and are unsuitable for processing images acquired with rapidly internalized fluorescent probes. Here we present a MATLAB spectral imaging toolbox with the aim of overcoming these limitations. In addition to common operations, such as the calculation of distributions of GP values, generation of pseudo-colored GP maps, and spectral analysis, a key highlight of this tool is reliable membrane segmentation for probes that are rapidly internalized. Furthermore, handling for hyperstacks, 3D reconstruction and batch processing facilitates analysis of data sets generated by time series, z-stack, and area scan microscope operations. Finally, the object size distribution is determined, which can provide insight into the mechanisms underlying changes in membrane properties and is desirable for e.g. studies involving model membranes and surfactant coated particles. Analysis is demonstrated for cell membranes, cell-derived vesicles, model membranes, and microbubbles with environmentally-sensitive probes Laurdan, carboxyl-modified Laurdan (C-Laurdan), Di-4-ANEPPDHQ, and Di-4-AN(F)EPPTEA (FE), for quantification of the local lateral density of lipids or lipid packing. The Spectral Imaging Toolbox is a powerful tool for the segmentation and processing of large spectral imaging datasets, providing a reliable method for membrane segmentation and requiring no programming ability. The Spectral Imaging Toolbox can be downloaded from https://uk.mathworks.com/matlabcentral/fileexchange/62617-spectral-imaging-toolbox .

  14. Automated image segmentation-assisted flattening of atomic force microscopy images.

    PubMed

    Wang, Yuliang; Lu, Tongda; Li, Xiaolai; Wang, Huimin

    2018-01-01

    Atomic force microscopy (AFM) images normally exhibit various artifacts. As a result, image flattening is required prior to image analysis. To obtain optimized flattening results, foreground features are generally excluded manually using rectangular masks during image flattening, which is time-consuming and inaccurate. In this study, a two-step scheme was proposed to achieve optimized image flattening in an automated manner. In the first step, the convex and concave features in the foreground were automatically segmented with accurate boundary detection. The extracted foreground features were taken as exclusion masks. In the second step, data points in the background were fitted as polynomial curves/surfaces, which were then subtracted from the raw images to obtain the flattened images. Moreover, sliding-window-based polynomial fitting was proposed to process images with complex background trends. The working principle of the two-step image flattening scheme is presented, followed by an investigation of the influence of the sliding-window size and the polynomial fitting direction on the flattened images. Additionally, the role of image flattening in the morphological characterization and segmentation of AFM images was verified with the proposed method.
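
    The background-fitting step of the second stage can be illustrated line by line: foreground pixels flagged by the segmentation mask are excluded from a polynomial fit of each scan line, and the fitted trend is subtracted from the whole line. The snippet below is a simplified NumPy sketch (per-line fitting only, no sliding window), not the authors' implementation.

```python
import numpy as np

def flatten_lines(image: np.ndarray, foreground_mask: np.ndarray, order: int = 2) -> np.ndarray:
    """Subtract a per-scan-line polynomial background, ignoring masked foreground pixels.

    image:           2D AFM height map, one scan line per row.
    foreground_mask: boolean array, True where convex/concave features were segmented.
    """
    flattened = np.empty_like(image, dtype=float)
    x = np.arange(image.shape[1])
    for i, line in enumerate(image.astype(float)):
        bg = ~foreground_mask[i]                     # background pixels only
        if bg.sum() > order:                         # enough points to fit the polynomial
            coeffs = np.polyfit(x[bg], line[bg], order)
            flattened[i] = line - np.polyval(coeffs, x)
        else:                                        # fall back to removing the mean
            flattened[i] = line - line.mean()
    return flattened
```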

  15. Saliency detection algorithm based on LSC-RC

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Tian, Weiye; Wang, Ding; Luo, Xin; Wu, Yingfei; Zhang, Yu

    2018-02-01

    Image saliency refers to the most important region of an image, the region that attracts human visual attention and response. Preferentially allocating computational resources to this salient region for image analysis and synthesis is of great significance for improving image region detection. As a preprocessing step for other tasks in the image processing field, saliency detection has wide applications in image retrieval and image segmentation. Among these applications, the superpixel-based saliency detection algorithm built on linear spectral clustering (LSC) has achieved good results. The saliency detection algorithm proposed in this paper improves on the region contrast (RC) method by replacing its region formation step with LSC superpixel blocks. Combined with recent deep learning methods, the accuracy of salient region detection is greatly improved. Finally, comparative tests demonstrate the superiority and feasibility of the superpixel segmentation detection algorithm based on linear spectral clustering.

  16. MIMoSA: An Automated Method for Intermodal Segmentation Analysis of Multiple Sclerosis Brain Lesions.

    PubMed

    Valcarcel, Alessandra M; Linn, Kristin A; Vandekar, Simon N; Satterthwaite, Theodore D; Muschelli, John; Calabresi, Peter A; Pham, Dzung L; Martin, Melissa Lynne; Shinohara, Russell T

    2018-03-08

    Magnetic resonance imaging (MRI) is crucial for in vivo detection and characterization of white matter lesions (WMLs) in multiple sclerosis. While WMLs have been studied for over two decades using MRI, automated segmentation remains challenging. Although the majority of statistical techniques for the automated segmentation of WMLs are based on single imaging modalities, recent advances have used multimodal techniques for identifying WMLs. Complementary modalities emphasize different tissue properties, which help identify interrelated features of lesions. Method for Inter-Modal Segmentation Analysis (MIMoSA), a fully automatic lesion segmentation algorithm that utilizes novel covariance features from intermodal coupling regression in addition to mean structure to model the probability lesion is contained in each voxel, is proposed. MIMoSA was validated by comparison with both expert manual and other automated segmentation methods in two datasets. The first included 98 subjects imaged at Johns Hopkins Hospital in which bootstrap cross-validation was used to compare the performance of MIMoSA against OASIS and LesionTOADS, two popular automatic segmentation approaches. For a secondary validation, a publicly available data from a segmentation challenge were used for performance benchmarking. In the Johns Hopkins study, MIMoSA yielded average Sørensen-Dice coefficient (DSC) of .57 and partial AUC of .68 calculated with false positive rates up to 1%. This was superior to performance using OASIS and LesionTOADS. The proposed method also performed competitively in the segmentation challenge dataset. MIMoSA resulted in statistically significant improvements in lesion segmentation performance compared with LesionTOADS and OASIS, and performed competitively in an additional validation study. Copyright © 2018 by the American Society of Neuroimaging.

  17. Breast tumor segmentation in DCE-MRI using fully convolutional networks with an application in radiogenomics

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Saha, Ashirbani; Zhu, Zhe; Mazurowski, Maciej A.

    2018-02-01

    Breast tumor segmentation based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) remains an active as well as a challenging problem. Previous studies often rely on manual annotation of tumor regions, which is not only time-consuming but also error-prone. Recent studies have shown high promise of deep learning-based methods in various segmentation problems. However, these methods are usually faced with the challenge of a limited number (e.g., tens or hundreds) of medical images for training, leading to sub-optimal segmentation performance. Also, previous methods cannot efficiently deal with the prevalent class-imbalance problem in tumor segmentation, where the number of voxels in tumor regions is much lower than that in the background area. To address these issues, in this study, we propose a mask-guided hierarchical learning (MHL) framework for breast tumor segmentation via fully convolutional networks (FCN). Our strategy is first decomposing the original difficult problem into several sub-problems and then solving these relatively simpler sub-problems in a hierarchical manner. To precisely identify locations of tumors that underwent a biopsy, we further propose an FCN model to detect two landmarks defined on the nipples. Finally, based on both segmentation probability maps and our identified landmarks, we propose to select biopsied tumors from all detected tumors via a tumor selection strategy using the pathology location. We validate our MHL method using data for 272 patients, and achieve a mean Dice similarity coefficient (DSC) of 0.72 in breast tumor segmentation. Finally, in a radiogenomic analysis, we show that previously developed image features achieve comparable performance for identifying the luminal A subtype when applied to the automatic segmentation and to a semi-manual segmentation, demonstrating high promise for fully automated radiogenomic analysis in breast cancer.
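
    One common way of coping with the class imbalance mentioned above is to optimize an overlap-based objective rather than a per-voxel cross-entropy. The function below is a generic soft Dice loss written in NumPy, shown only to illustrate that idea; it is not the paper's mask-guided hierarchical training scheme.

```python
import numpy as np

def soft_dice_loss(pred_probs: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice loss between predicted foreground probabilities and a binary target.

    Because the loss is normalized by the total foreground mass, it is far less
    sensitive than cross-entropy to the tumor/background voxel imbalance.
    """
    p = pred_probs.ravel().astype(float)
    t = target.ravel().astype(float)
    intersection = (p * t).sum()
    dice = (2.0 * intersection + eps) / (p.sum() + t.sum() + eps)
    return 1.0 - dice
```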

  18. Simultaneous 3D segmentation of three bone compartments on high resolution knee MR images from osteoarthritis initiative (OAI) using graph cuts

    NASA Astrophysics Data System (ADS)

    Shim, Hackjoon; Kwoh, C. Kent; Yun, Il Dong; Lee, Sang Uk; Bae, Kyongtae

    2009-02-01

    Osteoarthritis (OA) is associated with degradation of cartilage and related changes in the underlying bone. Quantitative measurement of those changes from MR images is an important biomarker to study the progression of OA, and it requires a reliable segmentation of knee bone and cartilage. As the most popular method, manual segmentation of knee joint structures by boundary delineation is highly laborious and subject to user variation. To overcome these difficulties, we have developed a semi-automated method for segmentation of knee bones, which consists of two steps: placement of seeds and computation of the segmentation. In the first step, seeds were placed by the user on a number of slices and then were propagated automatically to neighboring images. The seed placement could be performed on any of the sagittal, coronal, and axial planes. The second step, computation of the segmentation, was based on a graph-cuts algorithm where the optimal segmentation is the one that minimizes a cost function, which integrated the seeds specified by the user and both the regional and boundary properties of the regions to be segmented. The algorithm also allows simultaneous segmentation of three compartments of the knee bone (femur, tibia, patella). Our method was tested on the knee MR images of six subjects from the osteoarthritis initiative (OAI). The segmentation processing time (mean ± SD) was (22 ± 4) min, which is much shorter than that of the manual boundary delineation method (typically several hours). With this improved efficiency, our segmentation method will facilitate the quantitative morphologic analysis of changes in knee bones associated with osteoarthritis.
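
    A minimal version of this seed-driven graph-cut energy minimization can be sketched with the PyMaxflow package (assumed to be available): seed voxels receive large terminal capacities as the regional term, a constant smoothness weight on neighbouring voxels serves as a simplified boundary term, and the minimum cut yields the binary segmentation. This illustrates the machinery only, not the authors' multi-compartment implementation; the function and parameter names are placeholders.

```python
import numpy as np
import maxflow  # PyMaxflow package (assumed available)

def graph_cut_segment(volume: np.ndarray, fg_seeds: np.ndarray, bg_seeds: np.ndarray,
                      smoothness: float = 50.0) -> np.ndarray:
    """Binary graph-cut segmentation of a 3D volume from user seeds.

    volume:   3D intensity array (e.g., an MR image).
    fg_seeds: boolean array, True at voxels marked as object (bone).
    bg_seeds: boolean array, True at voxels marked as background.
    """
    g = maxflow.Graph[float]()
    node_ids = g.add_grid_nodes(volume.shape)

    # Boundary term: uniform n-link weights between neighbouring voxels.
    g.add_grid_edges(node_ids, weights=smoothness, symmetric=True)

    # Regional term: hard constraints at the seeds (very large terminal capacities).
    inf = 1e9
    source_caps = np.where(fg_seeds, inf, 0.0)   # links to the object terminal
    sink_caps = np.where(bg_seeds, inf, 0.0)     # links to the background terminal
    g.add_grid_tedges(node_ids, source_caps, sink_caps)

    g.maxflow()                                   # minimize the cost function by min-cut
    return g.get_grid_segments(node_ids)          # boolean: side of the cut per voxel
```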

  19. Segmentation of cortical bone using fast level sets

    NASA Astrophysics Data System (ADS)

    Chowdhury, Manish; Jörgens, Daniel; Wang, Chunliang; Smedby, Örjan; Moreno, Rodrigo

    2017-02-01

    Cortical bone plays a major role in the mechanical competence of bone, and its analysis requires accurate segmentation methods. Level set methods are usually among the state of the art for segmenting medical images. However, traditional implementations of this method are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through High Resolution peripheral Quantitative Computed Tomography (HR-pQCT). The obtained segmentations are used to estimate the cortical thickness and cortical porosity of the investigated images. Cortical thickness and cortical porosity are computed using sphere fitting and mathematical morphology operations, respectively. Qualitative comparison between the segmentations of our proposed algorithm and a previously published approach on six image volumes reveals superior smoothness properties of the level set approach. While the proposed method yields similar results to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions. This results in more stable estimates of cortical bone parameters. The proposed technique takes a few seconds to compute, which makes it suitable for clinical settings.

  20. Segmentation of White Blood Cells From Microscopic Images Using a Novel Combination of K-Means Clustering and Modified Watershed Algorithm.

    PubMed

    Ghane, Narjes; Vard, Alireza; Talebi, Ardeshir; Nematollahy, Pardis

    2017-01-01

    Recognition of white blood cells (WBCs) is the first step in diagnosing some particular diseases such as acquired immune deficiency syndrome, leukemia, and other blood-related diseases; this is usually done by pathologists using an optical microscope. The process is time-consuming, extremely tedious, and expensive, and it requires experienced experts in this field. Thus, a computer-aided diagnosis system that assists pathologists in the diagnostic process can be highly effective. Segmentation of WBCs is usually the first step in developing such a system. The main purpose of this paper is to segment WBCs from microscopic images. For this purpose, we present a novel combination of thresholding, k-means clustering, and modified watershed algorithms in three stages: (1) segmentation of WBCs from a microscopic image, (2) extraction of nuclei from the cell image, and (3) separation of overlapping cells and nuclei. The evaluation results of the proposed method show that the similarity measures, precision, and sensitivity were 92.07%, 96.07%, and 94.30%, respectively, for nucleus segmentation and 92.93%, 97.41%, and 93.78% for cell segmentation. In addition, statistical analysis shows high similarity between manual segmentation and the results obtained by the proposed method.
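
    The third stage, separation of overlapping cells and nuclei, is the classic use case for a marker-controlled watershed. The sketch below is the standard distance-transform recipe with SciPy and scikit-image, not the paper's modified watershed: peaks of the distance map seed the markers and the watershed splits the touching objects.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_cells(binary_mask: np.ndarray, min_distance: int = 10) -> np.ndarray:
    """Label touching objects in a binary mask using a distance-transform watershed."""
    distance = ndimage.distance_transform_edt(binary_mask)

    # Local maxima of the distance map act as one marker per (approximate) cell.
    coords = peak_local_max(distance, min_distance=min_distance, labels=binary_mask)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

    # Flood the inverted distance map from the markers, constrained to the mask.
    return watershed(-distance, markers, mask=binary_mask)
```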

  1. A three-dimensional image processing program for accurate, rapid, and semi-automated segmentation of neuronal somata with dense neurite outgrowth

    PubMed Central

    Ross, James D.; Cullen, D. Kacy; Harris, James P.; LaPlaca, Michelle C.; DeWeerth, Stephen P.

    2015-01-01

    Three-dimensional (3-D) image analysis techniques provide a powerful means to rapidly and accurately assess complex morphological and functional interactions between neural cells. Current software-based identification methods of neural cells generally fall into two applications: (1) segmentation of cell nuclei in high-density constructs or (2) tracing of cell neurites in single cell investigations. We have developed novel methodologies to permit the systematic identification of populations of neuronal somata possessing rich morphological detail and dense neurite arborization throughout thick tissue or 3-D in vitro constructs. The image analysis incorporates several novel automated features for the discrimination of neurites and somata by initially classifying features in 2-D and merging these classifications into 3-D objects; the 3-D reconstructions automatically identify and adjust for over and under segmentation errors. Additionally, the platform provides for software-assisted error corrections to further minimize error. These features attain very accurate cell boundary identifications to handle a wide range of morphological complexities. We validated these tools using confocal z-stacks from thick 3-D neural constructs where neuronal somata had varying degrees of neurite arborization and complexity, achieving an accuracy of ≥95%. We demonstrated the robustness of these algorithms in a more complex arena through the automated segmentation of neural cells in ex vivo brain slices. These novel methods surpass previous techniques by improving the robustness and accuracy by: (1) the ability to process neurites and somata, (2) bidirectional segmentation correction, and (3) validation via software-assisted user input. This 3-D image analysis platform provides valuable tools for the unbiased analysis of neural tissue or tissue surrogates within a 3-D context, appropriate for the study of multi-dimensional cell-cell and cell-extracellular matrix interactions. PMID:26257609

  2. 3D Image Analysis of Geomaterials using Confocal Microscopy

    NASA Astrophysics Data System (ADS)

    Mulukutla, G.; Proussevitch, A.; Sahagian, D.

    2009-05-01

    Confocal microscopy is one of the most significant advances in optical microscopy of the last century. It is widely used in biological sciences but its application to geomaterials lingers due to a number of technical problems. Potentially the technique can perform non-invasive testing on a laser illuminated sample that fluoresces using a unique optical sectioning capability that rejects out-of-focus light reaching the confocal aperture. Fluorescence in geomaterials is commonly induced using epoxy doped with a fluorochrome that is impregnated into the sample to enable discrimination of various features such as void space or material boundaries. However, for many geomaterials, this method cannot be used because they do not naturally fluoresce and because epoxy cannot be impregnated into inaccessible parts of the sample due to lack of permeability. As a result, the confocal images of most geomaterials that have not been pre-processed with extensive sample preparation techniques are of poor quality and lack the necessary image and edge contrast necessary to apply any commonly used segmentation techniques to conduct any quantitative study of its features such as vesicularity, internal structure, etc. In our present work, we are developing a methodology to conduct a quantitative 3D analysis of images of geomaterials collected using a confocal microscope with minimal amount of prior sample preparation and no addition of fluorescence. Two sample geomaterials, a volcanic melt sample and a crystal chip containing fluid inclusions are used to assess the feasibility of the method. A step-by-step process of image analysis includes application of image filtration to enhance the edges or material interfaces and is based on two segmentation techniques: geodesic active contours and region competition. Both techniques have been applied extensively to the analysis of medical MRI images to segment anatomical structures. Preliminary analysis suggests that there is distortion in the shapes of the segmented vesicles, vapor bubbles, and void spaces due to the optical measurements, so corrective actions are being explored. This will establish a practical and reliable framework for an adaptive 3D image processing technique for the analysis of geomaterials using confocal microscopy.

  3. Optic disc segmentation: level set methods and blood vessels inpainting

    NASA Astrophysics Data System (ADS)

    Almazroa, A.; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-03-01

    Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head (ONH) pathology such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of ONH abnormalities. The main contribution of this paper is in presenting a novel OD segmentation algorithm based on applying a level set method on a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique is applied. The algorithm is evaluated using a new retinal fundus image dataset called RIGA (Retinal Images for Glaucoma Analysis). In the case of low quality images, a double level set is applied in which the first level set is considered to be a localization for the OD. Five hundred and fifty images are used to test the algorithm accuracy as well as its agreement with manual markings by six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid is 83.9%, and the best agreement is observed between the results of the algorithm and manual markings in 379 images.
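
    The vessel-removal step before the level set can be reproduced with a standard inpainting call. The snippet below uses OpenCV's cv2.inpaint on a localized OD image and assumes a binary vessel mask is already available (e.g., from a separate vessel segmentation step); it is a generic stand-in, not necessarily the paper's exact inpainting technique.

```python
import cv2
import numpy as np

def remove_vessels(od_image: np.ndarray, vessel_mask: np.ndarray, radius: int = 5) -> np.ndarray:
    """Inpaint retinal blood vessels inside a localized optic disc image.

    od_image:    8-bit BGR (or grayscale) localized OD image.
    vessel_mask: binary vessel segmentation of the same size (non-zero = vessel).
    """
    mask = (vessel_mask > 0).astype(np.uint8)          # cv2.inpaint needs an 8-bit mask
    # Telea's fast-marching inpainting fills the vessel pixels from their surroundings,
    # so they no longer interfere with the subsequent level-set evolution.
    return cv2.inpaint(od_image, mask, radius, cv2.INPAINT_TELEA)
```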

  4. A Complete System for Automatic Extraction of Left Ventricular Myocardium From CT Images Using Shape Segmentation and Contour Evolution

    PubMed Central

    Zhu, Liangjia; Gao, Yi; Appia, Vikram; Yezzi, Anthony; Arepalli, Chesnal; Faber, Tracy; Stillman, Arthur; Tannenbaum, Allen

    2014-01-01

    The left ventricular myocardium plays a key role in the entire circulation system and an automatic delineation of the myocardium is a prerequisite for most of the subsequent functional analysis. In this paper, we present a complete system for an automatic segmentation of the left ventricular myocardium from cardiac computed tomography (CT) images using the shape information from images to be segmented. The system follows a coarse-to-fine strategy by first localizing the left ventricle and then deforming the myocardial surfaces of the left ventricle to refine the segmentation. In particular, the blood pool of a CT image is extracted and represented as a triangulated surface. Then, the left ventricle is localized as a salient component on this surface using geometric and anatomical characteristics. After that, the myocardial surfaces are initialized from the localization result and evolved by applying forces from the image intensities with a constraint based on the initial myocardial surface locations. The proposed framework has been validated on 34-human and 12-pig CT images, and the robustness and accuracy are demonstrated. PMID:24723531

  5. Supervised retinal vessel segmentation from color fundus images based on matched filtering and AdaBoost classifier.

    PubMed

    Memari, Nogol; Ramli, Abd Rahman; Bin Saripan, M Iqbal; Mashohor, Syamsiah; Moghbel, Mehrdad

    2017-01-01

    The structure and appearance of the blood vessel network in retinal fundus images is an essential part of diagnosing various problems associated with the eyes, such as diabetes and hypertension. In this paper, an automatic retinal vessel segmentation method utilizing matched filter techniques coupled with an AdaBoost classifier is proposed. The fundus image is enhanced using morphological operations, the contrast is increased using contrast limited adaptive histogram equalization (CLAHE) method and the inhomogeneity is corrected using Retinex approach. Then, the blood vessels are enhanced using a combination of B-COSFIRE and Frangi matched filters. From this preprocessed image, different statistical features are computed on a pixel-wise basis and used in an AdaBoost classifier to extract the blood vessel network inside the image. Finally, the segmented images are postprocessed to remove the misclassified pixels and regions. The proposed method was validated using publicly accessible Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE) and Child Heart and Health Study in England (CHASE_DB1) datasets commonly used for determining the accuracy of retinal vessel segmentation methods. The accuracy of the proposed segmentation method was comparable to other state of the art methods while being very close to the manual segmentation provided by the second human observer with an average accuracy of 0.972, 0.951 and 0.948 in DRIVE, STARE and CHASE_DB1 datasets, respectively.
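
    The preprocessing and matched-filter enhancement stages described above map onto standard scikit-image calls. The sketch below applies CLAHE to the green channel and enhances tubular structures with the Frangi vesselness filter; the B-COSFIRE filter, Retinex correction, and the AdaBoost classification stage are not reproduced, and the parameter values are illustrative.

```python
import numpy as np
from skimage.exposure import equalize_adapthist
from skimage.filters import frangi

def enhance_vessels(fundus_rgb: np.ndarray) -> np.ndarray:
    """Return a vesselness map in [0, 1] from an RGB fundus image."""
    # The green channel carries the best vessel/background contrast in fundus images.
    green = fundus_rgb[..., 1].astype(float)
    green = (green - green.min()) / (green.max() - green.min() + 1e-9)

    # Contrast limited adaptive histogram equalization (CLAHE).
    equalized = equalize_adapthist(green, clip_limit=0.02)

    # Frangi matched filter: responds to dark, elongated (vessel-like) structures.
    vesselness = frangi(equalized, black_ridges=True)
    return vesselness / (vesselness.max() + 1e-9)
```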

  6. A probabilistic approach to segmentation and classification of neoplasia in uterine cervix images using color and geometric features

    NASA Astrophysics Data System (ADS)

    Srinivasan, Yeshwanth; Hernes, Dana; Tulpule, Bhakti; Yang, Shuyu; Guo, Jiangling; Mitra, Sunanda; Yagneswaran, Sriraja; Nutter, Brian; Jeronimo, Jose; Phillips, Benny; Long, Rodney; Ferris, Daron

    2005-04-01

    Automated segmentation and classification of diagnostic markers in medical imagery are challenging tasks. Numerous algorithms for segmentation and classification based on statistical approaches of varying complexity are found in the literature. However, the design of an efficient and automated algorithm for precise classification of desired diagnostic markers is extremely image-specific. The National Library of Medicine (NLM), in collaboration with the National Cancer Institute (NCI), is creating an archive of 60,000 digitized color images of the uterine cervix. NLM is developing tools for the analysis and dissemination of these images over the Web for the study of visual features correlated with precancerous neoplasia and cancer. To enable indexing of images of the cervix, it is essential to develop algorithms for the segmentation of regions of interest, such as acetowhitened regions, and automatic identification and classification of regions exhibiting mosaicism and punctation. Success of such algorithms depends, primarily, on the selection of relevant features representing the region of interest. We present color and geometric features based statistical classification and segmentation algorithms yielding excellent identification of the regions of interest. The distinct classification of the mosaic regions from the non-mosaic ones has been obtained by clustering multiple geometric and color features of the segmented sections using various morphological and statistical approaches. Such automated classification methodologies will facilitate content-based image retrieval from the digital archive of uterine cervix and have the potential of developing an image based screening tool for cervical cancer.

  7. Automatic quantitative analysis of in-stent restenosis using FD-OCT in vivo intra-arterial imaging.

    PubMed

    Mandelias, Kostas; Tsantis, Stavros; Spiliopoulos, Stavros; Katsakiori, Paraskevi F; Karnabatidis, Dimitris; Nikiforidis, George C; Kagadis, George C

    2013-06-01

    A new segmentation technique is implemented for automatic lumen area extraction and stent strut detection in intravascular optical coherence tomography (OCT) images for the purpose of quantitative analysis of in-stent restenosis (ISR). In addition, a user-friendly graphical user interface (GUI) is developed based on the employed algorithm toward clinical use. Four clinical datasets of frequency-domain OCT scans of the human femoral artery were analyzed. First, a segmentation method based on fuzzy C means (FCM) clustering and wavelet transform (WT) was applied toward inner luminal contour extraction. Subsequently, stent strut positions were detected by utilizing metrics derived from the local maxima of the wavelet transform into the FCM membership function. The inner lumen contour and the position of stent strut were extracted with high precision. Compared to manual segmentation by an expert physician, the automatic lumen contour delineation had an average overlap value of 0.917 ± 0.065 for all OCT images included in the study. The strut detection procedure achieved an overall accuracy of 93.80% and successfully identified 9.57 ± 0.5 struts for every OCT image. Processing time was confined to approximately 2.5 s per OCT frame. A new fast and robust automatic segmentation technique combining FCM and WT for lumen border extraction and strut detection in intravascular OCT images was designed and implemented. The proposed algorithm integrated in a GUI represents a step forward toward the employment of automated quantitative analysis of ISR in clinical practice.
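    The clustering step at the core of the method above can be illustrated with a generic fuzzy C-means routine. The sketch below is a plain NumPy implementation of standard FCM, assuming only pixel intensities as features; the wavelet-transform coupling and strut-detection metrics of the paper are not reproduced.

```python
# Generic fuzzy C-means (FCM) in NumPy, illustrating the clustering step only;
# the paper couples FCM with wavelet-transform maxima, which is not shown here.
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """x: (N, d) feature array. Returns (cluster centers, membership matrix)."""
    rng = np.random.default_rng(seed)
    u = rng.random((x.shape[0], c))
    u /= u.sum(axis=1, keepdims=True)                 # random fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u_new = 1.0 / (d ** (2.0 / (m - 1.0)))        # standard FCM membership update
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            return centers, u_new
        u = u_new
    return centers, u

# For a single OCT frame, x could be the pixel intensities reshaped to (N, 1):
# centers, u = fuzzy_cmeans(frame.reshape(-1, 1).astype(float), c=2)
# lumen_mask = u.argmax(axis=1).reshape(frame.shape)
```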

  8. Learning a cost function for microscope image segmentation.

    PubMed

    Nilufar, Sharmin; Perkins, Theodore J

    2014-01-01

    Quantitative analysis of microscopy images is increasingly important in clinical researchers' efforts to unravel the cellular and molecular determinants of disease, and for pathological analysis of tissue samples. Yet, manual segmentation and measurement of cells or other features in images remains the norm in many fields. We report on a new system that aims for robust and accurate semi-automated analysis of microscope images. A user interactively outlines one or more examples of a target object in a training image. We then learn a cost function for detecting more objects of the same type, either in the same or different images. The cost function is incorporated into an active contour model, which can efficiently determine optimal boundaries by dynamic programming. We validate our approach and compare it to some standard alternatives on three different types of microscopic images: light microscopy of blood cells, light microscopy of muscle tissue sections, and electron microscopy cross-sections of axons and their myelin sheaths.

  9. Pressure ulcer image segmentation technique through synthetic frequencies generation and contrast variation using toroidal geometry.

    PubMed

    David, Ortiz P; Sierra-Sosa, Daniel; Zapirain, Begoña García

    2017-01-06

    Pressure ulcers have become a subject of study in recent years due to their high treatment costs and the decreased quality of life of patients. These chronic wounds are related to the global increase in life expectancy, with geriatric and physically disabled patients being the groups principally affected by this condition. Diagnosis and treatment of these injuries usually take weeks or even months of work by medical personnel. Using non-invasive techniques, such as image processing, it is possible to analyze ulcers and aid in their diagnosis. This paper proposes a novel technique for image segmentation based on contrast changes, using synthetic frequencies obtained from the grayscale value available in each pixel of the image. These synthetic frequencies are calculated using the model of energy density over an electric field to describe a relation between a constant density and the image amplitude at a pixel. A toroidal geometry is used to decompose the image into different contrast levels by varying the synthetic frequencies. Then, the decomposed image is binarized by applying Otsu's threshold, allowing the contours that describe the contrast variations to be obtained. Morphological operations are used to obtain the desired segment of the image. The proposed technique is evaluated on a database assembled from 51 pressure ulcer images provided by the Centre IGURCO. Segmentation of these pressure ulcer images can aid in their diagnosis and treatment. To provide evidence of the technique's performance, digital image correlation was used as a measure, comparing the segments obtained using the methodology with the real segments. The proposed technique is compared with two benchmark algorithms. The technique achieves an average correlation of 0.89 with a variation of ±0.1 and a computational time of 9.04 seconds. The methodology presents better segmentation results than the benchmark algorithms, using less computational time and without the need for an initial condition.
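    The binarization and cleanup steps named above (Otsu's threshold followed by morphological operations) can be sketched with scikit-image, as below. A generic grayscale contrast image stands in for the output of the synthetic-frequency/toroidal decomposition, which is not reproduced here; the structuring-element size and minimum object size are assumptions.

```python
# Sketch of the binarization and morphological cleanup described above, using
# scikit-image. The toroidal "synthetic frequency" decomposition itself is not
# reproduced; a generic grayscale contrast image stands in for its output.
import numpy as np
from skimage import filters, morphology

def segment_wound(contrast_img, min_size=500):
    """contrast_img: 2D float array (stand-in for one decomposed contrast level)."""
    t = filters.threshold_otsu(contrast_img)                     # Otsu's global threshold
    binary = contrast_img > t
    binary = morphology.binary_closing(binary, morphology.disk(3))
    binary = morphology.remove_small_objects(binary, min_size=min_size)
    return binary
```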

  10. Multiplex Quantitative Histologic Analysis of Human Breast Cancer Cell Signaling and Cell Fate

    DTIC Science & Technology

    2010-05-01

    Keywords: breast cancer, cell signaling, cell proliferation, histology, image analysis. The report describes FARSIGHT, software for automated multispectral image analysis developed under Task 3 (computational algorithms for multispectral immunohistological image analysis) to segment and quantify features revealed by individual stains in multiplex combinations.

  11. Improved inference in Bayesian segmentation using Monte Carlo sampling: application to hippocampal subfield volumetry.

    PubMed

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen

    2013-10-01

    Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer's disease classification task. As an additional benefit, the technique also allows one to compute informative "error bars" on the volume estimates of individual structures. Copyright © 2013 Elsevier B.V. All rights reserved.
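    The idea of marginalizing over model parameters rather than fixing them can be illustrated with a toy Metropolis-Hastings loop that averages per-label posteriors over sampled parameter values. The sketch below is schematic: `log_posterior_theta` and `segment_posterior_given` are hypothetical placeholders for model-specific quantities, and nothing here reproduces the hippocampal subfield model used in the paper.

```python
# Schematic Metropolis-Hastings sketch of marginalizing a segmentation over
# model parameters theta instead of fixing theta at a point estimate.
# `log_posterior_theta` and `segment_posterior_given` are hypothetical
# placeholders for the model-specific quantities.
import numpy as np

def marginalized_segmentation(image, theta0, log_posterior_theta,
                              segment_posterior_given, n_samples=200,
                              step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior_theta(theta, image)
    accum = None
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.shape)   # random-walk proposal
        logp_prop = log_posterior_theta(prop, image)
        if np.log(rng.random()) < logp_prop - logp:              # MH accept/reject
            theta, logp = prop, logp_prop
        probs = segment_posterior_given(theta, image)            # p(label | image, theta)
        accum = probs if accum is None else accum + probs
    return accum / n_samples                                     # approx. p(label | image)
```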

  12. Improved Inference in Bayesian Segmentation Using Monte Carlo Sampling: Application to Hippocampal Subfield Volumetry

    PubMed Central

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Leemput, Koen Van

    2013-01-01

    Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer’s disease classification task. As an additional benefit, the technique also allows one to compute informative “error bars” on the volume estimates of individual structures. PMID:23773521

  13. Cell nuclei and cytoplasm joint segmentation using the sliding band filter.

    PubMed

    Quelhas, Pedro; Marcuzzo, Monica; Mendonça, Ana Maria; Campilho, Aurélio

    2010-08-01

    Microscopy cell image analysis is a fundamental tool for biological research. In particular, multivariate fluorescence microscopy is used to observe different aspects of cells in cultures. It is still common practice to perform analysis tasks by visual inspection of individual cells, which is time-consuming, exhausting and prone to subjective bias. This makes automatic cell image analysis essential for large-scale, objective studies of cell cultures. Traditionally, the task of automatic cell analysis is approached through the use of image segmentation methods for extraction of cells' locations and shapes. Image segmentation, although fundamental, is neither an easy task in computer vision nor is it robust to image quality changes. This makes image segmentation for cell detection semi-automated, requiring frequent tuning of parameters. We introduce a new approach for cell detection and shape estimation in multivariate images based on the sliding band filter (SBF). This filter's design makes it adequate to detect overall convex shapes, and as such it performs well for cell detection. Furthermore, the parameters involved are intuitive, as they are directly related to the expected cell size. Using the SBF, we detect the locations and shapes of cell nuclei and cytoplasm. Based on the assumption that each cell has approximately the same shape center in both the nuclei and cytoplasm fluorescence channels, we guide cytoplasm shape estimation by the nuclear detections, improving performance and reducing errors. Then we validate cell detection by gathering evidence from the nuclei and cytoplasm channels. Additionally, we include overlap correction and shape regularization steps which further improve the estimated cell shapes. The approach is evaluated using two datasets with different types of data: a 20-image benchmark set of simulated cell culture images containing 1000 simulated cells, and a 16-image Drosophila melanogaster Kc167 dataset containing 1255 cells, stained for DNA and actin. Both image datasets present a difficult problem due to the high variability of cell shapes and frequent cluster overlap between cells. On the Drosophila dataset our approach achieved precision/recall of 95%/69% and 82%/90% for nuclei and cytoplasm detection, respectively, and an overall accuracy of 76%.

  14. Flexible methods for segmentation evaluation: results from CT-based luggage screening.

    PubMed

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2014-01-01

    Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. Our aim was to develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors; the methods must also measure feature recovery and allow us to prioritize segments. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm, and human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.

  15. Assessment of Multiresolution Segmentation for Extracting Greenhouses from WORLDVIEW-2 Imagery

    NASA Astrophysics Data System (ADS)

    Aguilar, M. A.; Aguilar, F. J.; García Lorca, A.; Guirado, E.; Betlej, M.; Cichon, P.; Nemmaoui, A.; Vallario, A.; Parente, C.

    2016-06-01

    The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications. In this context, the object-based image analysis (OBIA) approach has proved to be the best option when working with VHR satellite imagery. OBIA considers spectral, geometric, textural and topological attributes associated with meaningful image objects. Thus, the first step of OBIA, referred to as segmentation, is to delineate objects of interest. Determination of an optimal segmentation is crucial for a good performance of the second stage of OBIA, the classification process. The main goal of this work is to assess the multiresolution segmentation algorithm provided by the eCognition software for delineating greenhouses from WorldView-2 multispectral orthoimages. Specifically, the focus is on finding the optimal parameters of the multiresolution segmentation approach (i.e., Scale, Shape and Compactness) for plastic greenhouses. The optimum Scale parameter estimation was based on the idea of local variance of object heterogeneity within a scene (ESP2 tool). Moreover, different segmentation results were attained by using different combinations of Shape and Compactness values. Assessment of segmentation quality, based on the discrepancy between reference polygons and corresponding image segments, was carried out to identify the optimal setting of the multiresolution segmentation parameters. Three discrepancy indices were used: Potential Segmentation Error (PSE), Number-of-Segments Ratio (NSR) and Euclidean Distance 2 (ED2).
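    The three discrepancy indices named above can be computed from rasterized reference polygons and a labeled segmentation. The sketch below follows a common reading of these measures (PSE as the area of corresponding segments falling outside their reference polygons divided by total reference area, NSR as abs(m - v)/m for m reference polygons and v corresponding segments, and ED2 as their Euclidean combination); the 50%-overlap rule used to decide which segments correspond to a reference is an assumption, not necessarily the rule used in the paper.

```python
# Hedged sketch of PSE, NSR and ED2 from rasterized references and a label image.
# Label 0 is assumed to be background; the 50%-overlap correspondence rule is an
# assumption made for illustration.
import numpy as np

def discrepancy_indices(reference_masks, segment_label_img):
    """reference_masks: list of 2D boolean arrays; segment_label_img: 2D int labels."""
    corresponding = set()
    outside_area = 0.0
    ref_area = sum(r.sum() for r in reference_masks)
    for ref in reference_masks:
        labels = np.unique(segment_label_img[ref])
        for lab in labels[labels > 0]:
            seg = segment_label_img == lab
            if (seg & ref).sum() >= 0.5 * seg.sum():   # assumed correspondence rule
                corresponding.add(int(lab))
                outside_area += (seg & ~ref).sum()     # area spilling outside the reference
    pse = outside_area / ref_area
    nsr = abs(len(reference_masks) - len(corresponding)) / len(reference_masks)
    return pse, nsr, float(np.hypot(pse, nsr))         # (PSE, NSR, ED2)
```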

  16. Vessel segmentation in 3D spectral OCT scans of the retina

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Garvin, Mona K.; van Ginneken, Bram; Sonka, Milan; Abràmoff, Michael D.

    2008-03-01

    The latest generation of spectral optical coherence tomography (OCT) scanners is able to image 3D cross-sectional volumes of the retina at a high resolution and high speed. These scans offer a detailed view of the structure of the retina. Automated segmentation of the vessels in these volumes may lead to more objective diagnosis of retinal vascular disease, including hypertensive retinopathy and retinopathy of prematurity. Additionally, vessel segmentation can allow color fundus images to be registered to these 3D volumes, possibly leading to a better understanding of the structure and localization of retinal structures and lesions. In this paper we present a method for automatically segmenting the vessels in a 3D OCT volume. First, the retina is automatically segmented into multiple layers, using simultaneous segmentation of their boundary surfaces in 3D. Next, a 2D projection of the vessels is produced by only using information from certain segmented layers. Finally, a supervised, pixel classification based vessel segmentation approach is applied to the projection image. We compared the influence of two projection methods on the performance of the vessel segmentation using 10 optic nerve head-centered 3D OCT scans. The method was trained on 5 independent scans. Using ROC analysis, our proposed vessel segmentation system obtains an area under the curve of 0.970 when compared with the segmentation of a human observer.

  17. SparCLeS: dynamic l₁ sparse classifiers with level sets for robust beard/moustache detection and segmentation.

    PubMed

    Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios

    2013-08-01

    Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.
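    The feature-extraction and classification stage can be sketched with scikit-image's HOG implementation and a plain linear classifier. The logistic regression below is only a stand-in for the dynamic l1 sparse classifier proposed in the paper, and the patch and cell sizes are arbitrary assumptions.

```python
# Sketch of the HOG-feature classification stage. skimage.feature.hog and
# sklearn's LogisticRegression are real APIs; the l1-penalized logistic
# regression is a simple stand-in for the paper's dynamic sparse classifier.
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import LogisticRegression

def hog_features(patches):
    """patches: iterable of equally sized 2D grayscale patches."""
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

def train_skin_vs_hair(skin_patches, hair_patches):
    X = np.vstack([hog_features(skin_patches), hog_features(hair_patches)])
    y = np.array([0] * len(skin_patches) + [1] * len(hair_patches))
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)  # sparse-ish stand-in
    return clf.fit(X, y)
```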

  18. Segmentation of the Aortic Valve Apparatus in 3D Echocardiographic Images: Deformable Modeling of a Branching Medial Structure

    PubMed Central

    Pouch, Alison M.; Tian, Sijie; Takabe, Manabu; Wang, Hongzhi; Yuan, Jiefu; Cheung, Albert T.; Jackson, Benjamin M.; Gorman, Joseph H.; Gorman, Robert C.; Yushkevich, Paul A.

    2015-01-01

    3D echocardiographic (3DE) imaging is a useful tool for assessing the complex geometry of the aortic valve apparatus. Segmentation of this structure in 3DE images is a challenging task that benefits from shape-guided deformable modeling methods, which enable inter-subject statistical shape comparison. Prior work demonstrates the efficacy of using continuous medial representation (cm-rep) as a shape descriptor for valve leaflets. However, its application to the entire aortic valve apparatus is limited since the structure has a branching medial geometry that cannot be explicitly parameterized in the original cm-rep framework. In this work, we show that the aortic valve apparatus can be accurately segmented using a new branching medial modeling paradigm. The segmentation method achieves a mean boundary displacement of 0.6 ± 0.1 mm (approximately one voxel) relative to manual segmentation on 11 3DE images of normal open aortic valves. This study demonstrates a promising approach for quantitative 3DE analysis of aortic valve morphology. PMID:26247062

  19. New breast cancer prognostic factors identified by computer-aided image analysis of HE stained histopathology images

    PubMed Central

    Chen, Jia-Mei; Qu, Ai-Ping; Wang, Lin-Wei; Yuan, Jing-Ping; Yang, Fang; Xiang, Qing-Ming; Maskey, Ninu; Yang, Gui-Fang; Liu, Juan; Li, Yan

    2015-01-01

    Computer-aided image analysis (CAI) can help objectively quantify morphologic features of hematoxylin-eosin (HE) histopathology images and provide potentially useful prognostic information on breast cancer. We performed a CAI workflow on 1,150 HE images from 230 patients with invasive ductal carcinoma (IDC) of the breast. We used a pixel-wise support vector machine classifier for tumor nests (TNs)-stroma segmentation, and a marker-controlled watershed algorithm for nuclei segmentation. 730 morphologic parameters were extracted after segmentation, and 12 parameters identified by Kaplan-Meier analysis were significantly associated with 8-year disease free survival (P < 0.05 for all). Moreover, four image features including TNs feature (HR 1.327, 95%CI [1.001 - 1.759], P = 0.049), TNs cell nuclei feature (HR 0.729, 95%CI [0.537 - 0.989], P = 0.042), TNs cell density (HR 1.625, 95%CI [1.177 - 2.244], P = 0.003), and stromal cell structure feature (HR 1.596, 95%CI [1.142 - 2.229], P = 0.006) were identified by multivariate Cox proportional hazards model to be new independent prognostic factors. The results indicated that CAI can assist the pathologist in extracting prognostic information from HE histopathology images for IDC. The TNs feature, TNs cell nuclei feature, TNs cell density, and stromal cell structure feature could be new prognostic factors. PMID:26022540

  20. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    PubMed

    Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L

    2010-07-01

    The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease, to test this hypothesis. Five transcutaneous and five intraoperative US liver images were acquired in each animal and a liver biopsy was taken. In liver tissue samples, triacylglycerol (TAG) was measured by biochemical analysis and hepatic diseases other than hepatic lipidosis were excluded by histopathologic examination. Ultrasonic tissue characterization (UTC) parameters (mean echo level, standard deviation (SD) of echo level, signal-to-noise ratio (SNR), residual attenuation coefficient (ResAtt), and axial and lateral speckle size) were derived using a computer-aided US (CAUS) protocol and software package. First, the liver tissue was interactively segmented by two observers. With increasing fat content, fewer hepatic vessels were visible in the ultrasound images and, therefore, a smaller proportion of the liver needed to be excluded from these images. Automatic-segmentation algorithms were implemented and it was investigated whether better results could be achieved than with the subjective and time-consuming interactive-segmentation procedure. The automatic-segmentation algorithms were based on both fixed and adaptive thresholding techniques in combination with a 'speckle'-shaped moving-window exclusion technique. All data were analyzed with and without postprocessing as contained in CAUS and with different automated-segmentation techniques. This enabled us to study the effect of the applied postprocessing steps on single and multiple linear regressions of the various UTC parameters with TAG. Improved correlations for all US parameters were found by using automatic-segmentation techniques. Stepwise multiple linear-regression formulas were derived and used to predict TAG level in the liver. Receiver-operating-characteristic (ROC) analysis was applied to assess the performance and area under the curve (AUC) of predicting TAG and to compare the sensitivity and specificity of the methods. Best speckle-size estimates and overall performance (R2 = 0.71, AUC = 0.94) were achieved by using an SNR-based adaptive automatic-segmentation method (TAG threshold: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and profitable.

  1. Automatic selection of localized region-based active contour models using image content analysis applied to brain tumor segmentation.

    PubMed

    Ilunga-Mbuyamba, Elisee; Avina-Cervantes, Juan Gabriel; Cepeda-Negrete, Jonathan; Ibarra-Manzano, Mario Alberto; Chalopin, Claire

    2017-12-01

    Brain tumor segmentation is a routine process in a clinical setting and provides useful information for diagnosis and treatment planning. Manual segmentation, performed by physicians or radiologists, is a time-consuming task due to the large quantity of medical data generated presently. Hence, automatic segmentation methods are needed, and several approaches have been introduced in recent years, including the Localized Region-based Active Contour Model (LRACM). There are many popular LRACM, but each of them has strengths and weaknesses. In this paper, the automatic selection of an LRACM based on image content and its application to brain tumor segmentation is presented. Thereby, a framework to select one of three LRACM, i.e., Local Gaussian Distribution Fitting (LGDF), localized Chan-Vese (C-V) and the Localized Active Contour Model with Background Intensity Compensation (LACM-BIC), is proposed. Twelve visual features are extracted to properly select the method that may process a given input image. The system is based on a supervised approach. Applied specifically to Magnetic Resonance Imaging (MRI) data, the experiments showed that the proposed system is able to correctly select the suitable LRACM to handle a specific image. Consequently, the selection framework achieves better accuracy performance than the three LRACM used separately. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Magnetic Resonance–Based Automatic Air Segmentation for Generation of Synthetic Computed Tomography Scans in the Head Region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Weili; Kim, Joshua P.; Kadbi, Mo

    2015-11-01

    Purpose: To incorporate a novel imaging sequence for robust air and tissue segmentation using ultrashort echo time (UTE) phase images and to implement an innovative synthetic CT (synCT) solution as a first step toward MR-only radiation therapy treatment planning for brain cancer. Methods and Materials: Ten brain cancer patients were scanned with a UTE/Dixon sequence and other clinical sequences on a 1.0 T open magnet with simulation capabilities. Bone-enhanced images were generated from a weighted combination of water/fat maps derived from Dixon images and inverted UTE images. Automated air segmentation was performed using unwrapped UTE phase maps. Segmentation accuracy was assessed by calculating segmentation errors (true-positive rate, false-positive rate, and Dice similarity indices) using CT simulation (CT-SIM) as ground truth. The synCTs were generated using a voxel-based, weighted summation method incorporating T2, fluid attenuated inversion recovery (FLAIR), UTE1, and bone-enhanced images. Mean absolute error (MAE) characterized Hounsfield unit (HU) differences between synCT and CT-SIM. A dosimetry study was conducted, and differences were quantified using γ-analysis and dose-volume histogram analysis. Results: On average, the true-positive rate and false-positive rate for the CT and MR-derived air masks were 80.8% ± 5.5% and 25.7% ± 6.9%, respectively. Dice similarity index values were 0.78 ± 0.04 (range, 0.70-0.83). Full field of view MAE between synCT and CT-SIM was 147.5 ± 8.3 HU (range, 138.3-166.2 HU), with the largest errors occurring at bone-air interfaces (MAE 422.5 ± 33.4 HU for bone and 294.53 ± 90.56 HU for air). Gamma analysis revealed pass rates of 99.4% ± 0.04%, with acceptable treatment plan quality for the cohort. Conclusions: A hybrid MRI phase/magnitude UTE image processing technique was introduced that significantly improved bone and air contrast in MRI. Segmented air masks and bone-enhanced images were integrated into our synCT pipeline for brain, and results agreed well with clinical CTs, thereby supporting MR-only radiation therapy treatment planning in the brain.

  3. Magnetic Resonance-Based Automatic Air Segmentation for Generation of Synthetic Computed Tomography Scans in the Head Region.

    PubMed

    Zheng, Weili; Kim, Joshua P; Kadbi, Mo; Movsas, Benjamin; Chetty, Indrin J; Glide-Hurst, Carri K

    2015-11-01

    To incorporate a novel imaging sequence for robust air and tissue segmentation using ultrashort echo time (UTE) phase images and to implement an innovative synthetic CT (synCT) solution as a first step toward MR-only radiation therapy treatment planning for brain cancer. Ten brain cancer patients were scanned with a UTE/Dixon sequence and other clinical sequences on a 1.0 T open magnet with simulation capabilities. Bone-enhanced images were generated from a weighted combination of water/fat maps derived from Dixon images and inverted UTE images. Automated air segmentation was performed using unwrapped UTE phase maps. Segmentation accuracy was assessed by calculating segmentation errors (true-positive rate, false-positive rate, and Dice similarity indices) using CT simulation (CT-SIM) as ground truth. The synCTs were generated using a voxel-based, weighted summation method incorporating T2, fluid attenuated inversion recovery (FLAIR), UTE1, and bone-enhanced images. Mean absolute error (MAE) characterized Hounsfield unit (HU) differences between synCT and CT-SIM. A dosimetry study was conducted, and differences were quantified using γ-analysis and dose-volume histogram analysis. On average, the true-positive rate and false-positive rate for the CT and MR-derived air masks were 80.8% ± 5.5% and 25.7% ± 6.9%, respectively. Dice similarity index values were 0.78 ± 0.04 (range, 0.70-0.83). Full field of view MAE between synCT and CT-SIM was 147.5 ± 8.3 HU (range, 138.3-166.2 HU), with the largest errors occurring at bone-air interfaces (MAE 422.5 ± 33.4 HU for bone and 294.53 ± 90.56 HU for air). Gamma analysis revealed pass rates of 99.4% ± 0.04%, with acceptable treatment plan quality for the cohort. A hybrid MRI phase/magnitude UTE image processing technique was introduced that significantly improved bone and air contrast in MRI. Segmented air masks and bone-enhanced images were integrated into our synCT pipeline for brain, and results agreed well with clinical CTs, thereby supporting MR-only radiation therapy treatment planning in the brain. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Atlas-Guided Segmentation of Vervet Monkey Brain MRI

    PubMed Central

    Fedorov, Andriy; Li, Xiaoxing; Pohl, Kilian M; Bouix, Sylvain; Styner, Martin; Addicott, Merideth; Wyatt, Chris; Daunais, James B; Wells, William M; Kikinis, Ron

    2011-01-01

    The vervet monkey is an important nonhuman primate model that allows the study of isolated environmental factors in a controlled environment. Analysis of monkey MRI often suffers from lower quality images compared with human MRI because clinical equipment is typically used to image the smaller monkey brain and higher spatial resolution is required. This, together with the anatomical differences of the monkey brains, complicates the use of neuroimage analysis pipelines tuned for human MRI analysis. In this paper we developed an open source image analysis framework based on the tools available within the 3D Slicer software to support a biological study that investigates the effect of chronic ethanol exposure on brain morphometry in a longitudinally followed population of male vervets. We first developed a computerized atlas of vervet monkey brain MRI, which was used to encode the typical appearance of the individual brain structures in MRI and their spatial distribution. The atlas was then used as a spatial prior during automatic segmentation to process two longitudinal scans per subject. Our evaluation confirms the consistency and reliability of the automatic segmentation. The comparison of atlas construction strategies reveals that the use of a population-specific atlas leads to improved accuracy of the segmentation for subcortical brain structures. The contribution of this work is twofold. First, we describe an image processing workflow specifically tuned towards the analysis of vervet MRI that consists solely of the open source software tools. Second, we develop a digital atlas of vervet monkey brain MRIs to enable similar studies that rely on the vervet model. PMID:22253661

  5. Integrated segmentation of cellular structures

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo

    2011-03-01

    Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding based binarization process and seed-detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection is used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
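    The nuclei-channel seed detection described above (top-hat filtering followed by Laplacian-of-Gaussian blob detection) can be approximated with scikit-image primitives, as in the sketch below. The adaptive illumination correction, distance-map-based scale selection and later refinement stages of the deployed algorithm are not reproduced, and the radius-dependent parameters are assumptions.

```python
# Sketch of nuclei seed detection: white top-hat background suppression
# followed by Laplacian-of-Gaussian blob detection, using scikit-image.
import numpy as np
from skimage import morphology, feature

def detect_nuclei_seeds(nuclei_channel, expected_radius=10):
    img = nuclei_channel.astype(float)
    img = morphology.white_tophat(img, morphology.disk(3 * expected_radius))
    img /= img.max() + 1e-12                        # normalise so the LoG threshold is comparable
    blobs = feature.blob_log(img, min_sigma=expected_radius / 2,
                             max_sigma=2 * expected_radius, threshold=0.05)
    return blobs[:, :2].astype(int)                 # (row, col) seed coordinates
```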

  6. 3D segmentation of lung CT data with graph-cuts: analysis of parameter sensitivities

    NASA Astrophysics Data System (ADS)

    Cha, Jung won; Dunlap, Neal; Wang, Brian; Amini, Amir

    2016-03-01

    Lung boundary image segmentation is important for many tasks, including, for example, the development of radiation treatment plans for subjects with thoracic malignancies. In this paper, we describe a method and parameter settings for accurate 3D lung boundary segmentation based on graph-cuts from X-ray CT data. Even though several researchers have previously used graph-cuts for image segmentation, to date no systematic studies have been performed regarding the range of parameters that give accurate results. The energy function in the graph-cuts algorithm requires three suitable parameter settings: K, a large constant for assigning seed points; c, the similarity coefficient for n-links; and λ, the terminal coefficient for t-links. We analyzed the parameter sensitivity with four lung data sets from subjects with lung cancer using error metrics. Large values of K created artifacts on segmented images, and a much larger value of c relative to λ upset the balance between the boundary term and the data term in the energy function, leading to unacceptable segmentation results. For a range of parameter settings, we performed 3D image segmentation, and in each case compared the results with the expert-delineated lung boundaries. We used simple 6-neighborhood systems for n-links in 3D. The 3D image segmentation took 10 minutes for a 512x512x118 ~ 512x512x190 lung CT image volume. Our results indicate that the graph-cuts algorithm was more sensitive to the K and λ parameter settings than to the c parameter and, furthermore, that amongst the range of parameters tested, K=5 and λ=0.5 yielded good results.
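    A binary graph-cut with the three parameters named above (K for seed terms, c for the n-link weight, λ for the t-link data weight) can be sketched with the PyMaxflow library. The uniform smoothness weight and the Gaussian intensity models (with assumed lung and soft-tissue means in HU) are illustrative simplifications chosen for the sketch; they are not the energy used in the cited work.

```python
# Sketch of a binary graph-cut segmentation with seed constant K, n-link weight
# c and data weight lam, using the PyMaxflow library. Intensity models and
# default values are assumptions for illustration only.
import numpy as np
import maxflow  # PyMaxflow

def graph_cut_lung(volume, fg_seeds, bg_seeds, K=1e3, c=1.0, lam=0.5,
                   mu_fg=-800.0, mu_bg=0.0, sigma=150.0):
    """volume: 3D CT array in HU; fg_seeds/bg_seeds: boolean seed masks."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(volume.shape)
    structure = np.zeros((3, 3, 3))
    structure[0, 1, 1] = structure[2, 1, 1] = 1
    structure[1, 0, 1] = structure[1, 2, 1] = 1
    structure[1, 1, 0] = structure[1, 1, 2] = 1                  # 6-neighbourhood n-links
    g.add_grid_edges(nodes, weights=c, structure=structure, symmetric=True)
    cost_fg = lam * (volume - mu_fg) ** 2 / (2 * sigma ** 2)     # cost of labelling foreground
    cost_bg = lam * (volume - mu_bg) ** 2 / (2 * sigma ** 2)     # cost of labelling background
    cost_bg[fg_seeds] = K                                        # seeds: make the wrong label prohibitive
    cost_fg[bg_seeds] = K
    g.add_grid_tedges(nodes, cost_bg, cost_fg)                   # source caps, sink caps
    g.maxflow()
    return np.logical_not(g.get_grid_segments(nodes))            # True = foreground (source side)
```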

  7. Statistical Validation of Image Segmentation Quality Based on a Spatial Overlap Index

    PubMed Central

    Zou, Kelly H.; Warfield, Simon K.; Bharatha, Aditya; Tempany, Clare M.C.; Kaus, Michael R.; Haker, Steven J.; Wells, William M.; Jolesz, Ferenc A.; Kikinis, Ron

    2005-01-01

    Rationale and Objectives: To examine a statistical validation method based on the spatial overlap between two sets of segmentations of the same anatomy. Materials and Methods: The Dice similarity coefficient (DSC) was used as a statistical validation metric to evaluate the performance of both the reproducibility of manual segmentations and the spatial overlap accuracy of automated probabilistic fractional segmentation of MR images, illustrated on two clinical examples. Example 1: 10 consecutive cases of prostate brachytherapy patients underwent both preoperative 1.5T and intraoperative 0.5T MR imaging. For each case, 5 repeated manual segmentations of the prostate peripheral zone were performed separately on preoperative and on intraoperative images. Example 2: A semi-automated probabilistic fractional segmentation algorithm was applied to MR imaging of 9 cases with 3 types of brain tumors. DSC values were computed, and their logit-transformed values were compared in the mean using analysis of variance (ANOVA). Results: Example 1: The mean DSCs of 0.883 (range, 0.876–0.893) with 1.5T preoperative MRI and 0.838 (range, 0.819–0.852) with 0.5T intraoperative MRI (P < .001) were within and at the margin of the range of good reproducibility, respectively. Example 2: Wide ranges of DSC were observed in brain tumor segmentations: meningiomas (0.519–0.893), astrocytomas (0.487–0.972), and other mixed gliomas (0.490–0.899). Conclusion: The DSC value is a simple and useful summary measure of spatial overlap, which can be applied to studies of reproducibility and accuracy in image segmentation. We observed generally satisfactory but variable validation results in two clinical applications. This metric may be adapted for similar validation tasks. PMID:14974593
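    For binary masks the DSC has the closed form DSC(A, B) = 2|A ∩ B| / (|A| + |B|), which reduces to a few lines of NumPy:

```python
# Dice similarity coefficient for two binary masks: 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice_similarity(mask_a, mask_b):
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```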

  8. Development and Implementation of a Corriedale Ovine Brain Atlas for Use in Atlas-Based Segmentation.

    PubMed

    Liyanage, Kishan Andre; Steward, Christopher; Moffat, Bradford Armstrong; Opie, Nicholas Lachlan; Rind, Gil Simon; John, Sam Emmanuel; Ronayne, Stephen; May, Clive Newton; O'Brien, Terence John; Milne, Marjorie Eileen; Oxley, Thomas James

    2016-01-01

    Segmentation is the process of partitioning an image into subdivisions and can be applied to medical images to isolate anatomical or pathological areas for further analysis. This process can be done manually or automated by the use of image processing computer packages. Atlas-based segmentation automates this process by the use of a pre-labelled template and a registration algorithm. We developed an ovine brain atlas that can be used as a model for neurological conditions such as Parkinson's disease and focal epilepsy. 17 female Corriedale ovine brains were imaged in-vivo in a 1.5T (low-resolution) MRI scanner. 13 of the low-resolution images were combined using a template construction algorithm to form a low-resolution template. The template was labelled to form an atlas and tested by comparing manual with atlas-based segmentations against the remaining four low-resolution images. The comparisons were in the form of similarity metrics used in previous segmentation research. Dice Similarity Coefficients were utilised to determine the degree of overlap between eight independent, manual and atlas-based segmentations, with values ranging from 0 (no overlap) to 1 (complete overlap). For 7 of these 8 segmented areas, we achieved a Dice Similarity Coefficient of 0.5-0.8. The amygdala was difficult to segment due to its variable location and similar intensity to surrounding tissues resulting in Dice Coefficients of 0.0-0.2. We developed a low resolution ovine brain atlas with eight clinically relevant areas labelled. This brain atlas performed comparably to prior human atlases described in the literature and to intra-observer error providing an atlas that can be used to guide further research using ovine brains as a model and is hosted online for public access.

  9. Automated brain tumor segmentation in magnetic resonance imaging based on sliding-window technique and symmetry analysis.

    PubMed

    Lian, Yanyun; Song, Zhijian

    2014-01-01

    Brain tumor segmentation from magnetic resonance imaging (MRI) is an important step toward surgical planning, treatment planning, and monitoring of therapy. However, manual tumor segmentation, as commonly used in the clinic, is time-consuming and challenging, and none of the existing automated methods are sufficiently robust, reliable and efficient for clinical application. An accurate and automated method has been developed for brain tumor segmentation that provides reproducible and objective results close to manual segmentation. Based on the symmetry of the human brain, we employed a sliding-window technique and the correlation coefficient to locate the tumor. First, the image to be segmented was normalized, rotated, denoised, and bisected. Subsequently, a vertical and then a horizontal sliding-window pass was applied: two windows, one in the left and one in the right half of the brain image, were moved simultaneously pixel by pixel while the correlation coefficient between them was computed. The pair of windows with the minimal correlation coefficient was retained; the window with the larger average gray value indicates the location of the tumor, and the pixel with the largest gray value within it is taken as the tumor locating point. Finally, the segmentation threshold was set to the average gray value of the pixels in a square of 10 pixels side length centered at the locating point, and threshold segmentation and morphological operations were used to obtain the final tumor region. The method was evaluated on 3D FSPGR brain MR images of 10 patients. The average ratio of correct localization was 93.4% for 575 slices containing tumor, the average Dice similarity coefficient was 0.77 per scan, and the average time spent on one scan was 40 seconds. A fully automated, simple and efficient segmentation method for brain tumors is proposed and is promising for future clinical use. The correlation coefficient is a new and effective feature for tumor localization.
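    The symmetry-based localization step can be illustrated by sliding a window over the left half of a midline-aligned slice and over the mirrored right half, and recording where the correlation coefficient between the two windows is lowest. The sketch below illustrates that idea only; the window size, step and midline-alignment assumption are not taken from the paper.

```python
# Sketch of symmetry-based tumour localisation: find the window position where
# the left half and the mirrored right half of a slice are least correlated.
import numpy as np

def least_symmetric_window(slice_img, win=32, step=8):
    """slice_img: 2D array already rotated so the brain midline is the central column."""
    h, w = slice_img.shape
    left = slice_img[:, : w // 2]
    right = np.fliplr(slice_img[:, w - w // 2 :])                # mirrored right half
    best, best_pos = 1.0, (0, 0)
    for r in range(0, h - win + 1, step):
        for c in range(0, left.shape[1] - win + 1, step):
            a = left[r:r + win, c:c + win].ravel()
            b = right[r:r + win, c:c + win].ravel()
            if a.std() == 0 or b.std() == 0:
                continue
            corr = np.corrcoef(a, b)[0, 1]
            if corr < best:
                best, best_pos = corr, (r, c)
    return best_pos, best        # window origin (left-half coordinates) and its correlation
```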

  10. Use of 2D U-Net Convolutional Neural Networks for Automated Cartilage and Meniscus Segmentation of Knee MR Imaging Data to Determine Relaxometry and Morphometry.

    PubMed

    Norman, Berk; Pedoia, Valentina; Majumdar, Sharmila

    2018-03-27

    Purpose: To analyze how automatic segmentation translates, in accuracy and precision, to morphology and relaxometry compared with manual segmentation, and how it increases the speed and accuracy of the workflow that uses quantitative magnetic resonance (MR) imaging to study degenerative knee diseases such as osteoarthritis (OA). Materials and Methods: This retrospective study involved the analysis of 638 MR imaging volumes from two data cohorts acquired at 3.0 T: (a) spoiled gradient-recalled acquisition in the steady state T1ρ-weighted images and (b) three-dimensional (3D) double-echo steady-state (DESS) images. A deep learning model based on the U-Net convolutional network architecture was developed to perform automatic segmentation. Cartilage and meniscus compartments were manually segmented by skilled technicians and radiologists for comparison. Performance of the automatic segmentation was evaluated on Dice coefficient overlap with the manual segmentation, as well as on the automatic segmentations' ability to quantify relaxometry and morphology in a longitudinally repeatable way. Results: The models produced strong Dice coefficients, particularly for 3D-DESS images, ranging from 0.770 to 0.878 in the cartilage compartments and reaching 0.809 and 0.753 for the lateral and medial meniscus, respectively. The models averaged 5 seconds to generate the automatic segmentations. Average correlations between manual and automatic quantification were 0.8233 and 0.8603 for T1ρ and T2 values, respectively, and 0.9349 and 0.9384 for volume and thickness, respectively. Longitudinal precision of the automatic method was comparable with that of the manual one. Conclusion: U-Net demonstrates efficacy and precision in quickly generating accurate segmentations that can be used to extract relaxation times and morphologic characterization, values that can be used in the monitoring and diagnosis of OA. © RSNA, 2018. Online supplemental material is available for this article.

  11. Uterus segmentation in dynamic MRI using LBP texture descriptors

    NASA Astrophysics Data System (ADS)

    Namias, R.; Bellemare, M.-E.; Rahim, M.; Pirró, N.

    2014-03-01

    Pelvic floor disorders cover pathologies whose physiopathology is not well understood; however, cases become more prevalent with an ageing population. Within the context of a project aiming at modeling the dynamics of pelvic organs, we have developed an efficient segmentation process. It aims at relieving the radiologist of a tedious image-by-image analysis. From a first contour delineating the uterus-vagina set, the organ border is tracked along a dynamic MRI sequence. The process combines movement prediction, local intensity and texture analysis, and active contour geometry control. Movement prediction allows contour initialization for the next image in the sequence. Intensity analysis provides image-based local contour detection enhanced by local binary pattern (LBP) texture descriptors. Geometry control prohibits self-intersections and smoothes the contour. Results show the efficiency of the method with images produced in clinical routine.

  12. An automated approach to the segmentation of HEp-2 cells for the indirect immunofluorescence ANA test.

    PubMed

    Tonti, Simone; Di Cataldo, Santa; Bottino, Andrea; Ficarra, Elisa

    2015-03-01

    The automation of the analysis of Indirect Immunofluorescence (IIF) images is of paramount importance for the diagnosis of autoimmune diseases. This paper proposes a solution to one of the most challenging steps of this process, the segmentation of HEp-2 cells, through an adaptive marker-controlled watershed approach. Our algorithm automatically conforms the marker selection pipeline to the peculiar characteristics of the input image; hence, it is able to cope with different fluorescence intensities and staining patterns without any a priori knowledge. Furthermore, it shows a reduced sensitivity to over-segmentation errors and uneven illumination, which are typical issues of IIF imaging. Copyright © 2015 Elsevier Ltd. All rights reserved.
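    A generic marker-controlled watershed, the family of methods the approach above builds on, can be written with scikit-image and SciPy as below. Distance-transform peaks stand in as markers; the adaptive, image-dependent marker selection that is the paper's contribution is not reproduced.

```python
# Generic marker-controlled watershed on a fluorescence image using scikit-image
# and SciPy; distance-transform peaks serve as markers (a simplification).
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, feature, segmentation

def watershed_cells(img, min_distance=15):
    binary = img > filters.threshold_otsu(img)
    distance = ndi.distance_transform_edt(binary)
    peaks = feature.peak_local_max(distance, min_distance=min_distance, labels=binary)
    markers = np.zeros(img.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)       # one marker per peak
    return segmentation.watershed(-distance, markers, mask=binary)  # label image, one label per cell
```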

  13. A medical software system for volumetric analysis of cerebral pathologies in magnetic resonance imaging (MRI) data.

    PubMed

    Egger, Jan; Kappus, Christoph; Freisleben, Bernd; Nimsky, Christopher

    2012-08-01

    In this contribution, a medical software system for volumetric analysis of different cerebral pathologies in magnetic resonance imaging (MRI) data is presented. The software system is based on a semi-automatic segmentation algorithm and helps to overcome the time-consuming process of volume determination during monitoring of a patient. After imaging, the parameter settings (including a seed point) are set up in the system and an automatic segmentation is performed by a novel graph-based approach. Manually reviewing the result leads to reseeding, adding seed points or an automatic surface mesh generation. The mesh is saved for monitoring the patient and for comparisons with follow-up scans. Based on the mesh, the system performs a voxelization and volume calculation, which leads to diagnosis and therefore further treatment decisions. The overall system has been tested with different cerebral pathologies (glioblastoma multiforme, pituitary adenomas and cerebral aneurysms) and evaluated against manual expert segmentations using the Dice Similarity Coefficient (DSC). Additionally, intra-physician segmentations have been performed to provide a quality measure for the presented system.

  14. Subcellular object quantification with Squassh3C and SquasshAnalyst.

    PubMed

    Rizk, Aurélien; Mansouri, Maysam; Ballmer-Hofer, Kurt; Berger, Philipp

    2015-11-01

    Quantitative image analysis plays an important role in contemporary biomedical research. Squassh is a method for automatic detection, segmentation, and quantification of subcellular structures and analysis of their colocalization. Here we present the applications Squassh3C and SquasshAnalyst. Squassh3C extends the functionality of Squassh to three fluorescence channels and live-cell movie analysis. SquasshAnalyst is an interactive web interface for the analysis of Squassh3C object data. It provides segmentation image overview and data exploration, figure generation, object and image filtering, and a statistical significance test in an easy-to-use interface. The overall procedure combines the Squassh3C plug-in for the free biological image processing program ImageJ and a web application working in conjunction with the free statistical environment R, and it is compatible with Linux, MacOS X, or Microsoft Windows. Squassh3C and SquasshAnalyst are available for download at www.psi.ch/lbr/SquasshAnalystEN/SquasshAnalyst.zip.

  15. Feature space analysis of MRI

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Windham, Joe P.; Peck, Donald J.

    1997-04-01

    This paper presents the development and performance evaluation of an MRI feature space method. The method is useful for: identification of tissue types; segmentation of tissues; and quantitative measurements on tissues, to obtain information that can be used in decision making (diagnosis, treatment planning, and evaluation of treatment). The steps of the work accomplished are as follows: (1) Four T2-weighted and two T1-weighted images (before and after injection of Gadolinium) were acquired for ten tumor patients. (2) Images were analyzed by two image analysts according to the following algorithm. The intracranial brain tissues were segmented from the scalp and background. The additive noise was suppressed using a multi-dimensional non-linear edge-preserving filter which preserves partial volume information on average. Image nonuniformities were corrected using a modified lowpass filtering approach. The resulting images were used to generate and visualize an optimal feature space. Cluster centers were identified on the feature space. Then images were segmented into normal tissues and different zones of the tumor. (3) Biopsy samples were extracted from each patient and were subsequently analyzed by the pathology laboratory. (4) Image analysis results were compared to each other and to the biopsy results. Pre- and post-surgery feature spaces were also compared. The proposed algorithm made it possible to visualize the MRI feature space and to segment the image. In all cases, the operators were able to find clusters for normal and abnormal tissues. Also, clusters for different zones of the tumor were found. Based on the clusters marked for each zone, the method successfully segmented the image into normal tissues (white matter, gray matter, and CSF) and different zones of the lesion (tumor, cyst, edema, radiation necrosis, necrotic core, and infiltrated tumor). The results agreed with those obtained from the biopsy samples. Comparison of pre- to post-surgery and radiation feature spaces confirmed that the tumor was not present in the second study but that radiation necrosis had developed as a result of radiation.

  16. Digital image processing and analysis for activated sludge wastewater treatment.

    PubMed

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    Activated sludge systems are generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). For these measurements, tests are conducted in the laboratory, which take many hours to produce the final result. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge wastewater treatment. In the latter part, additional procedures such as z-stacking and image stitching, which have not previously been used in the context of activated sludge, are introduced for wastewater image preprocessing. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and their correlation with regard to monitoring and prediction of activated sludge are discussed. Hence, it is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.

  17. [Object-oriented aquatic vegetation extracting approach based on visible vegetation indices].

    PubMed

    Jing, Ran; Deng, Lei; Zhao, Wen Ji; Gong, Zhao Ning

    2016-05-01

    Using the estimation of scale parameters (ESP) image segmentation tool to determine the ideal image segmentation scale, the optimal segmented image was created by the multi-scale segmentation method. Based on the visible vegetation indices derived from mini-UAV imaging data, we chose a set of optimal vegetation indices from a series of visible vegetation indices and built a decision tree rule. A membership function was used to automatically classify the study area, and an aquatic vegetation map was generated. The results showed that the overall accuracy of image classification using supervised classification was 53.7%, whereas the overall accuracy of object-based image analysis (OBIA) was 91.7%. Compared with the pixel-based supervised classification method, the OBIA method significantly improved the image classification result and further increased the accuracy of extracting the aquatic vegetation. The Kappa value of the supervised classification was 0.4, and the Kappa value based on OBIA was 0.9. The experimental results demonstrated that extracting aquatic vegetation using visible vegetation indices derived from mini-UAV data and the OBIA method developed in this study was feasible and could be applied in other physically similar areas.

  18. Attenuation correction with region growing method used in the positron emission mammography imaging system

    NASA Astrophysics Data System (ADS)

    Gu, Xiao-Yue; Li, Lin; Yin, Peng-Fei; Yun, Ming-Kai; Chai, Pei; Huang, Xian-Chao; Sun, Xiao-Li; Wei, Long

    2015-10-01

    The Positron Emission Mammography imaging system (PEMi) provides a novel nuclear diagnosis method dedicated for breast imaging. With a better resolution than whole body PET, PEMi can detect millimeter-sized breast tumors. To address the requirement of semi-quantitative analysis with a radiotracer concentration map of the breast, a new attenuation correction method based on a three-dimensional seeded region growing image segmentation (3DSRG-AC) method has been developed. The method gives a 3D connected region as the segmentation result instead of image slices. The continuity property of the segmentation result makes this new method free of activity variation of breast tissues. The threshold value chosen is the key process for the segmentation method. The first valley in the grey level histogram of the reconstruction image is set as the lower threshold, which works well in clinical application. Results show that attenuation correction for PEMi improves the image quality and the quantitative accuracy of radioactivity distribution determination. Attenuation correction also improves the probability of detecting small and early breast tumors. Supported by Knowledge Innovation Project of The Chinese Academy of Sciences (KJCX2-EW-N06)
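    The threshold selection and 3D seeded region growing described above can be sketched as follows, with skimage.segmentation.flood performing the connected growing. The simple first-valley search on a lightly smoothed histogram is a stand-in for the paper's threshold rule, and the seed coordinate is assumed to be supplied by the user and to lie inside the breast.

```python
# Sketch of the threshold choice and 3D seeded region growing described above.
# The first-valley search on a smoothed grey-level histogram is a simple
# stand-in for the paper's rule, not its exact implementation.
import numpy as np
from scipy.ndimage import uniform_filter1d
from skimage import segmentation

def first_valley_threshold(volume, bins=256):
    hist, edges = np.histogram(volume.ravel(), bins=bins)
    hist = uniform_filter1d(hist.astype(float), size=5)          # light smoothing
    for i in range(1, bins - 1):
        if hist[i] < hist[i - 1] and hist[i] <= hist[i + 1]:     # first local minimum
            return edges[i]
    return edges[bins // 2]                                      # fallback

def grow_breast_region(volume, seed_voxel):
    """volume: 3D reconstruction; seed_voxel: (z, y, x) inside the breast."""
    mask = volume >= first_valley_threshold(volume)              # keep voxels above the valley
    return segmentation.flood(mask, tuple(seed_voxel))           # 3D connected region from seed
```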

  19. Individual bone structure segmentation and labeling from low-dose chest CT

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Xie, Yiting; Reeves, Anthony P.

    2017-03-01

    The segmentation and labeling of the individual bones serve as the first step to the fully automated measurement of skeletal characteristics and the detection of abnormalities such as skeletal deformities, osteoporosis, and vertebral fractures. Moreover, the identified landmarks on the segmented bone structures can potentially provide relatively reliable location reference to other non-rigid human organs, such as the breast, heart and lung, thereby facilitating the corresponding image analysis and registration. A fully automated anatomy-directed framework for the segmentation and labeling of the individual bone structures from low-dose chest CT is presented in this paper. The proposed system consists of four main stages: First, both clavicles are segmented and labeled by fitting a piecewise cylindrical envelope. Second, the sternum is segmented under the spatial constraints provided by the segmented clavicles. Third, all ribs are segmented and labeled based on 3D region growing within the volume of interest defined with reference to the spinal canal centerline and lungs. Fourth, the individual thoracic vertebrae are segmented and labeled by image intensity based analysis in the spatial region constrained by the previously segmented bone structures. The system performance was validated with 1270 low-dose chest CT scans through visual evaluation. Satisfactory performance was obtained in 97.1% of cases for the clavicle segmentation and labeling, 97.3% for the sternum segmentation, 97.2% for the rib segmentation, 94.2% for the rib labeling, 92.4% for the vertebra segmentation and 89.9% for the vertebra labeling.

  20. Saliency-Guided Change Detection of Remotely Sensed Images Using Random Forest

    NASA Astrophysics Data System (ADS)

    Feng, W.; Sui, H.; Chen, X.

    2018-04-01

    Studies based on object-based image analysis (OBIA), which represents a paradigm shift in change detection (CD), have achieved remarkable progress in the last decade, with the aim of developing more intelligent interpretation and analysis methods. The predictive performance and stability of random forest (RF), a relatively recent machine learning algorithm, are better than those of many single predictors and ensemble forecasting methods. In this paper, we present a novel CD approach for high-resolution remote sensing images, which incorporates visual saliency and RF. First, highly homogeneous and compact image super-pixels are generated using super-pixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis (PCA). Second, saliency detection is used to guide the search of interest regions in the initial difference image obtained via the improved robust change vector analysis (RCVA) algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted, and the regions are subjected to fuzzy c-means (FCM) clustering to obtain the pixel-level pre-classification result, which can be used as a prerequisite for superpixel-based analysis. Third, on the basis of the optimal segmentation and pixel-level pre-classification results, different super-pixel change possibilities are calculated. Furthermore, the changed and unchanged super-pixels that serve as the training samples are automatically selected. The spectral features and Gabor features of each super-pixel are extracted. Finally, superpixel-based CD is implemented by applying RF based on these samples. Experimental results on Ziyuan 3 (ZY3) multi-spectral images show that the proposed method outperforms the compared methods in the accuracy of CD, and also confirm the feasibility and effectiveness of the proposed approach.
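
    A minimal sketch of the final step, assuming per-superpixel feature vectors (e.g. spectral means and Gabor responses) and automatically selected changed/unchanged training samples are already available; scikit-learn's RandomForestClassifier stands in for the RF used by the authors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_and_label_superpixels(X_train, y_train, X_all, n_trees=200):
    """Fit an RF on the automatically selected samples (y: 1 = changed, 0 = unchanged)
    and return a hard label plus a change probability for every superpixel."""
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    rf.fit(X_train, y_train)
    return rf.predict(X_all), rf.predict_proba(X_all)[:, 1]
```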

  1. SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects

    PubMed Central

    2014-01-01

    Background Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. Results The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Conclusions Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and corrections. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems. PMID:24964954

  2. SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects.

    PubMed

    Kloster, Michael; Kauer, Gerhard; Beszteri, Bánk

    2014-06-25

    Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and corrections. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems.

  3. Fully automatic segmentation of white matter hyperintensities in MR images of the elderly.

    PubMed

    Admiraal-Behloul, F; van den Heuvel, D M J; Olofsen, H; van Osch, M J P; van der Grond, J; van Buchem, M A; Reiber, J H C

    2005-11-15

    The role of quantitative image analysis in large clinical trials is continuously increasing. Several methods are available for performing white matter hyperintensity (WMH) volume quantification. They vary in the amount of human interaction involved. In this paper, we describe a fully automatic segmentation that was used to quantify WMHs in a large clinical trial on elderly subjects. Our segmentation method combines information from 3 different MR images: proton density (PD), T2-weighted and fluid-attenuated inversion recovery (FLAIR) images; it uses an established artificial intelligence technique (a fuzzy inference system) and does not require extensive computations. The reproducibility of the segmentation was evaluated in 9 patients who underwent scan-rescan with repositioning; an intraclass correlation coefficient (ICC) of 0.91 was obtained. The effect of differences in image resolution was tested in 44 patients, scanned with 6- and 3-mm slice thickness FLAIR images; we obtained an ICC value of 0.99. The accuracy of the segmentation was evaluated on 100 patients for whom manual delineation of WMHs was available; the obtained ICC was 0.98 and the similarity index was 0.75. In addition to demonstrating very high volumetric and spatial agreement with expert delineation, the software required no more than 2 min per patient (from loading the images to saving the results) on a Pentium-4 processor (512 MB RAM).

  4. Segmentation Approach Towards Phase-Contrast Microscopic Images of Activated Sludge to Monitor the Wastewater Treatment.

    PubMed

    Khan, Muhammad Burhan; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Lai, Koon Chun

    2017-12-01

    Image processing and analysis is an effective tool for monitoring and fault diagnosis of activated sludge (AS) wastewater treatment plants. AS images comprise flocs (microbial aggregates) and filamentous bacteria. In this paper, nine different approaches are proposed for image segmentation of phase-contrast microscopic (PCM) images of AS samples. The proposed strategies are assessed for their effectiveness from the perspective of the microscopic artifacts associated with PCM. The first approach is based on the idea that color-space representations other than red-green-blue may offer better contrast. The second uses an edge detection approach. The third strategy employs a clustering algorithm for the segmentation, and the fourth applies local adaptive thresholding. The fifth technique is based on texture-based segmentation and the sixth uses the watershed algorithm. The seventh adopts a split-and-merge approach. The eighth employs Kittler's thresholding. Finally, the ninth uses a top-hat and bottom-hat filtering-based technique. The approaches are assessed and analyzed critically with reference to the artifacts of PCM. Gold approximations of ground-truth images are prepared to assess the segmentations. Overall, the edge detection-based approach exhibits the best results in terms of accuracy, and the texture-based algorithm in terms of false negative ratio. The respective scenarios in which edge detection and texture-based algorithms are suitable are explained.
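
    As an illustration of two of the strategy families above, the sketch below contrasts a simple global intensity threshold (Otsu, as a baseline) with local adaptive thresholding (the paper's fourth approach) using scikit-image; the block size and the assumption that foreground is brighter than background are illustrative, not the configurations evaluated in the paper.

```python
from skimage.filters import threshold_otsu, threshold_local

def global_vs_local_masks(gray, block_size=51):
    """Return binary masks from a global Otsu threshold and from a local adaptive
    threshold computed over an odd-sized neighbourhood (gray: 2D float/int image)."""
    otsu_mask = gray > threshold_otsu(gray)
    local_mask = gray > threshold_local(gray, block_size=block_size)
    return otsu_mask, local_mask
```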

  5. Brain MR image segmentation using NAMS in pseudo-color.

    PubMed

    Li, Hua; Chen, Chuanbo; Fang, Shaohong; Zhao, Shengrong

    2017-12-01

    Image segmentation plays a crucial role in various biomedical applications. In general, the segmentation of brain Magnetic Resonance (MR) images is mainly used to represent the image with several homogeneous regions instead of pixels for surgical analysis and planning. This paper proposes a new approach for segmenting MR brain images by using pseudo-color based segmentation with the Non-symmetry and Anti-packing Model with Squares (NAMS). First, the NAMS model is presented. The model can represent the image with sub-patterns that preserve the image content while largely reducing data redundancy. Second, the key idea is to convert the original gray-scale brain MR image into a pseudo-colored image and then segment the pseudo-colored image with the NAMS model. The pseudo-colored image enhances the contrast between different tissues in brain MR images, which improves both the precision of segmentation and direct visual distinction. Experimental results indicate that, compared with other brain MR image segmentation methods, the proposed NAMS-based pseudo-color segmentation method performs better in both segmentation precision and storage efficiency.
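
    A minimal sketch of the pseudo-colouring idea only: map a normalized grey-scale slice to RGB by assigning intensity to hue, boosting apparent contrast between tissues before segmentation. The hue mapping is an assumption for illustration; the paper's colouring scheme and the NAMS representation itself are not reproduced.

```python
import numpy as np
from skimage.color import hsv2rgb

def gray_to_pseudocolor(gray):
    """Convert a grey-scale MR slice to an RGB pseudo-colour image (blue = dark,
    red = bright) via an HSV intermediate; mapping choice is illustrative."""
    norm = (gray - gray.min()) / (np.ptp(gray) + 1e-9)
    hsv = np.stack([0.7 * (1.0 - norm),            # hue: blue (low) to red (high)
                    np.ones_like(norm),            # full saturation
                    np.ones_like(norm)], axis=-1)  # full value
    return hsv2rgb(hsv)
```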

  6. The Study of Residential Areas Extraction Based on GF-3 Texture Image Segmentation

    NASA Astrophysics Data System (ADS)

    Shao, G.; Luo, H.; Tao, X.; Ling, Z.; Huang, Y.

    2018-04-01

    The study uses standard strip-map, dual-polarization SAR images from GF-3 as the basic data. Residential-area extraction processes and methods based on GF-3 texture image segmentation are compared and analyzed. GF-3 image processing includes radiometric calibration, complex data conversion, multi-look processing, and image filtering; a suitability analysis of different filtering methods showed that the Kuan filter is effective for extracting residential areas. We then calculated and analyzed texture feature vectors using the gray level co-occurrence matrix (GLCM), whose parameters include the moving window size, step size and angle; a window size of 11 × 11, a step of 1, and an angle of 0° proved optimal for residential-area extraction. Using the Fractal Net Evolution Approach (FNEA), we segmented the GLCM texture images and extracted the residential areas by threshold setting. The extraction result was verified and assessed with a confusion matrix: the overall accuracy is 0.897 and the kappa is 0.881. We also extracted residential areas by SVM classification of the GF-3 images; its overall accuracy is 0.09 lower than that of the texture-segmentation-based method. We conclude that residential-area extraction based on multi-scale segmentation of GF-3 SAR texture images is simple and highly accurate. Because it is difficult to obtain multi-spectral remote sensing images in southern China, where the weather is cloudy and rainy throughout the year, this work has practical reference value.
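
    The snippet below sketches the GLCM texture computation with the reported optimum (11 × 11 window, step 1, angle 0°) using scikit-image's graycomatrix/graycoprops; the choice of the 'contrast' property and the 256 grey levels are illustrative assumptions, and the input is assumed to be an 8-bit (uint8) amplitude image.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_map(img_u8, win=11, prop="contrast"):
    """Per-pixel GLCM texture over a sliding 11x11 window (distance 1, angle 0)."""
    pad = win // 2
    padded = np.pad(img_u8, pad, mode="reflect")
    out = np.zeros(img_u8.shape, dtype=float)
    for i in range(img_u8.shape[0]):
        for j in range(img_u8.shape[1]):
            patch = padded[i:i + win, j:j + win]
            glcm = graycomatrix(patch, distances=[1], angles=[0],
                                levels=256, symmetric=True, normed=True)
            out[i, j] = graycoprops(glcm, prop)[0, 0]
    return out
```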

  7. Automated segmentation and analysis of normal and osteoarthritic knee menisci from magnetic resonance images--data from the Osteoarthritis Initiative.

    PubMed

    Paproki, A; Engstrom, C; Chandra, S S; Neubert, A; Fripp, J; Crozier, S

    2014-09-01

    To validate an automatic scheme for the segmentation and quantitative analysis of the medial meniscus (MM) and lateral meniscus (LM) in magnetic resonance (MR) images of the knee. We analysed sagittal water-excited double-echo steady-state MR images of the knee from a subset of the Osteoarthritis Initiative (OAI) cohort. The MM and LM were automatically segmented in the MR images based on a deformable model approach. Quantitative parameters including volume, subluxation and tibial-coverage were automatically calculated for comparison (Wilcoxon tests) between knees with variable radiographic osteoarthritis (rOA), medial and lateral joint space narrowing (mJSN, lJSN) and pain. Automatic segmentations and estimated parameters were evaluated for accuracy using manual delineations of the menisci in 88 pathological knee MR examinations at baseline and 12 months time-points. The median (95% confidence-interval (CI)) Dice similarity index (DSI) (2 ∗|Auto ∩ Manual|/(|Auto|+|Manual|)∗ 100) between manual and automated segmentations for the MM and LM volumes were 78.3% (75.0-78.7), 83.9% (82.1-83.9) at baseline and 75.3% (72.8-76.9), 83.0% (81.6-83.5) at 12 months. Pearson coefficients between automatic and manual segmentation parameters ranged from r = 0.70 to r = 0.92. MM in rOA/mJSN knees had significantly greater subluxation and smaller tibial-coverage than no-rOA/no-mJSN knees. LM in rOA knees had significantly greater volumes and tibial-coverage than no-rOA knees. Our automated method successfully segmented the menisci in normal and osteoarthritic knee MR images and detected meaningful morphological differences with respect to rOA and joint space narrowing (JSN). Our approach will facilitate analyses of the menisci in prospective MR cohorts such as the OAI for investigations into pathophysiological changes occurring in early osteoarthritis (OA) development. Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
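
    For reference, the Dice similarity index as defined in the abstract can be computed directly from two binary masks, e.g.:

```python
import numpy as np

def dice_similarity_index(auto_mask, manual_mask):
    """DSI = 2 * |Auto intersect Manual| / (|Auto| + |Manual|) * 100."""
    auto = np.asarray(auto_mask, dtype=bool)
    manual = np.asarray(manual_mask, dtype=bool)
    denom = auto.sum() + manual.sum()
    overlap = np.logical_and(auto, manual).sum()
    return 100.0 * 2.0 * overlap / denom if denom else 100.0
```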

  8. Poly-Pattern Compressive Segmentation of ASTER Data for GIS

    NASA Technical Reports Server (NTRS)

    Myers, Wayne; Warner, Eric; Tutwiler, Richard

    2007-01-01

    Pattern-based segmentation of multi-band image data, such as ASTER, produces one-byte and two-byte approximate compressions. This is a dual segmentation consisting of nested coarser and finer level pattern mappings called poly-patterns. The coarser A-level version is structured for direct incorporation into geographic information systems in the manner of a raster map. GIS renderings of this A-level approximation are called pattern pictures, which have the appearance of color-enhanced images. The two-byte version, consisting of thousands of B-level segments, provides a capability for approximate restoration of the multi-band data in selected areas or entire scenes. Poly-patterns are especially useful for purposes of change detection and landscape analysis at multiple scales. The primary author has implemented the segmentation methodology in a public domain software suite.

  9. Surface-region context in optimal multi-object graph-based segmentation: robust delineation of pulmonary tumors.

    PubMed

    Song, Qi; Chen, Mingqing; Bai, Junjie; Sonka, Milan; Wu, Xiaodong

    2011-01-01

    Multi-object segmentation with mutual interaction is a challenging task in medical image analysis. We report a novel solution to a segmentation problem, in which target objects of arbitrary shape mutually interact with terrain-like surfaces, which widely exists in the medical imaging field. The approach incorporates context information used during simultaneous segmentation of multiple objects. The object-surface interaction information is encoded by adding weighted inter-graph arcs to our graph model. A globally optimal solution is achieved by solving a single maximum flow problem in a low-order polynomial time. The performance of the method was evaluated in robust delineation of lung tumors in megavoltage cone-beam CT images in comparison with an expert-defined independent standard. The evaluation showed that our method generated highly accurate tumor segmentations. Compared with the conventional graph-cut method, our new approach provided significantly better results (p < 0.001). The Dice coefficient obtained by the conventional graph-cut approach (0.76 +/- 0.10) was improved to 0.84 +/- 0.05 when employing our new method for pulmonary tumor segmentation.

  10. A Modular Hierarchical Approach to 3D Electron Microscopy Image Segmentation

    PubMed Central

    Liu, Ting; Jones, Cory; Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2014-01-01

    The study of neural circuit reconstruction, i.e., connectomics, is a challenging problem in neuroscience. Automated and semi-automated electron microscopy (EM) image analysis can be tremendously helpful for connectomics research. In this paper, we propose a fully automatic approach for intra-section segmentation and inter-section reconstruction of neurons using EM images. A hierarchical merge tree structure is built to represent multiple region hypotheses and supervised classification techniques are used to evaluate their potentials, based on which we resolve the merge tree with consistency constraints to acquire final intra-section segmentation. Then, we use a supervised learning based linking procedure for the inter-section neuron reconstruction. Also, we develop a semi-automatic method that utilizes the intermediate outputs of our automatic algorithm and achieves intra-segmentation with minimal user intervention. The experimental results show that our automatic method can achieve close-to-human intra-segmentation accuracy and state-of-the-art inter-section reconstruction accuracy. We also show that our semi-automatic method can further improve the intra-segmentation accuracy. PMID:24491638

  11. A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images.

    PubMed

    Vázquez, David; Bernal, Jorge; Sánchez, F Javier; Fernández-Esparrach, Gloria; López, Antonio M; Romero, Adriana; Drozdzal, Michal; Courville, Aaron

    2017-01-01

    Colorectal cancer (CRC) is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset and taking advantage of advances in semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.

  12. A fast global fitting algorithm for fluorescence lifetime imaging microscopy based on image segmentation.

    PubMed

    Pelet, S; Previte, M J R; Laiho, L H; So, P T C

    2004-10-01

    Global fitting algorithms have been shown to improve effectively the accuracy and precision of the analysis of fluorescence lifetime imaging microscopy data. Global analysis performs better than unconstrained data fitting when prior information exists, such as the spatial invariance of the lifetimes of individual fluorescent species. The highly coupled nature of global analysis often results in a significantly slower convergence of the data fitting algorithm as compared with unconstrained analysis. Convergence speed can be greatly accelerated by providing appropriate initial guesses. Realizing that the image morphology often correlates with fluorophore distribution, a global fitting algorithm has been developed to assign initial guesses throughout an image based on a segmentation analysis. This algorithm was tested on both simulated data sets and time-domain lifetime measurements. We have successfully measured fluorophore distribution in fibroblasts stained with Hoechst and calcein. This method further allows second harmonic generation from collagen and elastin autofluorescence to be differentiated in fluorescence lifetime imaging microscopy images of ex vivo human skin. On our experimental measurement, this algorithm increased convergence speed by over two orders of magnitude and achieved significantly better fits. Copyright 2004 Biophysical Society

  13. Improving cervical region of interest by eliminating vaginal walls and cotton-swabs for automated image analysis

    NASA Astrophysics Data System (ADS)

    Venkataraman, Sankar; Li, Wenjing

    2008-03-01

    Image analysis for automated diagnosis of cervical cancer has attained high prominence in the last decade. Automated image analysis at all levels requires a basic segmentation of the region of interest (ROI) within a given image. The precision of the diagnosis is often reflected by the precision in detecting the initial region of interest, especially when some features outside the ROI mimic the ones within it. The work described here discusses algorithms that are used to improve the cervical region of interest as a part of automated cervical image diagnosis. A vital visual aid in diagnosing cervical cancer is the aceto-whitening of the cervix after the application of acetic acid. Color and texture are used to segment acetowhite regions within the cervical ROI. Vaginal walls along with cotton swabs sometimes mimic these essential features, leading to several false positives. The work presented here is focused on detecting in-focus vaginal wall boundaries and then extrapolating them to exclude vaginal walls from the cervical ROI. In addition, a marker-controlled watershed segmentation is used to detect cotton swabs within the cervical ROI. A dataset comprising 50 high-resolution images of the cervix acquired after 60 seconds of acetic acid application was used to test the algorithm. Out of the 50 images, 27 benefited from a new cervical ROI. Significant improvement in overall diagnosis was observed in these images, as false positives caused by features outside the actual ROI mimicking the acetowhite region were eliminated.
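
    A minimal sketch of a marker-controlled watershed of the kind mentioned above, using scikit-image: a Sobel gradient serves as the elevation map and quantile-based markers seed the flooding. The marker rules and thresholds are illustrative assumptions, not the swab-detection criteria used in the paper.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def marker_controlled_watershed(gray, low=0.2, high=0.8):
    """Flood a gradient image from dark (background-like) and bright (swab-like)
    markers, then return a filled mask of the bright-marker basin."""
    elevation = sobel(gray)
    markers = np.zeros_like(gray, dtype=int)
    markers[gray < np.quantile(gray, low)] = 1    # background-like seed
    markers[gray > np.quantile(gray, high)] = 2   # bright, swab-like seed
    labels = watershed(elevation, markers)
    return ndi.binary_fill_holes(labels == 2)
```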

  14. Comparison of pre-processing techniques for fluorescence microscopy images of cells labeled for actin.

    PubMed

    Muralidhar, Gautam S; Channappayya, Sumohana S; Slater, John H; Blinka, Ellen M; Bovik, Alan C; Frey, Wolfgang; Markey, Mia K

    2008-11-06

    Automated analysis of fluorescence microscopy images of endothelial cells labeled for actin is important for quantifying changes in the actin cytoskeleton. The current manual approach is laborious and inefficient. The goal of our work is to develop automated image analysis methods, thereby increasing cell analysis throughput. In this study, we present preliminary results on comparing different algorithms for cell segmentation and image denoising.

  15. The border-to-border distribution method for analysis of cytoplasmic particles and organelles.

    PubMed

    Yacovone, Shalane K; Ornelles, David A; Lyles, Douglas S

    2016-02-01

    Comparing the distribution of cytoplasmic particles and organelles between different experimental conditions can be challenging due to the heterogeneous nature of cell morphologies. The border-to-border distribution method was created to enable the quantitative analysis of fluorescently labeled cytoplasmic particles and organelles of multiple cells from images obtained by confocal microscopy. The method consists of four steps: (1) imaging of fluorescently labeled cells, (2) division of the image of the cytoplasm into radial segments, (3) selection of segments of interest, and (4) population analysis of fluorescence intensities at the pixel level either as a function of distance along the selected radial segments or as a function of angle around an annulus. The method was validated using the well-characterized effect of brefeldin A (BFA) on the distribution of the vesicular stomatitis virus G protein, in which intensely labeled Golgi membranes are redistributed within the cytoplasm. Surprisingly, in untreated cells, the distribution of fluorescence in Golgi membrane-containing radial segments was similar to the distribution of fluorescence in other G protein-containing segments, indicating that the presence of Golgi membranes did not shift the distribution of G protein towards the nucleus compared to the distribution of G protein in other regions of the cell. Treatment with BFA caused only a slight shift in the distribution of the brightest G protein-containing segments which had a distribution similar to that in untreated cells. Instead, the major effect of BFA was to alter the annular distribution of G protein in the perinuclear region.
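
    A simplified stand-in for the radial part of the border-to-border analysis, assuming a fluorescence image, a reference point (e.g. the nucleus centre) and an optional cell mask: pixel intensities are binned by normalized distance from the reference point out to the border. Function and parameter names are hypothetical.

```python
import numpy as np

def radial_intensity_profile(intensity, center, n_bins=20, mask=None):
    """Mean intensity as a function of normalized distance from `center` (row, col),
    restricted to `mask` (the cell) if given; returns one value per radial bin."""
    yy, xx = np.indices(intensity.shape)
    r = np.hypot(yy - center[0], xx - center[1])
    if mask is None:
        mask = np.ones_like(intensity, dtype=bool)
    r_norm = r[mask] / r[mask].max()
    bins = np.minimum((r_norm * n_bins).astype(int), n_bins - 1)
    values = intensity[mask]
    return np.array([values[bins == b].mean() if np.any(bins == b) else np.nan
                     for b in range(n_bins)])
```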

  16. Robust Nucleus/Cell Detection and Segmentation in Digital Pathology and Microscopy Images: A Comprehensive Review

    PubMed Central

    Xing, Fuyong; Yang, Lin

    2016-01-01

    Digital pathology and microscopy image analysis is widely used for comprehensive studies of cell morphology or tissue structure. Manual assessment is labor intensive and prone to inter-observer variations. Computer-aided methods, which can significantly improve objectivity and reproducibility, have attracted a great deal of interest in the recent literature. Within the pipeline of building a computer-aided diagnosis system, nucleus or cell detection and segmentation play a very important role in describing the molecular morphological information. In the past few decades, many efforts have been devoted to automated nucleus/cell detection and segmentation. In this review, we provide a comprehensive summary of the recent state-of-the-art nucleus/cell segmentation approaches on different types of microscopy images including bright-field, phase-contrast, differential interference contrast (DIC), fluorescence, and electron microscopies. In addition, we discuss the challenges for the current methods and the potential future work of nucleus/cell detection and segmentation. PMID:26742143

  17. En face spectral domain optical coherence tomography analysis of lamellar macular holes.

    PubMed

    Clamp, Michael F; Wilkes, Geoff; Leis, Laura S; McDonald, H Richard; Johnson, Robert N; Jumper, J Michael; Fu, Arthur D; Cunningham, Emmett T; Stewart, Paul J; Haug, Sara J; Lujan, Brandon J

    2014-07-01

    To analyze the anatomical characteristics of lamellar macular holes using cross-sectional and en face spectral domain optical coherence tomography. Forty-two lamellar macular holes were retrospectively identified for analysis. The location, cross-sectional length, and area of lamellar holes were measured using B-scans and en face imaging. The presence of photoreceptor inner segment/outer segment disruption and the presence or absence of epiretinal membrane formation were recorded. Forty-two lamellar macular holes were identified. Intraretinal splitting occurred within the outer plexiform layer in 97.6% of eyes. The area of intraretinal splitting in lamellar holes did not correlate with visual acuity. Eyes with inner segment/outer segment disruption had significantly worse mean logMAR visual acuity (0.363 ± 0.169; Snellen = 20/46) than eyes without inner segment/outer segment disruption (0.203 ± 0.124; Snellen = 20/32) (analysis of variance, P = 0.004). Epiretinal membrane was present in 34 of 42 eyes (81.0%). En face imaging allowed for consistent detection and quantification of intraretinal splitting within the outer plexiform layer in patients with lamellar macular holes, supporting the notion that an area of anatomical weakness exists within Henle's fiber layer, presumably at the synaptic connection of these fibers within the outer plexiform layer. However, the en face area of intraretinal splitting did not correlate with visual acuity, whereas disruption of the inner segment/outer segment junction was associated with significantly worse visual acuity in patients with lamellar macular holes.

  18. Partial Membership Latent Dirichlet Allocation for Soft Image Segmentation.

    PubMed

    Chen, Chao; Zare, Alina; Trinh, Huy N; Omotara, Gbenga O; Cobb, James Tory; Lagaunne, Timotius A

    2017-12-01

    Topic models [e.g., probabilistic latent semantic analysis, latent Dirichlet allocation (LDA), and supervised LDA] have been widely used for segmenting imagery. However, these models are confined to crisp segmentation, forcing a visual word (i.e., an image patch) to belong to one and only one topic. Yet, there are many images in which some regions cannot be assigned a crisp categorical label (e.g., transition regions between a foggy sky and the ground or between sand and water at a beach). In these cases, a visual word is best represented with partial memberships across multiple topics. To address this, we present a partial membership LDA (PM-LDA) model and an associated parameter estimation algorithm. This model can be useful for imagery, where a visual word may be a mixture of multiple topics. Experimental results on visual and sonar imagery show that PM-LDA can produce both crisp and soft semantic image segmentations; a capability previous topic modeling methods do not have.

  19. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

    The focus is to examine how estimates of three dimensional scene structure, as encoded in a scene disparity map, can be improved by the analysis of the original monocular imagery. The utilization of surface illumination information is provided by the segmentation of the monocular image into fine surface patches of nearly homogeneous intensity to remove mismatches generated during stereo matching. These patches are used to guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. Such a technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented on a complex urban scene containing various man-made and natural features. This scene contains a variety of problems including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. The improvements are demonstrated due to monocular fusion with a set of different region-based image segmentations. The generality of this approach to stereo analysis and its utility in the development of general three dimensional scene interpretation systems are also discussed.

  20. A superpixel-based framework for automatic tumor segmentation on breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Yu, Ning; Wu, Jia; Weinstein, Susan P.; Gaonkar, Bilwaj; Keller, Brad M.; Ashraf, Ahmed B.; Jiang, YunQing; Davatzikos, Christos; Conant, Emily F.; Kontos, Despina

    2015-03-01

    Accurate and efficient automated tumor segmentation in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is highly desirable for computer-aided tumor diagnosis. We propose a novel automatic segmentation framework which incorporates mean-shift smoothing, superpixel-wise classification, pixel-wise graph-cuts partitioning, and morphological refinement. A set of 15 breast DCE-MR images, obtained from the American College of Radiology Imaging Network (ACRIN) 6657 I-SPY trial, were manually segmented to generate tumor masks (as ground truth) and breast masks (as regions of interest). Four state-of-the-art segmentation approaches based on diverse models were also utilized for comparison. Based on five standard evaluation metrics for segmentation, the proposed framework consistently outperformed all other approaches. The performance of the proposed framework was: 1) 0.83 for Dice similarity coefficient, 2) 0.96 for pixel-wise accuracy, 3) 0.72 for VOC score, 4) 0.79 mm for mean absolute difference, and 5) 11.71 mm for maximum Hausdorff distance, which surpassed the second best method (i.e., adaptive geodesic transformation), a semi-automatic algorithm depending on precise initialization. Our results suggest promising potential applications of our segmentation framework in assisting analysis of breast carcinomas.
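
    A minimal sketch of the superpixel stage, assuming scikit-image >= 0.19 and a 2D grey-scale DCE-MRI slice; SLIC stands in for the superpixel generation step, and per-superpixel mean intensity is used as a toy feature for the subsequent superpixel-wise classification (the mean-shift smoothing, graph-cuts partitioning, and morphological refinement are not reproduced).

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_mean_features(image, n_segments=500):
    """Over-segment a 2D grey-scale image into compact superpixels and return the
    label map plus the mean intensity of each superpixel as a simple feature."""
    labels = slic(image, n_segments=n_segments, compactness=0.1,
                  channel_axis=None, start_label=0)
    feats = np.array([image[labels == i].mean() for i in np.unique(labels)])
    return labels, feats
```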

  1. Segmentation of hepatic artery in multi-phase liver CT using directional dilation and connectivity analysis

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Schnurr, Alena-Kathrin; Zidowitz, Stephan; Georgii, Joachim; Zhao, Yue; Razavi, Mohammad; Schwier, Michael; Hahn, Horst K.; Hansen, Christian

    2016-03-01

    Segmentation of hepatic arteries in multi-phase computed tomography (CT) images is indispensable in liver surgery planning. During image acquisition, the hepatic artery is enhanced by the injection of contrast agent. The enhanced signals are often not stably acquired due to non-optimal contrast timing. Other vascular structures, such as the hepatic vein or portal vein, can be enhanced as well in the arterial phase, which can adversely affect the segmentation results. Furthermore, the arteries might suffer from partial volume effects due to their small diameter. To overcome these difficulties, we propose a framework for robust hepatic artery segmentation requiring a minimal amount of user interaction. First, an efficient multi-scale Hessian-based vesselness filter is applied to the artery-phase CT image, aiming to enhance vessel structures within a specified diameter range. Second, the vesselness response is processed using a Bayesian classifier to identify the most probable vessel structures. Because the vesselness filter typically does not perform ideally at vessel bifurcations or on segments corrupted by noise, two vessel-reconnection techniques are proposed. The first technique uses a directional morphological operator to dilate vessel segments along their centerline directions, attempting to fill the gap between broken vascular segments. The second technique analyzes the connectivity of vessel segments and reconnects disconnected segments and branches. Finally, a 3D vessel tree is reconstructed. The algorithm has been evaluated using 18 CT images of the liver. To quantitatively measure the similarity between segmented and reference vessel trees, the skeleton coverage and mean symmetric distance were calculated to quantify the agreement between reference and segmented vessel skeletons, resulting in averages of 0.55 ± 0.27 and 12.7 ± 7.9 mm (mean ± standard deviation), respectively.
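
    As a sketch of the vessel-enhancement step, scikit-image's Frangi filter (a standard multi-scale Hessian-based vesselness measure) can stand in for the authors' filter; the sigma range and quantile threshold below are illustrative, and the Bayesian classification, directional dilation, and connectivity analysis are not reproduced.

```python
import numpy as np
from skimage.filters import frangi

def multiscale_vesselness(volume, sigmas=(1, 2, 3, 4)):
    """Multi-scale Hessian-based vesselness; bright tubular structures within the
    radius range implied by `sigmas` are enhanced (black_ridges=False)."""
    return frangi(volume, sigmas=sigmas, black_ridges=False)

def simple_vessel_mask(volume, sigmas=(1, 2, 3, 4), quantile=0.99):
    """Illustrative thresholding of the vesselness response into a rough vessel mask."""
    v = multiscale_vesselness(volume, sigmas)
    return v > np.quantile(v, quantile)
```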

  2. Recent advances in quantitative analysis of fluid interfaces in multiphase fluid flow measured by synchrotron-based x-ray microtomography

    NASA Astrophysics Data System (ADS)

    Schlueter, S.; Sheppard, A.; Wildenschild, D.

    2013-12-01

    Imaging of fluid interfaces in three-dimensional porous media via x-ray microtomography is an efficient means to test thermodynamically derived predictions on the relationship between capillary pressure, fluid saturation and specific interfacial area (Pc-Sw-Anw) in partially saturated porous media. Various experimental studies exist to date that validate the uniqueness of the Pc-Sw-Anw relationship under static conditions, and with current technological progress, direct imaging of moving interfaces under dynamic conditions is also becoming available. Image acquisition and subsequent image processing currently involve many steps, each prone to operator bias, such as merging different scans of the same sample obtained at different beam energies into a single image or the generation of isosurfaces from the segmented multiphase image on which the interface properties are usually calculated. We demonstrate that with recent advancements in (i) image enhancement methods, (ii) multiphase segmentation methods and (iii) methods of structural analysis we can considerably decrease the time and cost of image acquisition and the uncertainty associated with the measurement of interfacial properties. In particular, we highlight three notorious problems in multiphase image processing and provide efficient solutions for each: (i) Due to noise, partial volume effects, and imbalanced volume fractions, automated histogram-based threshold detection methods frequently fail. However, these impairments can be mitigated with modern denoising methods, special treatment of gray value edges and adaptive histogram equalization, such that most of the standard methods for threshold detection (Otsu, fuzzy c-means, minimum error, maximum entropy) coincide at the same set of values. (ii) Partial volume effects due to blur may produce apparent water films around solid surfaces that alter the specific fluid-fluid interfacial area (Anw) considerably. In a synthetic test image some local segmentation methods like Bayesian Markov random field, converging active contours and watershed segmentation reduced the error in Anw associated with apparent water films from 21% to 6-11%. (iii) The generation of isosurfaces from the segmented data usually requires a lot of postprocessing in order to smooth the surface and check for consistency errors. This can be avoided by calculating specific interfacial areas directly on the segmented voxel image by means of Minkowski functionals, which is highly efficient and less error-prone.
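
    A minimal sketch of measuring a fluid-fluid interfacial area directly on the segmented voxel image, as advocated in point (iii): count the voxel faces shared by the two fluid phases. This face-counting surrogate is an assumption standing in for the Minkowski-functional computation, and it overestimates the area of smooth interfaces by a known geometric factor.

```python
import numpy as np

def voxel_interfacial_area(labels, phase_a, phase_b, voxel_size=1.0):
    """Interfacial area between two phases of a labeled 3D voxel image, estimated as
    (number of shared voxel faces) * (face area); no isosurface is generated."""
    faces = 0
    for axis in range(labels.ndim):
        a = np.take(labels, range(labels.shape[axis] - 1), axis=axis)
        b = np.take(labels, range(1, labels.shape[axis]), axis=axis)
        faces += np.count_nonzero((a == phase_a) & (b == phase_b))
        faces += np.count_nonzero((a == phase_b) & (b == phase_a))
    return faces * voxel_size ** 2
```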

  3. Leveraging unsupervised training sets for multi-scale compartmentalization in renal pathology

    NASA Astrophysics Data System (ADS)

    Lutnick, Brendon; Tomaszewski, John E.; Sarder, Pinaki

    2017-03-01

    Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time consuming and often error-prone. Application of computer vision segmentation algorithms to histopathological image analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically relevant structures from histopathological images at multiple scales. Segmentation is achieved by solving an energy optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model Hamiltonian, adopted from theoretical physics, modeling interacting electron spins. Pixel relationships (modeled as edges) are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine to apply the segmentation over the full image. Accurate segmentations of images with as many as 10^6 pixels can be completed in only 5 s, allowing for attainable multi-scale visualization. To establish clinical potential, we employed our method in renal biopsies to quantitatively visualize for the first time scale-variant compartments of heterogeneous intra- and extraglomerular structures simultaneously. Implications of the utility of our method extend to fields such as oncology, genomics, and non-biological problems.
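
    To make the Potts-model formulation concrete, the sketch below evaluates a Potts-style energy for a candidate labeling on a 4-connected pixel grid, with couplings weighted by intensity similarity; the coupling form and parameters are illustrative, and the iterative minimization, Cantor-pairing graph reduction, and SVM stage are not reproduced.

```python
import numpy as np

def potts_energy(labels, image, J=1.0, sigma=0.1):
    """Potts-style energy: neighbouring pixels with similar intensities pay a
    penalty J*exp(-(dI)^2 / 2*sigma^2) when assigned different labels."""
    energy = 0.0
    for axis in (0, 1):
        s1 = np.take(labels, range(labels.shape[axis] - 1), axis=axis)
        s2 = np.take(labels, range(1, labels.shape[axis]), axis=axis)
        i1 = np.take(image, range(image.shape[axis] - 1), axis=axis)
        i2 = np.take(image, range(1, image.shape[axis]), axis=axis)
        coupling = J * np.exp(-((i1 - i2) ** 2) / (2 * sigma ** 2))
        energy += np.sum(coupling * (s1 != s2))
    return energy
```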

  4. Repeatability of Non–Contrast-Enhanced Lower-Extremity Angiography Using the Flow-Spoiled Fresh Blood Imaging

    PubMed Central

    Zhang, Yuyang; Xing, Zhen; She, Dejun; Huang, Nan; Cao, Dairong

    2018-01-01

    Purpose The aim of this study was to prospectively evaluate the repeatability of non–contrast-enhanced lower-extremity magnetic resonance angiography using flow-spoiled fresh blood imaging (FS-FBI). Methods Forty-three healthy volunteers and 15 patients with lower-extremity arterial stenosis were recruited in this study and were examined by FS-FBI. Digital subtraction angiography was performed within a week after the FS-FBI in the patient group. Repeatability was assessed by the following parameters: grading of image quality, diameter and area of major arteries, and grading of stenosis of lower-extremity arteries. Two experienced radiologists blinded to patient data independently evaluated the FS-FBI and digital subtraction angiography images. Intraclass correlation coefficients (ICCs), sensitivity, and specificity were used for statistical analysis. Results The grading of image quality of most data was satisfactory. The ICCs for the first and second measures were 0.792 and 0.884 in the femoral segment and 0.803 and 0.796 in the tibiofibular segment for the healthy volunteer group, and 0.873 and 1.000 in the femoral segment and 0.737 and 0.737 in the tibiofibular segment for the patient group. Intraobserver and interobserver agreements on diameter and area of arteries were excellent, with ICCs mostly greater than 0.75 in the volunteer group. For the stenosis grading analysis, intraobserver ICCs ranged from 0.784 to 0.862 and from 0.778 to 0.854, respectively. Flow-spoiled fresh blood imaging yielded a sensitivity and specificity for detecting arterial stenosis or occlusion of at least 90% and 80% for the femoral segment and 86.7% and 93.3% for the tibiofibular segment. Conclusions Lower-extremity angiography with FS-FBI is a reliable and reproducible screening tool for lower-extremity atherosclerotic disease, especially for patients with impaired renal function. PMID:28787351

  5. Dual-modality brain PET-CT image segmentation based on adaptive use of functional and anatomical information.

    PubMed

    Xia, Yong; Eberl, Stefan; Wen, Lingfeng; Fulham, Michael; Feng, David Dagan

    2012-01-01

    Dual medical imaging modalities, such as PET-CT, are now a routine component of clinical practice. Medical image segmentation methods, however, have generally only been applied to single modality images. In this paper, we propose the dual-modality image segmentation model to segment brain PET-CT images into gray matter, white matter and cerebrospinal fluid. This model converts PET-CT image segmentation into an optimization process controlled simultaneously by PET and CT voxel values and spatial constraints. It is innovative in the creation and application of the modality discriminatory power (MDP) coefficient as a weighting scheme to adaptively combine the functional (PET) and anatomical (CT) information on a voxel-by-voxel basis. Our approach relies upon allowing the modality with higher discriminatory power to play a more important role in the segmentation process. We compared the proposed approach to three other image segmentation strategies, including PET-only based segmentation, combination of the results of independent PET image segmentation and CT image segmentation, and simultaneous segmentation of joint PET and CT images without an adaptive weighting scheme. Our results in 21 clinical studies showed that our approach provides the most accurate and reliable segmentation for brain PET-CT images. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Photovoltaic panel extraction from very high-resolution aerial imagery using region-line primitive association analysis and template matching

    NASA Astrophysics Data System (ADS)

    Wang, Min; Cui, Qi; Sun, Yujie; Wang, Qiao

    2018-07-01

    In object-based image analysis (OBIA), object classification performance is jointly determined by image segmentation, sample or rule setting, and classifiers. Typically, as a crucial step to obtain object primitives, image segmentation quality significantly influences subsequent feature extraction and analyses. By contrast, template matching extracts specific objects from images and prevents shape defects caused by image segmentation. However, creating or editing templates is tedious and sometimes results in incomplete or inaccurate templates. In this study, we combine OBIA and template matching techniques to address these problems and aim for accurate photovoltaic panel (PVP) extraction from very high-resolution (VHR) aerial imagery. The proposed method is based on the previously proposed region-line primitive association framework, in which complementary information between region (segment) and line (straight line) primitives is utilized to achieve a more powerful performance than routine OBIA. Several novel concepts, including the mutual fitting ratio and best-fitting template based on region-line primitive association analyses, are proposed. Automatic template generation and matching method for PVP extraction from VHR imagery are designed for concept and model validation. Results show that the proposed method can successfully extract PVPs without any user-specified matching template or training sample. High user independency and accuracy are the main characteristics of the proposed method in comparison with routine OBIA and template matching techniques.

  7. Strategies for the Segmentation of Subcutaneous Vascular Patterns in Thermographic Images

    NASA Astrophysics Data System (ADS)

    Chan, Eric K. Y.; Pearce, John A.

    1989-05-01

    Computer-assisted segmentation of vascular patterns in thermographic images provides the clinician with graphic outlines of thermally significant subcutaneous blood vessels. Segmentation strategies compared here consist of image smoothing protocols followed by thresholding and zero-crossing edge detectors. Median prefiltering followed by the Frei-Chen algorithm gave the most reproducible results, with an execution time of 143 seconds for 256 × 256 images. The Laplacian of Gaussian operator was not suitable due to streak artifacts in the thermographic imaging system. This computerized process may be adopted in a fast-paced clinical environment to aid in the diagnosis and assessment of peripheral circulatory diseases such as Raynaud's disease, phlebitis, and varicose veins, as well as diseases of the autonomic nervous system. The same methodology may be applied to enhance the appearance of abnormal breast vascular patterns, and hence serve as an adjunct to mammography in the diagnosis of breast cancer. The automatically segmented vascular patterns, which have a hand-drawn appearance, may also be used as a data reduction precursor to higher-level pattern analysis and classification tasks.
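
    A simplified sketch of the reported best pipeline (median prefiltering followed by Frei-Chen edge detection), using only the two Frei-Chen isotropic gradient masks rather than the full nine-mask edge-subspace projection; the filter size and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve, median_filter

SQ2 = np.sqrt(2.0)
# Frei-Chen isotropic gradient masks (the first two masks of the Frei-Chen basis)
G1 = np.array([[1, SQ2, 1], [0, 0, 0], [-1, -SQ2, -1]]) / (2 * SQ2)
G2 = G1.T

def frei_chen_edges(thermogram, median_size=5, threshold=0.1):
    """Median-prefilter the thermogram, compute the Frei-Chen gradient magnitude,
    and threshold it relative to its maximum to outline vessel-like edges."""
    smoothed = median_filter(thermogram.astype(float), size=median_size)
    gx = convolve(smoothed, G1)
    gy = convolve(smoothed, G2)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold * magnitude.max()
```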

  8. Automatic macroscopic characterization of diesel sprays by means of a new image processing algorithm

    NASA Astrophysics Data System (ADS)

    Rubio-Gómez, Guillermo; Martínez-Martínez, S.; Rua-Mojica, Luis F.; Gómez-Gordo, Pablo; de la Garza, Oscar A.

    2018-05-01

    A novel algorithm is proposed for the automatic segmentation of diesel spray images and the calculation of their macroscopic parameters. The algorithm automatically detects each spray present in an image, and therefore it is able to work with diesel injectors with a different number of nozzle holes without any modification. The main characteristic of the algorithm is that it splits each spray into three different regions and then segments each one with an individually calculated binarization threshold. Each threshold level is calculated from the analysis of a representative luminosity profile of each region. This approach makes it robust to irregular light distribution along a single spray and between different sprays of an image. Once the sprays are segmented, the macroscopic parameters of each one are calculated. The algorithm is tested with two sets of diesel spray images taken under normal and irregular illumination setups.
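
    A minimal sketch of the per-region binarization idea, assuming a single-spray region of interest oriented along one axis: split it into three regions and threshold each independently (Otsu stands in here for the luminosity-profile-based threshold described in the paper), then reassemble the mask.

```python
import numpy as np
from skimage.filters import threshold_otsu

def segment_spray_three_regions(spray_roi, axis=0):
    """Binarize each of three axial regions of a single-spray ROI with its own
    threshold and concatenate the resulting masks back into one image."""
    thirds = np.array_split(spray_roi, 3, axis=axis)
    masks = [region > threshold_otsu(region) for region in thirds]
    return np.concatenate(masks, axis=axis)
```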

  9. Shape-Constrained Segmentation Approach for Arctic Multiyear Sea Ice Floe Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Brucker, Ludovic; Ivanoff, Alvaro; Tilton, James C.

    2013-01-01

    The melting of sea ice is correlated with increases in sea surface temperature and associated climatic changes. Therefore, it is important to investigate how rapidly sea ice floes melt. For this purpose, a new TempoSeg method for multitemporal segmentation of multiyear ice floes is proposed. A microwave radiometer is used to track the position of an ice floe. Then, a time series of MODIS images is created with the ice floe in the image center. The TempoSeg method segments these images into two regions: Floe and Background. First, morphological feature extraction is applied. Then, the central image pixel is marked as Floe, and shape-constrained best-merge region growing is performed. The resulting two-region map is post-filtered by applying morphological operators. We have successfully tested our method on a set of MODIS images and estimated the area of a sea ice floe as a function of time.

  10. 3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging

    NASA Astrophysics Data System (ADS)

    Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak

    2017-10-01

    Three dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security uses and robot navigation. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video as conventionally performed (such as background subtraction or block matching). This means that the movement properties do not significantly affect the detection quality. The object detection is performed by analyzing static 3D image data obtained through computational integral imaging. In contrast to previous works that used integral imaging data in such a scenario, the proposed method performs 3D tracking of objects without prior information about the objects in the scene, and it is found to be efficient under severe noise conditions.

  11. An automated image analysis framework for segmentation and division plane detection of single live Staphylococcus aureus cells which can operate at millisecond sampling time scales using bespoke Slimfield microscopy

    NASA Astrophysics Data System (ADS)

    Wollman, Adam J. M.; Miller, Helen; Foster, Simon; Leake, Mark C.

    2016-10-01

    Staphylococcus aureus is an important pathogen, giving rise to antimicrobial resistance in cell strains such as Methicillin Resistant S. aureus (MRSA). Here we report an image analysis framework for automated detection and image segmentation of cells in S. aureus cell clusters, and explicit identification of their cell division planes. We use a new combination of several existing analytical tools of image analysis to detect cellular and subcellular morphological features relevant to cell division from millisecond time scale sampled images of live pathogens at a detection precision of single molecules. We demonstrate this approach using a fluorescent reporter GFP fused to the protein EzrA that localises to a mid-cell plane during division and is involved in regulation of cell size and division. This image analysis framework presents a valuable platform from which to study candidate new antimicrobials which target the cell division machinery, but may also have more general application in detecting morphologically complex structures of fluorescently labelled proteins present in clusters of other types of cells.

  12. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.

    PubMed

    Kumar, Neeraj; Verma, Ruchika; Sharma, Sanuj; Bhargava, Surabhi; Vahadane, Abhishek; Sethi, Amit

    2017-07-01

    Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.

  13. Texture analysis based on the Hermite transform for image classification and segmentation

    NASA Astrophysics Data System (ADS)

    Estudillo-Romero, Alfonso; Escalante-Ramirez, Boris; Savage-Carmona, Jesus

    2012-06-01

    Texture analysis has become an important task in image processing because it is used as a preprocessing stage in different research areas including medical image analysis, industrial inspection, segmentation of remotely sensed imagery, and multimedia indexing and retrieval. In order to extract visual texture features, a texture image analysis technique based on the Hermite transform is presented. Psychovisual evidence suggests that Gaussian derivatives fit the receptive field profiles of mammalian visual systems. The Hermite transform describes locally basic texture features in terms of Gaussian derivatives. Multiresolution analysis combined with several analysis orders provides detection of patterns that characterize every texture class. The analysis of the local maximum energy direction and steering of the transformation coefficients increases the method's robustness against texture orientation. This method presents an advantage over classical filter bank design because in the latter a fixed number of orientations for the analysis has to be selected. During the training stage, a subset of the Hermite analysis filters is chosen in order to improve the inter-class separability and to reduce the dimensionality of the feature vectors and the computational cost during the classification stage. We exhaustively evaluated the correct classification rate on real, randomly selected training and testing texture subsets using several kinds of commonly used texture features. A comparison between different distance measurements is also presented. Results of unsupervised real texture segmentation using this approach, and comparison with previous approaches, showed the benefits of our proposal.
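
    Since the Hermite transform describes local texture in terms of Gaussian derivatives, a simplified stand-in is a multi-scale Gaussian-derivative filter bank; the scales and maximum order below are illustrative, and the steering and energy-direction analysis are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_derivative_features(image, sigmas=(1, 2, 4), max_order=2):
    """Stack Gaussian-derivative responses up to `max_order` at several scales into a
    per-pixel feature vector (a simplified stand-in for Hermite coefficients)."""
    responses = []
    for sigma in sigmas:
        for oy in range(max_order + 1):
            for ox in range(max_order + 1 - oy):
                responses.append(gaussian_filter(image.astype(float), sigma,
                                                 order=(oy, ox)))
    return np.stack(responses, axis=-1)
```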

  14. Right ventricular strain analysis from three-dimensional echocardiography by using temporally diffeomorphic motion estimation.

    PubMed

    Zhang, Zhijun; Zhu, Meihua; Ashraf, Muhammad; Broberg, Craig S; Sahn, David J; Song, Xubo

    2014-12-01

    Quantitative analysis of right ventricle (RV) motion is important for studying the mechanism of congenital and acquired diseases. Unlike the left ventricle (LV), motion estimation of the RV is more difficult because of its complex shape and thin myocardium. Although attempts with finite element models on MR images and speckle tracking on echocardiography have shown promising results for RV strain analysis, these methods can be improved because the temporal smoothness of the motion is not considered. In their earlier work, the authors proposed a temporally diffeomorphic motion estimation method in which a spatiotemporal transformation is estimated by optimizing a registration energy functional of the velocity field. The proposed motion estimation method is a fully automatic process for general image sequences. The authors apply the method, combined with a semiautomatic myocardium segmentation method, to RV strain analysis of three-dimensional (3D) echocardiographic sequences of five open-chest pigs under different steady states. The authors compare the peak two-point strains derived by their method with those estimated from sonomicrometry; the results show that they have a high correlation. The motion of the right ventricular free wall is studied by using segmental strains. The baseline sequence results show that the segmental strains in their method are consistent with results obtained from other image modalities such as MRI. The image sequences of pacing steady states show that segments with the largest strain variation coincide with the pacing sites. The high correlation of the peak two-point strains between their method and sonomicrometry under different steady states demonstrates that their RV motion estimation has high accuracy. The closeness of the segmental strains of their method to those from MRI shows the feasibility of their method for studying RV function using 3D echocardiography. The strain analysis of the pacing steady states shows the potential utility of their method in the study of RV diseases.
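
    Once a dense displacement field has been estimated, strain follows from standard continuum mechanics. The sketch below computes the Green-Lagrange strain tensor from a 3-D displacement field on a regular grid; it illustrates the strain step only, not the authors' temporally diffeomorphic motion estimation, and the array layout is an assumption.

        # Green-Lagrange strain E = 0.5 * (F^T F - I) from a displacement field of shape (3, Z, Y, X).
        import numpy as np

        def green_lagrange_strain(disp: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
            # grad_u[i, j] = d u_i / d x_j at every voxel
            grad_u = np.stack([np.stack(np.gradient(disp[i], *spacing), axis=0) for i in range(3)], axis=0)
            identity = np.eye(3).reshape(3, 3, 1, 1, 1)
            F = identity + grad_u                                  # deformation gradient
            FtF = np.einsum('ki...,kj...->ij...', F, F)            # F^T F, contracted over components
            return 0.5 * (FtF - identity)                          # shape (3, 3, Z, Y, X)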

  15. Automatic multiscale enhancement and segmentation of pulmonary vessels in CT pulmonary angiography images for CAD applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Chuan; Chan, H.-P.; Sahiner, Berkman

    2007-12-15

    The authors are developing a computerized pulmonary vessel segmentation method for a computer-aided pulmonary embolism (PE) detection system on computed tomographic pulmonary angiography (CTPA) images. Because PE only occurs inside pulmonary arteries, an automatic and accurate segmentation of the pulmonary vessels in 3D CTPA images is an essential step for the PE CAD system. To segment the pulmonary vessels within the lung, the lung regions are first extracted using expectation-maximization (EM) analysis and morphological operations. The authors developed a 3D multiscale filtering technique to enhance the pulmonary vascular structures based on the analysis of eigenvalues of the Hessian matrix at multiple scales. A new response function of the filter was designed to enhance all vascular structures, including vessel bifurcations, and to suppress nonvessel structures such as the lymphoid tissues surrounding the vessels. An EM estimation is then used to segment the vascular structures by extracting the high-response voxels at each scale. The vessel tree is finally reconstructed by integrating the segmented vessels at all scales based on a connected-component analysis. Two CTPA cases containing PEs were used to evaluate the performance of the system. One of these two cases also contained pleural effusion. Two experienced thoracic radiologists provided the gold standard of pulmonary vessels, including both arteries and veins, by manually tracking the arterial tree and marking the center of the vessels using a computer graphical user interface. The accuracy of vessel tree segmentation was evaluated by the percentage of the gold-standard vessel center points overlapping with the segmented vessels. The results show that 96.2% (2398/2494) and 96.3% (1910/1984) of the manually marked center points in the arteries overlapped with segmented vessels for the cases without and with other lung diseases, respectively. For the manually marked center points in all vessels, including arteries and veins, the segmentation accuracies are 97.0% (4546/4689) and 93.8% (4439/4732) for the cases without and with other lung diseases, respectively. Because of the lack of ground truth for the vessels, visual inspection was conducted in addition to the quantitative evaluation of vessel segmentation performance. The results demonstrate that vessel segmentation using our method can extract the pulmonary vessels accurately and is not degraded by PE occlusion of the vessels in these test cases.
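
    As a rough illustration of multiscale Hessian-based enhancement, the sketch below uses the related Frangi vesselness filter from scikit-image followed by a simple percentile threshold; it is not the authors' custom response function or EM-based extraction, and the scales and threshold are assumptions.

        # Multiscale Hessian-eigenvalue vessel enhancement on a lung-masked CTPA volume `ctpa` (3-D array).
        import numpy as np
        from skimage.filters import frangi

        def enhance_and_segment_vessels(ctpa: np.ndarray, scales=(1, 2, 3, 4), percentile=97.5) -> np.ndarray:
            # frangi() analyzes Hessian eigenvalues at each scale and keeps the strongest response per voxel.
            vesselness = frangi(ctpa, sigmas=scales, black_ridges=False)
            return vesselness > np.percentile(vesselness, percentile)   # crude stand-in for the EM step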

  16. Automatic segmentation of fluorescence lifetime microscopy images of cells using multiresolution community detection--a first study.

    PubMed

    Hu, D; Sarder, P; Ronhovde, P; Orthaus, S; Achilefu, S; Nussinov, Z

    2014-01-01

    Inspired by a multiresolution community detection based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Furthermore, using the proposed method, the mean-square error in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The multiresolution community detection method appeared to perform better than a popular spectral clustering-based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in mean-square error with increasing resolution. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.

  17. Automatic Segmentation of Fluorescence Lifetime Microscopy Images of Cells Using Multi-Resolution Community Detection -A First Study

    PubMed Central

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar

    2014-01-01

    Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Further, using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410
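
    A simplified version of the graph construction can be sketched as follows: pixels are nodes, 4-neighbour edges are weighted by FLT similarity, and a resolution-controlled community detection produces the segments. The Louvain routine from networkx (>= 2.8) is used here only as a stand-in for the authors' multi-resolution solver, and the similarity kernel is an assumption.

        # FLIM segmentation sketch: community detection on a pixel similarity graph.
        import numpy as np
        import networkx as nx
        from networkx.algorithms.community import louvain_communities

        def segment_flim(flt: np.ndarray, resolution: float = 1.0, sigma: float = 0.5) -> np.ndarray:
            h, w = flt.shape
            G = nx.Graph()
            for y in range(h):
                for x in range(w):
                    for dy, dx in ((0, 1), (1, 0)):               # 4-neighbour lattice edges
                        yy, xx = y + dy, x + dx
                        if yy < h and xx < w:
                            weight = float(np.exp(-((flt[y, x] - flt[yy, xx]) ** 2) / (2 * sigma ** 2)))
                            G.add_edge((y, x), (yy, xx), weight=weight)
            # Higher resolution -> more, smaller segments; lower resolution -> fewer, larger segments.
            communities = louvain_communities(G, weight="weight", resolution=resolution, seed=0)
            labels = np.zeros((h, w), dtype=int)
            for k, community in enumerate(communities):
                for (y, x) in community:
                    labels[y, x] = k
            return labels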

  18. A scale space based algorithm for automated segmentation of single shot tagged MRI of shearing deformation.

    PubMed

    Sprengers, Andre M J; Caan, Matthan W A; Moerman, Kevin M; Nederveen, Aart J; Lamerichs, Rolf M; Stoker, Jaap

    2013-04-01

    This study proposes a scale-space-based algorithm for automated segmentation of single-shot tagged images of modest SNR. Furthermore, the algorithm was designed for analysis of discontinuous or shearing types of motion, i.e., segmentation of broken tag patterns. The proposed algorithm utilises a non-linear scale space for automatic segmentation of single-shot tagged images. The algorithm's ability to automatically segment tagged shearing motion was evaluated in a numerical simulation and in vivo. A typical shearing deformation was simulated in a Shepp-Logan phantom, allowing for quantitative evaluation of the algorithm's success rate as a function of both SNR and the amount of deformation. For a qualitative in vivo evaluation, tagged images showing deformations in the calf muscles and eye movement of a healthy volunteer were acquired. Both the numerical simulation and the in vivo tagged data demonstrated the algorithm's ability to automatically segment single-shot tagged MR, provided that the SNR of the images is above 10 and the amount of deformation does not exceed the tag spacing. The latter constraint can be met by adjusting the tag delay or the tag spacing. The scale-space-based algorithm for automatic segmentation of single-shot tagged MR enables the application of tagged MR to complex (shearing) deformations and the processing of datasets with relatively low SNR.

  19. Groping for quantitative digital 3-D image analysis: an approach to quantitative fluorescence in situ hybridization in thick tissue sections of prostate carcinoma.

    PubMed

    Rodenacker, K; Aubele, M; Hutzler, P; Adiga, P S

    1997-01-01

    In molecular pathology, numerical chromosome aberrations have been found to be decisive for the prognosis of malignancy in tumours. The existence of such aberrations can be detected by interphase fluorescence in situ hybridization (FISH). The gain or loss of certain base sequences in the deoxyribonucleic acid (DNA) can be estimated by counting the number of FISH signals per cell nucleus. The quantitative evaluation of such events is a necessary condition for prospective use in diagnostic pathology. To avoid occlusions of signals, the cell nucleus has to be analyzed in three dimensions. Confocal laser scanning microscopy is the means to obtain series of optical thin sections from fluorescence-stained or marked material to fulfill the conditions mentioned above. A graphical user interface (GUI) to a software package for display, inspection, counting and (semi-)automatic analysis of 3-D images for pathologists is outlined, including the underlying methods of 3-D image interaction and segmentation that were developed. The preparative methods are briefly described. The main emphasis is given to the methodical questions of computer-aided analysis of large 3-D image data sets for pathologists. Several automated analysis steps can be performed for segmentation and subsequent quantification. However, in contrast to isolated or cultured cells, tumour material is a difficult material even for visual inspection. At present, a fully automated digital image analysis of 3-D data is not in sight. A semi-automatic segmentation method is thus presented here.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogunovic, Hrvoje; Pozo, Jose Maria; Villa-Uriol, Maria Cruz

    Purpose: To evaluate the suitability of an improved version of an automatic segmentation method based on geodesic active regions (GAR) for segmenting cerebral vasculature with aneurysms from 3D x-ray reconstruction angiography (3DRA) and time-of-flight magnetic resonance angiography (TOF-MRA) images available in the clinical routine. Methods: Three aspects of the GAR method have been improved: execution time, robustness to variability in imaging protocols, and robustness to variability in image spatial resolutions. The improved GAR was retrospectively evaluated on images from patients with intracranial aneurysms in the area of the Circle of Willis, imaged with two modalities: 3DRA and TOF-MRA. Images were obtained from two clinical centers, each using different imaging equipment. Evaluation included qualitative and quantitative analyses of the segmentation results on 20 images from 10 patients. The gold standard was built from 660 cross-sections (33 per image) of vessels and aneurysms, manually measured by interventional neuroradiologists. GAR has also been compared to an interactive segmentation method: isointensity surface extraction (ISE). In addition, since patients had been imaged with the two modalities, we performed an intermodality agreement analysis with respect to both the manual measurements and each of the two segmentation methods. Results: Both GAR and ISE differed from the gold standard within acceptable limits compared to the imaging resolution. GAR (ISE) had an average accuracy of 0.20 (0.24) mm for 3DRA and 0.27 (0.30) mm for TOF-MRA, and had a repeatability of 0.05 (0.20) mm. Compared to ISE, GAR had a lower qualitative error in the vessel region and a lower quantitative error in the aneurysm region. The repeatability of GAR was superior to that of the manual measurements and ISE. The intermodality agreement was similar between GAR and the manual measurements. Conclusions: The improved GAR method outperformed ISE qualitatively as well as quantitatively and is suitable for segmenting 3DRA and TOF-MRA images from the clinical routine.

  1. Segmentation of Polarimetric SAR Images Using Wavelet Transformation and Texture Features

    NASA Astrophysics Data System (ADS)

    Rezaeian, A.; Homayouni, S.; Safari, A.

    2015-12-01

    Polarimetric Synthetic Aperture Radar (PolSAR) sensors can collect useful observations of the Earth's surface and phenomena for various remote sensing applications, such as land cover mapping, change detection and target detection. These data can be acquired without the limitations of weather conditions, sun illumination and dust particles. As a result, SAR images, and in particular PolSAR images, are powerful tools for various environmental applications. Unlike optical images, SAR images suffer from unavoidable speckle, which makes their segmentation difficult. In this paper, we use the wavelet transformation for segmentation of PolSAR images. Our proposed method is based on multi-resolution analysis of texture features obtained from the wavelet transformation. Here, we use both gray-level and texture information. First, we produce coherency or covariance matrices and then generate the span image from them. In the next step, texture features are extracted from the sub-bands generated by the discrete wavelet transform (DWT). Finally, the PolSAR image is segmented using clustering methods such as fuzzy c-means (FCM) and k-means. We applied the proposed methodology to full-polarimetric SAR images acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) L-band system in July 2012 over an agricultural area in Winnipeg, Canada.
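
    Under stated assumptions, the pipeline can be sketched as follows: `span` is the 2-D span image computed from the coherency matrix, one DWT level supplies the sub-bands, local sub-band energy serves as the texture feature, and k-means stands in for the clustering stage (the paper also uses fuzzy c-means, which scikit-learn does not provide).

        # PolSAR span-image segmentation sketch: DWT sub-band energies clustered with k-means.
        import numpy as np
        import pywt
        from scipy.ndimage import uniform_filter
        from sklearn.cluster import KMeans

        def segment_span_image(span: np.ndarray, n_classes: int = 4, win: int = 9) -> np.ndarray:
            cA, (cH, cV, cD) = pywt.dwt2(np.log1p(span), "db2")      # one-level DWT of the log-span
            feats = [uniform_filter(band ** 2, size=win) for band in (cA, cH, cV, cD)]  # local sub-band energy
            X = np.stack(feats, axis=-1).reshape(-1, len(feats))
            labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(X)
            return labels.reshape(cA.shape)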

  2. Automated cell analysis tool for a genome-wide RNAi screen with support vector machine based supervised learning

    NASA Astrophysics Data System (ADS)

    Remmele, Steffen; Ritzerfeld, Julia; Nickel, Walter; Hesser, Jürgen

    2011-03-01

    RNAi-based high-throughput microscopy screens have become an important tool in the biological sciences for decrypting mostly unknown biological functions of human genes. However, manual analysis is impossible for such screens since the number of image data sets can often be in the hundreds of thousands. Reliable automated tools are thus required to analyse the fluorescence microscopy image data sets, which usually contain two or more reaction channels. The image analysis tool presented here is designed to analyse an RNAi screen investigating the intracellular trafficking and targeting of acylated Src kinases. In this specific screen, a data set consists of three reaction channels and the investigated cells can appear in different phenotypes. The main issues of the image processing task are an automatic cell segmentation, which has to be robust and accurate for all phenotypes, and a subsequent phenotype classification. The cell segmentation is done in two steps: the cell nuclei are segmented first, and a classifier-enhanced region growing based on the cell nuclei is then used to segment the cells. The classification of the cells is realized by a support vector machine which has to be trained manually using supervised learning. Furthermore, the tool is brightness invariant, allowing for different staining quality, and it provides a quality control that copes with typical defects during preparation and acquisition. A first version of the tool has already been successfully applied to an RNAi screen containing three hundred thousand image data sets, and the SVM-extended version is designed for additional screens.

  3. Identification of Alfalfa Leaf Diseases Using Image Recognition Technology

    PubMed Central

    Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang

    2016-01-01

    Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease. PMID:27977767

  4. Identification of Alfalfa Leaf Diseases Using Image Recognition Technology.

    PubMed

    Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang

    2016-01-01

    Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease.
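
    The final classification stage can be sketched with scikit-learn as below: rank the 129 lesion features, keep the top 45, and train an SVM. Mutual information is used here only as a stand-in for ReliefF (which scikit-learn does not provide), and the SVM hyperparameters are assumptions.

        # Feature selection + SVM sketch for the lesion features described above.
        import numpy as np
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def train_disease_classifier(X_train: np.ndarray, y_train: np.ndarray, k: int = 45):
            model = make_pipeline(
                StandardScaler(),
                SelectKBest(score_func=mutual_info_classif, k=k),   # stand-in for ReliefF ranking
                SVC(kernel="rbf", C=1.0, gamma="scale"),
            )
            return model.fit(X_train, y_train)

        # accuracy = train_disease_classifier(X_train, y_train).score(X_test, y_test)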

  5. Technical report on semiautomatic segmentation using the Adobe Photoshop.

    PubMed

    Park, Jin Seo; Chung, Min Suk; Hwang, Sung Bae; Lee, Yong Sook; Har, Dong-Hwan

    2005-12-01

    The purpose of this research is to enable users to semiautomatically segment the anatomical structures in magnetic resonance images (MRIs), computerized tomographs (CTs), and other medical images on a personal computer. The segmented images are used for making 3D images, which are helpful to medical education and research. To achieve this purpose, the following trials were performed. The entire body of a volunteer was scanned to make 557 MRIs. On Adobe Photoshop, contours of 19 anatomical structures in the MRIs were semiautomatically drawn using MAGNETIC LASSO TOOL and manually corrected using either LASSO TOOL or DIRECT SELECTION TOOL to make 557 segmented images. In a similar manner, 13 anatomical structures in 8,590 anatomical images were segmented. Proper segmentation was verified by making 3D images from the segmented images. Semiautomatic segmentation using Adobe Photoshop is expected to be widely used for segmentation of anatomical structures in various medical images.

  6. Ocular Alterations in a Rare Case of Segmental Neurofibromatosis Type 1 with a Non-Classified Mutational Variant of the NF-1 Gene.

    PubMed

    Abdolrahimzadeh, Solmaz; Piraino, Domenica Carmen; Plateroti, Rocco; Scuderi, Gianluca; Recupero, Santi Maria

    2016-06-01

    Neurofibromatosis type 1 (NF-1) is an autosomal dominant disorder which can occasionally result from somatic mosaicism and manifest as segmental forms of the disease. A 37-year-old woman with ascertained NF-1, based on clinical diagnostic criteria and genetic analysis, was referred for ophthalmological evaluation. Genetic analysis, magnetic resonance imaging (MRI), complete ophthalmological examination, and near infrared reflectance (NIR) images at 815 nm of the retina were obtained. Genetic analysis revealed a non-classified mutational variant of the NF-1 gene identified as NM_000267.3:c2084T > C (p.Leu695Pro.T). MRI demonstrated non-symptomatic bilateral optic nerve gliomas. The only cutaneous sign was a subcutaneous neurofibroma of the posterior cervical region. Slit-lamp examination showed bilateral Lisch nodules. NIR images of the retina did not show any choroidal hamartomas. We report a rare case of segmental neurofibromatosis with a non-classified mutational variant of the NF-1 gene described in only one previous case in the literature. The patient presented with clinical features of NF-1 localized to the head and neck region, compatible with a diagnosis of segmental NF-1. Interestingly, ocular manifestations included bilateral optic nerve gliomas and Lisch nodules, but no choroidal hamartomas.

  7. Automated vessel shadow segmentation of fovea-centered spectral-domain images from multiple OCT devices

    NASA Astrophysics Data System (ADS)

    Wu, Jing; Gerendas, Bianca S.; Waldstein, Sebastian M.; Simader, Christian; Schmidt-Erfurth, Ursula

    2014-03-01

    Spectral-domain Optical Coherence Tomography (SD-OCT) is a non-invasive modality for acquiring high resolution, three-dimensional (3D) cross-sectional volumetric images of the retina and the subretinal layers. SD-OCT also allows the detailed imaging of retinal pathology, aiding clinicians in the diagnosis of sight-degrading diseases such as age-related macular degeneration (AMD) and glaucoma. Disease diagnosis, assessment, and treatment require a patient to undergo multiple OCT scans, possibly using different scanning devices, to accurately and precisely gauge disease activity, progression and treatment success. However, the use of OCT imaging devices from different vendors, combined with patient movement, may result in poor scan spatial correlation, potentially leading to incorrect patient diagnosis or treatment analysis. Image registration can be used to precisely compare disease states by registering differing 3D scans to one another. In order to align 3D scans from different time points and vendors using registration, landmarks are required, the most obvious being the retinal vasculature. Presented here is a fully automated cross-vendor method to acquire retinal vessel locations for OCT registration from fovea-centred 3D SD-OCT scans based on vessel shadows. Noise-filtered OCT scans are flattened based on the vendor retinal layer segmentation to extract the retinal pigment epithelium (RPE) layer of the retina. Voxel-based layer profile analysis and k-means clustering are used to extract candidate vessel shadow regions from the RPE layer. In conjunction, the extracted RPE layers are combined to generate a projection image featuring all candidate vessel shadows. Image processing methods for vessel segmentation of the OCT-constructed projection image are then applied to optimize the accuracy of OCT vessel shadow segmentation through the removal of false-positive shadow regions such as those caused by exudates and cysts. Validation of segmented vessel shadows uses ground-truth vessel shadow regions identified by expert graders at the Vienna Reading Center (VRC). The results presented here are intended to show the feasibility of this method for the accurate and precise extraction of suitable retinal vessel shadows from multiple-vendor 3D SD-OCT scans for use in intra-vendor and cross-vendor 3D OCT registration, 2D fundus registration and actual retinal vessel segmentation. The resulting percentage of true vessel shadow segments to false-positive segments identified by the proposed system compared to mean grader ground truth is 95%.

  8. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, M; Woo, B; Kim, J

    Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (a deformable model and the grow cut method) were used by two independent observers to segment contrast enhancement, necrosis and edema regions. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate reproducibility. Results: Inter-observer correlations and coefficients of variation of imaging features with the deformable model ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively, while those with the grow cut method ranged from 0.799 to 0.976 and 3.5% to 26.6%, respectively. Coefficients of variation for features previously reported as predictive of patient survival were: 3.4% with the deformable model and 7.4% with the grow cut method for the proportion of contrast-enhanced tumor region; 5.5% with the deformable model and 25.7% with the grow cut method for the proportion of necrosis; and 2.1% with the deformable model and 4.4% with the grow cut method for edge sharpness of the tumor on CE-T1WI. Conclusion: Comparison of the two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric brain MRI.
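
    The reproducibility statistics can be computed as sketched below for one imaging feature measured by both observers; the exact COV convention used in the study is not stated, so a simple per-case definition is assumed here.

        # Inter-observer Pearson correlation and coefficient of variation for one feature.
        import numpy as np
        from scipy.stats import pearsonr

        def interobserver_stats(obs1: np.ndarray, obs2: np.ndarray):
            r, _ = pearsonr(obs1, obs2)                        # inter-observer correlation
            diffs = np.abs(obs1 - obs2)
            means = (obs1 + obs2) / 2.0
            cov_percent = 100.0 * np.mean(diffs / means)       # assumed per-case COV convention
            return r, cov_percent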

  9. Large-scale image region documentation for fully automated image biomarker algorithm development and evaluation.

    PubMed

    Reeves, Anthony P; Xie, Yiting; Liu, Shuang

    2017-04-01

    With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset.

  10. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collette, R.; King, J.; Keiser, Jr., D.

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, employed as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.
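
    A condensed sketch of the segmentation and measurement steps is shown below using scikit-image; the bilateral filter and Sauvola threshold parameters are assumptions, and voids are assumed to appear darker than the fuel matrix.

        # Fission gas void counting sketch on a 2-D grayscale micrograph (float array in [0, 1]).
        import numpy as np
        from skimage.filters import threshold_sauvola
        from skimage.measure import label, regionprops
        from skimage.restoration import denoise_bilateral

        def fission_gas_voids(micrograph: np.ndarray, window_size: int = 25, k: float = 0.2):
            smoothed = denoise_bilateral(micrograph, sigma_color=0.05, sigma_spatial=3)  # edge-preserving denoise
            voids = smoothed < threshold_sauvola(smoothed, window_size=window_size, k=k) # locally adaptive threshold
            labeled = label(voids)
            sizes = [r.area for r in regionprops(labeled)]
            porosity = voids.sum() / voids.size
            mean_size = float(np.mean(sizes)) if sizes else 0.0
            return len(sizes), mean_size, porosity              # void count, mean void size, average porosity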

  11. Partial volume segmentation in 3D of lesions and tissues in magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Johnston, Brian; Atkins, M. Stella; Booth, Kellogg S.

    1994-05-01

    An important first step in diagnosis and treatment planning using tomographic imaging is differentiating and quantifying diseased as well as healthy tissue. One of the difficulties encountered in solving this problem to date has been distinguishing the partial volume constituents of each voxel in the image volume. Most proposed solutions to this problem involve analysis of planar images, in sequence, in two dimensions only. We have extended a model-based method of image segmentation which applies the technique of iterated conditional modes in three dimensions. A minimum of user intervention is required to train the algorithm. Partial volume estimates for each voxel in the image are obtained, yielding fractional compositions of multiple tissue types for individual voxels. A multispectral approach is applied where spatially registered data sets are available. The algorithm is simple and has been parallelized using a dataflow programming environment to reduce the computational burden. The algorithm has been used to segment dual-echo MRI data sets of multiple sclerosis patients using lesions, gray matter, white matter, and cerebrospinal fluid as the partial volume constituents. The results of applying the algorithm to these datasets are presented and compared to the manual lesion segmentation of the same data.

  12. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE PAGES

    Collette, R.; King, J.; Keiser, Jr., D.; ...

    2016-06-08

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, employed as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.

  13. Improving image segmentation performance and quantitative analysis via a computer-aided grading methodology for optical coherence tomography retinal image analysis.

    PubMed

    Debuc, Delia Cabrera; Salinas, Harry M; Ranganathan, Sudarshan; Tátrai, Erika; Gao, Wei; Shen, Meixiao; Wang, Jianhua; Somfai, Gábor M; Puliafito, Carmen A

    2010-01-01

    We demonstrate quantitative analysis and error correction of optical coherence tomography (OCT) retinal images by using a custom-built, computer-aided grading methodology. A total of 60 Stratus OCT (Carl Zeiss Meditec, Dublin, California) B-scans collected from ten normal healthy eyes are analyzed by two independent graders. The average retinal thickness per macular region is compared with the automated Stratus OCT results. Intergrader and intragrader reproducibility is calculated by Bland-Altman plots of the mean difference between both gradings and by Pearson correlation coefficients. In addition, the correlation between Stratus OCT and our methodology-derived thickness is also presented. The mean thickness difference between Stratus OCT and our methodology is 6.53 microm and 26.71 microm when using the inner segment/outer segment (IS/OS) junction and outer segment/retinal pigment epithelium (OS/RPE) junction as the outer retinal border, respectively. Overall, the median of the thickness differences as a percentage of the mean thickness is less than 1% and 2% for the intragrader and intergrader reproducibility test, respectively. The measurement accuracy range of the OCT retinal image analysis (OCTRIMA) algorithm is between 0.27 and 1.47 microm and 0.6 and 1.76 microm for the intragrader and intergrader reproducibility tests, respectively. Pearson correlation coefficients demonstrate R(2)>0.98 for all Early Treatment Diabetic Retinopathy Study (ETDRS) regions. Our methodology facilitates a more robust and localized quantification of the retinal structure in normal healthy controls and patients with clinically significant intraretinal features.

  14. Improving image segmentation performance and quantitative analysis via a computer-aided grading methodology for optical coherence tomography retinal image analysis

    NASA Astrophysics Data System (ADS)

    Cabrera Debuc, Delia; Salinas, Harry M.; Ranganathan, Sudarshan; Tátrai, Erika; Gao, Wei; Shen, Meixiao; Wang, Jianhua; Somfai, Gábor M.; Puliafito, Carmen A.

    2010-07-01

    We demonstrate quantitative analysis and error correction of optical coherence tomography (OCT) retinal images by using a custom-built, computer-aided grading methodology. A total of 60 Stratus OCT (Carl Zeiss Meditec, Dublin, California) B-scans collected from ten normal healthy eyes are analyzed by two independent graders. The average retinal thickness per macular region is compared with the automated Stratus OCT results. Intergrader and intragrader reproducibility is calculated by Bland-Altman plots of the mean difference between both gradings and by Pearson correlation coefficients. In addition, the correlation between Stratus OCT and our methodology-derived thickness is also presented. The mean thickness difference between Stratus OCT and our methodology is 6.53 μm and 26.71 μm when using the inner segment/outer segment (IS/OS) junction and outer segment/retinal pigment epithelium (OS/RPE) junction as the outer retinal border, respectively. Overall, the median of the thickness differences as a percentage of the mean thickness is less than 1% and 2% for the intragrader and intergrader reproducibility test, respectively. The measurement accuracy range of the OCT retinal image analysis (OCTRIMA) algorithm is between 0.27 and 1.47 μm and 0.6 and 1.76 μm for the intragrader and intergrader reproducibility tests, respectively. Pearson correlation coefficients demonstrate R2>0.98 for all Early Treatment Diabetic Retinopathy Study (ETDRS) regions. Our methodology facilitates a more robust and localized quantification of the retinal structure in normal healthy controls and patients with clinically significant intraretinal features.
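
    The agreement statistics reported above can be reproduced as sketched below, assuming two arrays of regional thickness values from the two graders (or from Stratus OCT and OCTRIMA).

        # Bland-Altman bias, 95% limits of agreement, and Pearson R^2 between two gradings.
        import numpy as np
        from scipy.stats import pearsonr

        def grading_agreement(grader1: np.ndarray, grader2: np.ndarray):
            diffs = grader1 - grader2
            bias = diffs.mean()                                  # mean thickness difference
            sd = diffs.std(ddof=1)
            limits = (bias - 1.96 * sd, bias + 1.96 * sd)        # 95% limits of agreement
            r, _ = pearsonr(grader1, grader2)
            return bias, limits, r ** 2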

  15. Automatic blood vessel based-liver segmentation using the portal phase abdominal CT

    NASA Astrophysics Data System (ADS)

    Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen

    2018-02-01

    Liver segmentation is the basis for computer-based planning of hepatic surgical interventions. Automatic segmentation of the liver is highly important for the diagnosis and analysis of hepatic diseases and for surgery planning. Blood vessels (BVs) have shown high utility for liver segmentation. In our previous work, we developed a semi-automatic method that segments the liver from portal-phase abdominal CT images in two stages. The first stage was interactive segmentation of abdominal blood vessels (ABVs) and their subsequent classification into hepatic (HBVs) and non-hepatic (non-HBVs). This stage required five interactions: a selective threshold for bone segmentation, two seed points for kidney segmentation, selection of the inferior vena cava (IVC) entrance for starting ABV segmentation, and identification of the portal vein (PV) entrance to the liver and of the IVC exit for classifying HBVs from the other ABVs (non-HBVs). The second stage is automatic segmentation of the liver based on the segmented ABVs, as described in [4]. Toward full automation of our method, we developed a method [5] that segments ABVs automatically, addressing the first three interactions. In this paper, we propose full automation of the classification of ABVs into HBVs and non-HBVs and, consequently, full automation of the liver segmentation proposed in [4]. Results illustrate that the method is effective at segmenting the liver from portal-phase abdominal CT images.

  16. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    PubMed

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.

  17. Morphological image analysis for classification of gastrointestinal tissues using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Garcia-Allende, P. Beatriz; Amygdalos, Iakovos; Dhanapala, Hiruni; Goldin, Robert D.; Hanna, George B.; Elson, Daniel S.

    2012-01-01

    Computer-aided diagnosis of ophthalmic diseases using optical coherence tomography (OCT) relies on the extraction of thickness and size measures from the OCT images, but such well-defined layers are usually not observed in emerging OCT applications aimed at "optical biopsy", such as pulmonology or gastroenterology. Mathematical methods such as Principal Component Analysis (PCA); textural analyses, including spatial textural analysis derived from the two-dimensional discrete Fourier transform (DFT) and statistical texture analysis obtained independently from center-symmetric auto-correlation (CSAC) and spatial grey-level dependency matrices (SGLDM); as well as quantitative measurements of the attenuation coefficient have previously been proposed to overcome this problem. We recently proposed an alternative approach consisting of region segmentation according to the intensity variation along the vertical axis and a purely statistical technique for feature quantification. OCT images were first segmented in the axial direction in an automated manner according to intensity. Afterwards, a morphological analysis of the segmented OCT images was employed to quantify the features that served for tissue classification. In this study, PCA processing of the extracted features is performed to combine their discriminative power in a lower number of dimensions. Ready discrimination of gastrointestinal surgical specimens is attained, demonstrating that the approach surpasses the algorithms previously reported and is feasible for tissue classification in the clinical setting.
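
    The feature-combination step can be sketched with scikit-learn as below: standardize the morphological features extracted from the segmented OCT regions and project them onto a few principal components before classification. The number of components kept is an assumption.

        # PCA-based combination of morphological features (rows: tissue samples, columns: features).
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def combine_features(X, n_components: int = 5):
            reducer = make_pipeline(StandardScaler(), PCA(n_components=n_components))
            return reducer.fit_transform(X)                      # lower-dimensional discriminative scores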

  18. Flexible methods for segmentation evaluation: Results from CT-based luggage screening

    PubMed Central

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2017-01-01

    BACKGROUND Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms’ behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. OBJECTIVE To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. METHODS We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. RESULTS Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. CONCLUSIONS Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms. PMID:24699346

  19. Automatic detection of hemorrhagic pericardial effusion on PMCT using deep learning - a feasibility study.

    PubMed

    Ebert, Lars C; Heimer, Jakob; Schweitzer, Wolf; Sieberth, Till; Leipner, Anja; Thali, Michael; Ampanozi, Garyfalia

    2017-12-01

    Post mortem computed tomography (PMCT) can be used as a triage tool to better identify cases with a possibly non-natural cause of death, especially when high caseloads make it impossible to perform autopsies on all cases. Substantial data can be generated by modern medical scanners, especially in a forensic setting where the entire body is documented at high resolution. A solution to the resulting issues could be the use of deep learning techniques for the automatic analysis of radiological images. In this article, we test the feasibility of such methods for forensic imaging by hypothesizing that deep learning methods can detect and segment a hemopericardium in PMCT. As deep learning image analysis software, we used ViDi Suite 2.0. We retrospectively selected 28 cases with, and 24 cases without, hemopericardium. Based on these data, we trained two separate deep learning networks. The first one classified images into hemopericardium/not hemopericardium, and the second one segmented the blood content. We randomly selected 50% of the data for training and 50% for validation. This process was repeated 20 times. The best-performing classification network classified all cases of hemopericardium in the validation images correctly, with only a few false positives. The best-performing segmentation network tended to underestimate the amount of blood in the pericardium, as did most networks. This is the first study to show that deep learning has potential for automated image analysis of radiological images in forensic medicine.

  20. Review methods for image segmentation from computed tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik

    Image segmentation is a challenging process in terms of accuracy, automation and robustness, especially for medical images. There exist many segmentation methods that can be applied to medical images, but not all methods are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume in order to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as image blurring and visual noise. The details of the methods, their strengths and the problems incurred in the methods are defined and explained. It is necessary to know the suitable segmentation method in order to obtain accurate segmentation. This paper can serve as a guide for researchers in choosing a suitable segmentation method, especially for segmenting images from CT scans.

  1. Fuzzy pulmonary vessel segmentation in contrast enhanced CT data

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Kiraly, Atilla P.; Bakai, Annemarie; Das, Marco; Novak, Carol L.; Aach, Til

    2008-03-01

    Pulmonary vascular tree segmentation has numerous applications in medical imaging and computer-aided diagnosis (CAD), including detection and visualization of pulmonary emboli (PE), improved lung nodule detection, and quantitative vessel analysis. We present a novel approach to pulmonary vessel segmentation based on a fuzzy segmentation concept, combining the strengths of both threshold and seed point based methods. The lungs of the original image are first segmented and a threshold-based approach identifies core vessel components with a high specificity. These components are then used to automatically identify reliable seed points for a fuzzy seed point based segmentation method, namely fuzzy connectedness. The output of the method consists of the probability of each voxel belonging to the vascular tree. Hence, our method provides the possibility to adjust the sensitivity/specificity of the segmentation result a posteriori according to application-specific requirements, through definition of a minimum vessel-probability required to classify a voxel as belonging to the vascular tree. The method has been evaluated on contrast-enhanced thoracic CT scans from clinical PE cases and demonstrates overall promising results. For quantitative validation we compare the segmentation results to randomly selected, semi-automatically segmented sub-volumes and present the resulting receiver operating characteristic (ROC) curves. Although we focus on contrast enhanced chest CT data, the method can be generalized to other regions of the body as well as to different imaging modalities.

  2. Segmentation Fusion Techniques with Application to Plenoptic Images: A Survey.

    NASA Astrophysics Data System (ADS)

    Evin, D.; Hadad, A.; Solano, A.; Drozdowicz, B.

    2016-04-01

    The segmentation of anatomical and pathological structures plays a key role in the characterization of clinically relevant evidence from digital images. Recently, plenoptic imaging has emerged as a new promise to enrich the diagnostic potential of conventional photography. Since a plenoptic image comprises a set of slightly different versions of the target scene, we propose to make use of those images to improve segmentation quality relative to the single-image segmentation scenario. The problem of finding a segmentation solution from multiple images of a single scene is called segmentation fusion. This paper reviews the issue of segmentation fusion in order to find solutions that can be applied to plenoptic images, particularly images from the ophthalmological domain.

  3. An improved approach for the segmentation of starch granules in microscopic images

    PubMed Central

    2010-01-01

    Background Starches are the main storage polysaccharides in plants and are distributed widely throughout plants, including seeds, roots, tubers, leaves, stems and so on. Currently, microscopic observation is one of the most important ways to investigate and analyze the structure of starches. The position, shape, and size of the starch granules are the main measurements for quantitative analysis. In order to obtain these measurements, segmentation of starch granules from the background is very important. However, automatic segmentation of starch granules is still a challenging task because of the limitations of imaging conditions and the complex scenarios of overlapping granules. Results We propose a novel method to segment starch granules in microscopic images. In the proposed method, we first separate starch granules from the background using automatic thresholding and then roughly segment the image using the watershed algorithm. In order to reduce the oversegmentation of the watershed algorithm, we use the roundness of each segment and analyze the gradient vector field to find the critical points so as to identify oversegments. After oversegments are found, we extract features, such as the position and intensity of the oversegments, and use fuzzy c-means clustering to merge the oversegments into objects with similar features. Experimental results demonstrate that the proposed method can successfully alleviate the oversegmentation of the watershed segmentation algorithm. Conclusions We present a new scheme for starch granule segmentation. The proposed scheme aims to alleviate the oversegmentation of the watershed algorithm. We use shape information and critical points of the gradient vector flow (GVF) of starch granules to identify oversegments, and use fuzzy c-means clustering based on prior knowledge to merge these oversegments into objects. Experimental results on twenty microscopic starch images demonstrate the effectiveness of the proposed scheme. PMID:21047380
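
    The first two stages (thresholding plus a distance-transform watershed) and the roundness cue can be sketched as below with scikit-image; bright granules on a dark background and the peak-detection parameters are assumptions, and the GVF-based merging is not shown.

        # Rough granule segmentation: Otsu threshold, marker-based watershed, and per-segment roundness.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.filters import threshold_otsu
        from skimage.measure import regionprops
        from skimage.segmentation import watershed

        def rough_granule_segmentation(gray: np.ndarray):
            granules = gray > threshold_otsu(gray)                       # assumes bright granules
            distance = ndi.distance_transform_edt(granules)
            peaks = peak_local_max(distance, labels=granules, min_distance=7)
            markers = np.zeros(distance.shape, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            labels = watershed(-distance, markers, mask=granules)
            # Roundness = 4*pi*area/perimeter^2; low values flag likely oversegments to be merged.
            roundness = {r.label: 4 * np.pi * r.area / (r.perimeter ** 2 + 1e-9) for r in regionprops(labels)}
            return labels, roundness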

  4. SU-F-I-11: Software Development for 4D-CBCT Research of Real-Time-Image Gated Spot Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujii, T; Fujii, Y; Shimizu, S

    Purpose: To acquire correct information about the inside of the body for patient positioning in Real-time-image Gated spot scanning Proton Therapy (RGPT), the use of tomographic images at the exhale phase of patient respiration, obtained from 4-dimensional cone-beam CT (4D-CBCT), has been desired. We developed software named "Image Analysis Platform" for 4D-CBCT research, which includes a technique to segment projection images based on the 3D marker position in the body. The 3D marker position can be obtained by using the two-axis CBCT system at Hokkaido University Hospital Proton Therapy Center. Performance verification of the software was carried out. Methods: The software calculates the 3D marker position retrospectively by using matching positions on paired projection images obtained with the two-axis fluoroscopy mode of the CBCT system. Log data of the 3D marker tracking are output after tracking. By linking the log data with the gantry-angle file of the projection images, all projection images are equally segmented into five spatial phases according to the marker's 3D position in the SI direction and saved to the specified phase folder. The segmented projection images are used for CBCT reconstruction of each phase. As performance verification of the software, the segmentation of projection images was tested on a sample CT phantom (Catphan) image set acquired with the two-axis fluoroscopy mode of the CBCT. A dummy marker was added to the images, and its motion was modeled in 3D space as a sin⁴ waveform with amplitudes of 10.0 mm/5.0 mm/0 mm and cycles of 4 s/4 s/0 s in the SI/AP/RL directions. Results: The marker was tracked within 0.58 mm accuracy in 3D for all images, and it was confirmed that all projection images were segmented and saved to the correct phase folders. Conclusion: We developed software for 4D-CBCT research that can segment projection images based on the 3D marker position. It will be helpful for creating high-quality 4D-CBCT reconstruction images for RGPT.
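
    The spatial phase binning described above can be sketched as follows, assuming equal-width bins of the tracked marker's SI position with one value per projection image.

        # Assign each projection to one of five spatial phases based on marker SI position.
        import numpy as np

        def assign_spatial_phases(si_positions: np.ndarray, n_phases: int = 5) -> np.ndarray:
            edges = np.linspace(si_positions.min(), si_positions.max(), n_phases + 1)
            # Use only the inner edges; values equal to the maximum fall into the last bin.
            return np.clip(np.digitize(si_positions, edges[1:-1], right=True), 0, n_phases - 1)

        # Each projection image would then be copied to the folder of its phase index
        # before running a per-phase CBCT reconstruction.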

  5. Texture segmentation of non-cooperative spacecrafts images based on wavelet and fractal dimension

    NASA Astrophysics Data System (ADS)

    Wu, Kanzhi; Yue, Xiaokui

    2011-06-01

    With the increase of on-orbit manipulations and space conflicts, missions such as tracking and capturing target spacecraft have arisen. Unlike with cooperative spacecraft, fixing beacons or any other marks on the targets is impossible. Because the shape and geometric features of a non-cooperative spacecraft are unknown, in order to localize the target and obtain its attitude we need to segment the target image and recognize the target from the background; this also reduces the data volume and errors in subsequent procedures such as feature extraction and matching. The multi-resolution analysis of wavelet theory reflects human recognition of images from low resolution to high resolution. In addition, the spacecraft is the only man-made object in the image compared to the natural background, so differences will certainly be observed between the fractal dimensions of the target and the background. Combining the wavelet transform and fractal dimension, in this paper we propose a new segmentation algorithm for images that contain complicated backgrounds such as the universe and planet surfaces. First, a Daubechies wavelet basis is applied to decompose the image along both the x axis and the y axis, yielding four sub-images. Then, the fractal dimensions of the four sub-images are calculated using different methods; after analyzing the results, we choose differential box counting on the low-resolution sub-image as the principal measure for segmenting the texture, since it shows the greatest divergence between the different sub-images. This paper also presents experimental results obtained with the above algorithm. It is demonstrated that an accurate texture segmentation result can be obtained using the proposed technique.
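
    A compact sketch of the standard differential box-counting (DBC) fractal dimension estimator mentioned above, written with NumPy. Border handling is simplified (the image is cropped to a multiple of the box size), and the box sizes are illustrative; this is an illustration of the estimator, not the authors' exact implementation.

    ```python
    import numpy as np

    def dbc_fractal_dimension(gray, box_sizes=(2, 4, 8, 16, 32)):
        img = np.asarray(gray, dtype=float)
        G = img.max() - img.min() + 1.0            # gray-level range
        M = min(img.shape)
        counts = []
        for s in box_sizes:
            m = (M // s) * s
            patch = img[:m, :m].reshape(m // s, s, m // s, s)
            pmax = patch.max(axis=(1, 3))
            pmin = patch.min(axis=(1, 3))
            h = G * s / M                          # box height at this scale
            n_r = np.floor(pmax / h) - np.floor(pmin / h) + 1
            counts.append(n_r.sum())
        # the fractal dimension is the slope of log(N_r) versus log(1/r), r = s / M
        r = np.asarray(box_sizes, dtype=float) / M
        slope, _ = np.polyfit(np.log(1.0 / r), np.log(counts), 1)
        return slope

    # the preceding one-level Daubechies decomposition could use, for example:
    # import pywt; cA, (cH, cV, cD) = pywt.dwt2(img, "db2")
    ```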

  6. A geometric level set model for ultrasounds analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarti, A.; Malladi, R.

    We propose a partial differential equation (PDE) for filtering and segmentation of echocardiographic images based on a geometric-driven scheme. The method allows edge-preserving image smoothing and a semi-automatic segmentation of the heart chambers that regularizes the shapes and improves edge fidelity, especially in the presence of distinct gaps in the edge map, as is common in ultrasound imagery. A numerical scheme for solving the proposed PDE is borrowed from level set methods. Results on human in vivo acquired 2D, 2D+time, 3D, and 3D+time echocardiographic images are shown.

  7. MIGS-GPU: Microarray Image Gridding and Segmentation on the GPU.

    PubMed

    Katsigiannis, Stamos; Zacharia, Eleni; Maroulis, Dimitris

    2017-05-01

    Complementary DNA (cDNA) microarray is a powerful tool for simultaneously studying the expression level of thousands of genes. Nevertheless, the analysis of microarray images remains an arduous and challenging task due to the poor quality of the images that often suffer from noise, artifacts, and uneven background. In this study, the MIGS-GPU [Microarray Image Gridding and Segmentation on Graphics Processing Unit (GPU)] software for gridding and segmenting microarray images is presented. MIGS-GPU's computations are performed on the GPU by means of the compute unified device architecture (CUDA) in order to achieve fast performance and increase the utilization of available system resources. Evaluation on both real and synthetic cDNA microarray images showed that MIGS-GPU provides better performance than state-of-the-art alternatives, while the proposed GPU implementation achieves significantly lower computational times compared to the respective CPU approaches. Consequently, MIGS-GPU can be an advantageous and useful tool for biomedical laboratories, offering a user-friendly interface that requires minimum input in order to run.

  8. In vivo validation of cardiac output assessment in non-standard 3D echocardiographic images

    NASA Astrophysics Data System (ADS)

    Nillesen, M. M.; Lopata, R. G. P.; de Boode, W. P.; Gerrits, I. H.; Huisman, H. J.; Thijssen, J. M.; Kapusta, L.; de Korte, C. L.

    2009-04-01

    Automatic segmentation of the endocardial surface in three-dimensional (3D) echocardiographic images is an important tool to assess left ventricular (LV) geometry and cardiac output (CO). The presence of speckle noise as well as the nonisotropic characteristics of the myocardium impose strong demands on the segmentation algorithm. In the analysis of normal heart geometries of standardized (apical) views, it is advantageous to incorporate a priori knowledge about the shape and appearance of the heart. In contrast, when analyzing abnormal heart geometries, for example in children with congenital malformations, this a priori knowledge about the shape and anatomy of the LV might induce erroneous segmentation results. This study describes a fully automated segmentation method for the analysis of non-standard echocardiographic images, without making strong assumptions on the shape and appearance of the heart. The method was validated in vivo in a piglet model. Real-time 3D echocardiographic image sequences of five piglets were acquired in radiofrequency (rf) format. These ECG-gated full volume images were acquired intra-operatively in a non-standard view. Cardiac blood flow was measured simultaneously by an ultrasound transit time flow probe positioned around the common pulmonary artery. Three-dimensional adaptive filtering using the characteristics of speckle was performed on the demodulated rf data to reduce the influence of speckle noise and to optimize the distinction between blood and myocardium. A gradient-based 3D deformable simplex mesh was then used to segment the endocardial surface. A gradient and a speed force were included as external forces of the model. To balance data fitting and mesh regularity, one fixed set of weighting parameters of internal, gradient and speed forces was used for all data sets. End-diastolic and end-systolic volumes were computed from the segmented endocardial surface. The cardiac output derived from this automatic segmentation was validated quantitatively by comparing it with the CO values measured from the volume flow in the pulmonary artery. Relative bias varied between 0 and -17%, where the nominal accuracy of the flow meter is in the order of 10%. Assuming the CO measurements from the flow probe as a gold standard, excellent correlation (r = 0.99) was observed with the CO estimates obtained from image segmentation.

  9. A hybrid segmentation approach for geographic atrophy in fundus auto-fluorescence images for diagnosis of age-related macular degeneration.

    PubMed

    Lee, Noah; Laine, Andrew F; Smith, R Theodore

    2007-01-01

    Fundus auto-fluorescence (FAF) images with hypo-fluorescence indicate geographic atrophy (GA) of the retinal pigment epithelium (RPE) in age-related macular degeneration (AMD). Manual quantification of GA is time consuming and prone to inter- and intra-observer variability. Automatic quantification is important for determining disease progression and facilitating clinical diagnosis of AMD. In this paper we describe a hybrid segmentation method for GA quantification that identifies hypo-fluorescent GA regions and distinguishes them from other interfering retinal vessel structures. First, we employ background illumination correction exploiting a non-linear adaptive smoothing operator. Then, we use the level set framework to perform segmentation of hypo-fluorescent areas. Finally, we present an energy function combining morphological scale-space analysis with a geometric model-based approach to perform segmentation refinement of false-positive hypo-fluorescent areas due to interfering retinal structures. The clinically apparent areas of hypo-fluorescence were drawn by an expert grader and compared on a pixel-by-pixel basis to our segmentation results. The mean sensitivity and specificity of the ROC analysis were 0.89 and 0.98, respectively.

  10. Image Segmentation for Connectomics Using Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tasdizen, Tolga; Seyedhosseini, Mojtaba; Liu, Ting

    Reconstruction of neural circuits at the microscopic scale of individual neurons and synapses, also known as connectomics, is an important challenge for neuroscience. While an important motivation of connectomics is providing anatomical ground truth for neural circuit models, the ability to decipher neural wiring maps at the individual cell level is also important in studies of many neurodegenerative diseases. Reconstruction of a neural circuit at the individual neuron level requires the use of electron microscopy images due to their extremely high resolution. Computational challenges include pixel-by-pixel annotation of these images into classes such as cell membrane, mitochondria and synaptic vesicles, and the segmentation of individual neurons. State-of-the-art image analysis solutions are still far from the accuracy and robustness of human vision, and biologists are still limited to studying small neural circuits using mostly manual analysis. In this chapter, we describe our image analysis pipeline that makes use of novel supervised machine learning techniques to tackle this problem.

  11. High-Throughput Histopathological Image Analysis via Robust Cell Segmentation and Hashing

    PubMed Central

    Zhang, Xiaofan; Xing, Fuyong; Su, Hai; Yang, Lin; Zhang, Shaoting

    2015-01-01

    Computer-aided diagnosis of histopathological images usually requires examining all cells for accurate diagnosis. Traditional computational methods may have efficiency issues when performing cell-level analysis. In this paper, we propose a robust and scalable solution to enable such analysis in a real-time fashion. Specifically, a robust segmentation method is developed to delineate cells accurately using Gaussian-based hierarchical voting and a repulsive balloon model. A large-scale image retrieval approach is also designed to examine and classify each cell of a testing image by comparing it with a massive database, e.g., half a million cells extracted from the training dataset. We evaluate this proposed framework on a challenging and important clinical use case, i.e., differentiation of two types of lung cancers (adenocarcinoma and squamous carcinoma), using thousands of lung microscopic tissue images extracted from hundreds of patients. Our method has achieved promising accuracy and running time by searching among half a million cells. PMID:26599156

  12. Automatic tissue segmentation of breast biopsies imaged by QPI

    NASA Astrophysics Data System (ADS)

    Majeed, Hassaan; Nguyen, Tan; Kandel, Mikhail; Marcias, Virgilia; Do, Minh; Tangella, Krishnarao; Balla, Andre; Popescu, Gabriel

    2016-03-01

    The current tissue evaluation method for breast cancer would greatly benefit from higher throughput and less inter-observer variation. Since quantitative phase imaging (QPI) measures physical parameters of tissue, it can be used to find quantitative markers, eliminating observer subjectivity. Furthermore, since the pixel values in QPI remain the same regardless of the instrument used, classifiers can be built to segment various tissue components without the need for color calibration. In this work we use a texton-based approach to segment QPI images of breast tissue into various tissue components (epithelium, stroma, or lumen). A tissue microarray comprising 900 unstained cores from 400 different patients was imaged using Spatial Light Interference Microscopy. The training data were generated by manually segmenting the images for 36 cores and labelling each pixel (epithelium, stroma, or lumen). For each pixel in the data, a response vector was generated by the Leung-Malik (LM) filter bank, and these responses were clustered using the k-means algorithm to find the centers (called textons). A random forest classifier was then trained to find the relationship between a pixel's label and the histogram of these textons in that pixel's neighborhood. The segmentation was carried out on the validation set by calculating the texton histogram in a pixel's neighborhood and generating a label based on the model learnt during training. Segmentation of the tissue into various components is an important step toward efficiently computing parameters that are markers of disease. Automated segmentation, followed by diagnosis, can improve the accuracy and speed of analysis, leading to better health outcomes.
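
    A reduced sketch of the texton / random-forest workflow described above. A handful of Gabor kernels stands in for the full Leung-Malik bank, the sampling of labelled training pixels is simplified, and all function and parameter names are illustrative rather than the authors' implementation.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import gabor_kernel
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    def filter_responses(gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
        """Stack real Gabor responses into an (H, W, n_filters) array."""
        responses = [ndi.convolve(gray, np.real(gabor_kernel(0.1, theta=t))) for t in thetas]
        return np.stack(responses, axis=-1)

    def texton_histograms(texton_map, coords, n_textons, radius=10):
        """Histogram of texton ids in a (2*radius+1)^2 window around each pixel."""
        hists = []
        for r, c in coords:
            win = texton_map[max(r - radius, 0):r + radius + 1,
                             max(c - radius, 0):c + radius + 1]
            hists.append(np.bincount(win.ravel(), minlength=n_textons))
        return np.asarray(hists, dtype=float)

    def train(gray, label_image, n_textons=20, n_samples=2000):
        feats = filter_responses(gray)
        km = KMeans(n_clusters=n_textons, n_init=10).fit(feats.reshape(-1, feats.shape[-1]))
        texton_map = km.labels_.reshape(gray.shape)
        # sample labelled pixels (e.g. epithelium / stroma / lumen) for training
        rows, cols = np.nonzero(label_image > 0)
        idx = np.random.choice(len(rows), size=min(n_samples, len(rows)), replace=False)
        coords = list(zip(rows[idx], cols[idx]))
        X = texton_histograms(texton_map, coords, n_textons)
        y = label_image[rows[idx], cols[idx]]
        return km, RandomForestClassifier(n_estimators=100).fit(X, y)
    ```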

  13. Poster — Thur Eve — 59: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mallawi, A; Farrell, T; Diamond, K

    2014-08-15

    Automated atlas-based segmentation has recently been evaluated for use in planning prostate cancer radiotherapy. In the typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on the final segmentation accuracy. Prostate length (PL), right femoral head diameter (RFHD), and left femoral head diameter (LFHD) were measured in CT images of 20 patients. Each subject was then taken as the target image to which all remaining 19 images were affinely registered. For each pair of registered images, the overlap between prostate and femoral head contours was quantified using the Dice Similarity Coefficient (DSC). Finally, we designed an atlas selection strategy that computed the ratio of PL (prostate segmentation), RFHD (right femur segmentation), and LFHD (left femur segmentation) between the target subject and each subject in the atlas database. The five atlas subjects yielding ratios nearest to one were then selected for further analysis. RFHD and LFHD were excellent parameters for atlas selection, achieving a mean femoral head DSC of 0.82 ± 0.06. PL had a moderate ability to select the most similar prostate, with a mean DSC of 0.63 ± 0.18. The DSCs obtained with the proposed selection method were slightly lower than the maximums established using brute force, but this does not include potential improvements expected with deformable registration. Atlas selection based on PL for the prostate and femoral head diameter for the femoral heads provides reasonable segmentation accuracy.
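
    A hedged sketch of the two ingredients described above: the Dice similarity coefficient used for evaluation, and the ratio-based atlas selection that keeps the atlases whose measurement ratio to the target is closest to one. The measurement array names are placeholders for PL, RFHD, or LFHD.

    ```python
    import numpy as np

    def dice(a, b):
        """Dice Similarity Coefficient between two binary masks."""
        a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def select_atlases(target_measure, atlas_measures, n_select=5):
        """Return indices of the atlases whose measurement ratio is nearest to 1."""
        ratios = np.asarray(atlas_measures, dtype=float) / float(target_measure)
        order = np.argsort(np.abs(ratios - 1.0))
        return order[:n_select]
    ```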

  14. MIA-Clustering: a novel method for segmentation of paleontological material.

    PubMed

    Dunmore, Christopher J; Wollny, Gert; Skinner, Matthew M

    2018-01-01

    Paleontological research increasingly uses high-resolution micro-computed tomography (μCT) to study the inner architecture of modern and fossil bone material to answer important questions regarding vertebrate evolution. This non-destructive method allows for the measurement of otherwise inaccessible morphology. Digital measurement is predicated on the accurate segmentation of modern or fossilized bone from other structures imaged in μCT scans, as errors in segmentation can result in inaccurate calculations of structural parameters. Several approaches to image segmentation have been proposed with varying degrees of automation, ranging from completely manual segmentation, to the selection of input parameters required for computational algorithms. Many of these segmentation algorithms provide speed and reproducibility at the cost of flexibility that manual segmentation provides. In particular, the segmentation of modern and fossil bone in the presence of materials such as desiccated soft tissue, soil matrix or precipitated crystalline material can be difficult. Here we present a free open-source segmentation algorithm application capable of segmenting modern and fossil bone, which also reduces subjective user decisions to a minimum. We compare the effectiveness of this algorithm with another leading method by using both to measure the parameters of a known dimension reference object, as well as to segment an example problematic fossil scan. The results demonstrate that the medical image analysis-clustering method produces accurate segmentations and offers more flexibility than those of equivalent precision. Its free availability, flexibility to deal with non-bone inclusions and limited need for user input give it broad applicability in anthropological, anatomical, and paleontological contexts.

  15. Fast localization of optic disc and fovea in retinal images for eye disease screening

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Echegaray, S.; Pattichis, M.; Zamora, G.; Bauman, W.; Soliz, P.

    2011-03-01

    Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm developed for diabetic retinopathy (DR) screening. The optic disc localization methodology comprises two steps. First, the OD location is identified using template matching and a directional matched filter. To reduce false positives due to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is estimated as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic disc segmentation is performed. Based on the detected optic disc location, a fast hybrid level-set algorithm which combines region information and edge gradient to drive the curve evolution is used to segment the optic disc boundary. Extensive evaluation was performed on 1200 images (Messidor) composed of 540 images of healthy retinas, 431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. The OD localization methodology obtained a 98.3% success rate, while fovea localization achieved a 95% success rate. The average mean absolute distance (MAD) between the OD segmentation algorithm and the "gold standard" is 10.5% of the estimated OD radius. Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation algorithm performs well even on blurred images.
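
    A minimal sketch of the template-matching step for optic disc localization using skimage.feature.match_template. The choice of channel and the template itself (a bright, roughly circular OD patch) are assumptions; the directional matched filter and vessel-based false-positive rejection of the full method are not reproduced here.

    ```python
    import numpy as np
    from skimage.feature import match_template

    def locate_od(fundus_channel, od_template):
        """Return (row, col) of the strongest template response and the response map."""
        response = match_template(fundus_channel, od_template, pad_input=True)
        row, col = np.unravel_index(np.argmax(response), response.shape)
        return row, col, response
    ```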

  16. Retina Image Vessel Segmentation Using a Hybrid CGLI Level Set Method

    PubMed Central

    Chen, Meizhu; Li, Jichun; Zhang, Encai

    2017-01-01

    As a nonintrusive method, retina imaging provides a better way to diagnose ophthalmologic diseases. Extracting the vessel profile automatically from the retina image is an important step in analyzing retina images. A novel hybrid active contour model is proposed in this paper to segment the fundus image automatically. It combines the signed pressure force function introduced by the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) model with the local intensity property introduced by the Local Binary Fitting (LBF) model to overcome the difficulty of low contrast in the segmentation process. It is more robust to the initial condition than traditional methods and is easily implemented compared to supervised vessel extraction methods. The proposed segmentation method was evaluated on two public datasets, DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (Structured Analysis of the Retina), achieving an average accuracy of 0.9390 with 0.7358 sensitivity and 0.9680 specificity on the DRIVE dataset and an average accuracy of 0.9409 with 0.7449 sensitivity and 0.9690 specificity on the STARE dataset. The experimental results show that our method is effective and is also robust to some kinds of pathological images compared with traditional level set methods. PMID:28840122

  17. Semantic segmentation of mFISH images using convolutional networks.

    PubMed

    Pardo, Esteban; Morgado, José Mário T; Malpica, Norberto

    2018-04-30

    Multicolor in situ hybridization (mFISH) is a karyotyping technique used to detect major chromosomal alterations using fluorescent probes and imaging techniques. Manual interpretation of mFISH images is a time-consuming step that can be automated using machine learning; in previous works, pixel- or patch-wise classification was employed, overlooking spatial information which can help identify chromosomes. In this work, we propose a fully convolutional semantic segmentation network for the interpretation of mFISH images, which uses both spatial and spectral information to classify each pixel in an end-to-end fashion. The semantic segmentation network developed was tested on samples extracted from a public dataset using cross validation. Despite having no labeling information from the image it was tested on, our algorithm yielded an average correct classification ratio (CCR) of 87.41%. Previously, this level of accuracy was only achieved with state-of-the-art algorithms when classifying pixels from the same image on which the classifier had been trained. These results provide evidence that fully convolutional semantic segmentation networks may be employed in the computer-aided diagnosis of genetic diseases with improved performance over current image analysis methods. © 2018 International Society for Advancement of Cytometry.

  18. Colony image acquisition and segmentation

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2007-12-01

    For counting of both colonies and plaques, there is a large number of applications including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids, and fungal contamination. Recently, many researchers and developers have made efforts to develop this kind of system. Investigation shows that some existing systems have problems, mainly in image acquisition and image segmentation. In order to acquire colony images with good quality, an illumination box was constructed: the box includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, the lighting can be made uniform and the colony dish can be placed in the same position every time, which makes image processing easier. The developed colony image segmentation algorithm consists of three sub-algorithms: (1) image classification; (2) image processing; and (3) colony delineation. The colony delineation algorithm mainly contains procedures based on grey-level similarity, boundary tracing, shape information, and colony exclusion. In addition, a number of algorithms are developed for colony analysis. The system has been tested with satisfactory results.

  19. An interactive medical image segmentation framework using iterative refinement.

    PubMed

    Kalshetti, Pratik; Bundele, Manas; Rahangdale, Parag; Jangra, Dinesh; Chattopadhyay, Chiranjoy; Harit, Gaurav; Elhence, Abhay

    2017-04-01

    Segmentation is often performed on medical images for identifying diseases in clinical evaluation; hence it has become one of the major research areas. Conventional image segmentation techniques are unable to provide satisfactory segmentation results for medical images as they contain irregularities, so the images need to be pre-processed before segmentation. In order to obtain the most suitable method for medical image segmentation, we propose MIST (Medical Image Segmentation Tool), a two-stage algorithm. The first stage automatically generates a binary marker image of the region of interest using mathematical morphology. This marker serves as the mask image for the second stage, which uses GrabCut to yield an efficient segmented result. The obtained result can be further refined by user interaction, which can be done using the proposed Graphical User Interface (GUI). Experimental results show that the proposed method is accurate and provides satisfactory segmentation results with minimum user interaction on medical as well as natural images. Copyright © 2017 Elsevier Ltd. All rights reserved.
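
    An illustrative two-stage sketch following the description above: a rough binary marker is built with mathematical morphology (Otsu thresholding plus opening, as a stand-in for the paper's exact recipe) and then passed to OpenCV's GrabCut in mask-initialisation mode. Kernel size and iteration count are assumptions.

    ```python
    import cv2
    import numpy as np

    def mist_like_segmentation(bgr_image, n_iter=5):
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        # stage 1: morphological marker of the region of interest
        _, marker = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        marker = cv2.morphologyEx(marker, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

        # stage 2: GrabCut seeded from the marker mask
        mask = np.full(gray.shape, cv2.GC_PR_BGD, dtype=np.uint8)
        mask[marker > 0] = cv2.GC_PR_FGD
        mask[0, :] = mask[-1, :] = cv2.GC_BGD   # image border taken as definite background
        mask[:, 0] = mask[:, -1] = cv2.GC_BGD
        bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
        cv2.grabCut(bgr_image, mask, None, bgd, fgd, n_iter, cv2.GC_INIT_WITH_MASK)
        return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
    ```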

  20. An artifacts removal post-processing for epiphyseal region-of-interest (EROI) localization in automated bone age assessment (BAA)

    PubMed Central

    2011-01-01

    Background Segmentation is the most crucial part of computer-aided bone age assessment. A well-known type of segmentation performed in such systems is adaptive segmentation. While providing better results than the global thresholding method, adaptive segmentation produces a lot of unwanted noise that can affect the later process of epiphysis extraction. Methods A method with anisotropic diffusion as pre-processing and a novel Bounded Area Elimination (BAE) post-processing algorithm is proposed to improve the ossification site localization technique, with the intent of improving the adaptive segmentation result and the region-of-interest (ROI) localization accuracy. Results The results are evaluated by quantitative and qualitative analysis using texture feature evaluation. Image homogeneity after anisotropic diffusion improved by an average of 17.59% for each age group. Experiments showed that smoothness improved by an average of 35% after the BAE algorithm and that ROI localization improved by an average of 8.19%. The MSSIM improved by an average of 10.49% after performing the BAE algorithm on the adaptively segmented hand radiographs. Conclusions The results indicate that hand radiographs which have undergone anisotropic diffusion have greatly reduced noise in the segmented image, and that the proposed BAE algorithm is capable of removing the artifacts generated in adaptive segmentation. PMID:21952080
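
    A standard Perona-Malik anisotropic diffusion step, shown as a hedged illustration of the pre-processing stage discussed above; the paper's exact diffusion variant and parameters are not specified here, and the BAE post-processing itself is not reproduced. The kappa, gamma, and iteration values are illustrative.

    ```python
    import numpy as np

    def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.15):
        out = np.asarray(img, dtype=float).copy()
        for _ in range(n_iter):
            # nearest-neighbour differences (north, south, east, west)
            dn = np.roll(out, 1, axis=0) - out
            ds = np.roll(out, -1, axis=0) - out
            de = np.roll(out, -1, axis=1) - out
            dw = np.roll(out, 1, axis=1) - out
            # Perona-Malik conduction coefficient g(d) = exp(-(d/kappa)^2)
            out += gamma * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
        return out
    ```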

  1. Segmentation of stereo terrain images

    NASA Astrophysics Data System (ADS)

    George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.

    2000-06-01

    We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, SAB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.

  2. Kinetic magnetic resonance imaging analysis of abnormal segmental motion of the functional spine unit.

    PubMed

    Kong, Min Ho; Hymanson, Henry J; Song, Kwan Young; Chin, Dong Kyu; Cho, Yong Eun; Yoon, Do Heum; Wang, Jeffrey C

    2009-04-01

    The authors conducted a retrospective observational study using kinetic MR imaging to investigate the relationship between instability, abnormal sagittal segmental motion, and radiographic variables consisting of intervertebral disc degeneration, facet joint osteoarthritis (FJO), degeneration of the interspinous ligaments, ligamentum flavum hypertrophy (LFH), and the status of the paraspinal muscles. Abnormal segmental motion, defined as > 10 degrees angulation and > 3 mm of translation in the sagittal plane, was investigated in 1575 functional spine units (315 patients) in flexion, neutral, and extension postures using kinetic MR imaging. Each segment was assessed based on the extent of disc degeneration (Grades I-V), FJO (Grades 1-4), interspinous ligament degeneration (Grades 1-4), presence of LFH, and paraspinal muscle fatty infiltration observed on kinetic MR imaging. These factors are often noted in patients with degenerative disease, and there are grading systems to describe these changes. For the first time, the authors attempted to address the relationship between these radiographic observations and the effects on the motion and instability of the functional spine unit. The prevalence of abnormal translational motion was significantly higher in patients with Grade IV degenerative discs and Grade 3 arthritic facet joints (p < 0.05). In patients with advanced disc degeneration and FJO, there was a lesser amount of motion in both segmental translation and angulation when compared with lower grades of degeneration, and this difference was statistically significant for angular motion (p < 0.05). Patients with advanced degenerative Grade 4 facet joint arthritis had a significantly lower percentage of abnormal angular motion compared to patients with normal facet joints (p < 0.001). The presence of LFH was strongly associated with abnormal translational and angular motion. Grade 4 interspinous ligament degeneration and the presence of paraspinal muscle fatty infiltration were both significantly associated with excessive abnormal angular motion (p < 0.05). This kinetic MR imaging analysis showed that the lumbar functional unit with more disc degeneration, FJO, and LFH had abnormal sagittal plane translation and angulation. These findings suggest that abnormal segmental motion noted on kinetic MR images is closely associated with disc degeneration, FJO, and the pathological characteristics of interspinous ligaments, ligamentum flavum, and paraspinal muscles. Kinetic MR imaging in patients with mechanical back pain may prove a valuable source of information about the stability of the functional spine unit by measuring abnormal segmental motion and grading of radiographic parameters simultaneously.

  3. Segmentation of the ovine lung in 3D CT Images

    NASA Astrophysics Data System (ADS)

    Shi, Lijun; Hoffman, Eric A.; Reinhardt, Joseph M.

    2004-04-01

    Pulmonary CT images can provide detailed information about the regional structure and function of the respiratory system. Prior to any of these analyses, however, the lungs must be identified in the CT data sets. A popular animal model for understanding lung physiology and pathophysiology is the sheep. In this paper we describe a lung segmentation algorithm for CT images of sheep. The algorithm has two main steps. The first step is lung extraction, which identifies the lung region using a technique based on optimal thresholding and connected components analysis. The second step is lung separation, which separates the left lung from the right lung by identifying the central fissure using an anatomy-based method incorporating dynamic programming and a line filter algorithm. The lung segmentation algorithm has been validated by comparing our automatic method to manual analysis for five pulmonary CT datasets. The RMS error between the computer-defined and manually traced boundary is 0.96 mm. The segmentation requires approximately 10 minutes for a 512x512x400 dataset on a PC workstation (2.40 GHz CPU, 2.0 GB RAM), while it takes a human observer approximately two hours to accomplish the same task.
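
    A sketch of the lung-extraction step (thresholding followed by connected-component analysis) using scikit-image on a single CT slice. The Otsu threshold, size limits, and the rule of keeping the largest low-density components that do not touch the image border are simplified stand-ins for the full 3D method.

    ```python
    import numpy as np
    from skimage import filters, measure, morphology

    def extract_lungs_slice(ct_slice_hu, n_keep=2):
        # air and lung voxels are dark: keep everything below the automatic threshold
        mask = ct_slice_hu < filters.threshold_otsu(ct_slice_hu)
        mask = morphology.remove_small_objects(mask, min_size=500)
        labels = measure.label(mask)
        # discard components touching the border (outside air), keep the n_keep largest
        regions = [r for r in measure.regionprops(labels)
                   if r.bbox[0] > 0 and r.bbox[1] > 0
                   and r.bbox[2] < labels.shape[0] and r.bbox[3] < labels.shape[1]]
        regions = sorted(regions, key=lambda r: r.area, reverse=True)[:n_keep]
        lung = np.isin(labels, [r.label for r in regions])
        return morphology.binary_closing(lung, morphology.disk(5))
    ```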

  4. Image analysis of ocular fundus for retinopathy characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ushizima, Daniela; Cuadros, Jorge

    2010-02-05

    Automated analysis of ocular fundus images is a common procedure in countries such as England, including both nonemergency examination and retinal screening of patients with diabetes mellitus. This involves digital image capture and transmission of the images to a digital reading center for evaluation and treatment referral. In collaboration with the Optometry Department, University of California, Berkeley, we have tested computer vision algorithms to segment vessels and lesions in ground-truth data (DRIVE database) and in hundreds of images of non-macula-centric, nonuniformly illuminated views of the eye fundus from the EyePACS program. Methods under investigation involve mathematical morphology (Figure 1) for image enhancement and pattern matching. Recently, we have focused on more efficient techniques to model the ocular fundus vasculature (Figure 2) using deformable contours. Preliminary results show accurate segmentation of vessels and a high rate of true-positive microaneurysms.

  5. Asymmetry and irregularity border as discrimination factor between melanocytic lesions

    NASA Astrophysics Data System (ADS)

    Sbrissa, David; Pratavieira, Sebastião.; Salvio, Ana Gabriela; Kurachi, Cristina; Bagnato, Vanderlei Salvadori; Costa, Luciano Da Fontoura; Travieso, Gonzalo

    2015-06-01

    Image processing tools have been widely used in systems supporting medical diagnosis. The use of mobile devices for the diagnosis of melanoma can assist doctors and improve their diagnosis of melanocytic lesions. This study proposes an image analysis method for discriminating melanoma from other types of melanocytic lesions, such as regular and atypical nevi. The process is based on extracting features related to asymmetry and border irregularity. A total of 104 images were collected over two years from a medical database. The images were obtained with standard digital cameras without lighting or scale control. Metrics relating to the characteristics of shape, asymmetry, and curvature of the contour were extracted from the segmented images. Linear Discriminant Analysis was performed for dimensionality reduction and data visualization. Segmentation results showed good efficiency, with approximately 88.5% accuracy. Validation results show a sensitivity and specificity of 85% and 70% for melanoma detection, respectively.

  6. Microscopic image analysis for reticulocyte based on watershed algorithm

    NASA Astrophysics Data System (ADS)

    Wang, J. Q.; Liu, G. F.; Liu, J. G.; Wang, G.

    2007-12-01

    We present a watershed-based algorithm for the analysis of light microscopic images of reticulocytes (RET), which will be used in an automated recognition system for RETs in peripheral blood. The original images, obtained by micrography, are segmented by a modified watershed algorithm and are recognized in terms of gray entropy and the area of connected regions. In the watershed process, judgment conditions are controlled according to the character of the image; in addition, the segmentation is performed by morphological subtraction. The algorithm was simulated with MATLAB software. Automated and manual scoring give similar results, with good correlation (r = 0.956) between the methods, based on 50 RET images. The results indicate that the algorithm for peripheral blood RETs is comparable to conventional manual scoring and is superior in objectivity. The algorithm avoids time-consuming calculations such as ultra-erosion and region growing, which consequently speeds up the computation.

  7. Subcortical structure segmentation using probabilistic atlas priors

    NASA Astrophysics Data System (ADS)

    Gouttard, Sylvain; Styner, Martin; Joshi, Sarang; Smith, Rachel G.; Cody Hazlett, Heather; Gerig, Guido

    2007-03-01

    The segmentation of the subcortical structures of the brain is required for many forms of quantitative neuroanatomic analysis. The volumetric and shape parameters of structures such as the lateral ventricles, putamen, caudate, hippocampus, pallidus, and amygdala are employed to characterize a disease or its evolution. This paper presents a fully automatic segmentation of these structures via non-rigid registration of a probabilistic atlas prior, along with a comprehensive validation. Our approach is based on an unbiased diffeomorphic atlas with probabilistic spatial priors built from a training set of MR images with corresponding manual segmentations. The atlas building computes an average image along with transformation fields mapping each training case to the average image. These transformation fields are applied to the manually segmented structures of each case in order to obtain a probabilistic map on the atlas. When applying the atlas for automatic structural segmentation, an MR image is first intensity-inhomogeneity corrected, skull stripped, and intensity calibrated to the atlas. Then the atlas image is registered to the image using an affine followed by a deformable registration matching the gray-level intensity. Finally, the registration transformation is applied to the probabilistic maps of each structure, which are then thresholded at 0.5 probability. Using manual segmentations for comparison, measures of volumetric differences show high correlation with our results. Furthermore, the Dice coefficient, which quantifies the volumetric overlap, is higher than 62% for all structures and is close to 80% for the basal ganglia. The intraclass correlation coefficient computed on these same datasets shows good inter-method correlation of the volumetric measurements. Using a dataset of a single patient scanned 10 times on 5 different scanners, reliability is shown with a coefficient of variation of less than 2 percent over the whole dataset. Overall, these validation and reliability studies show that our method accurately and reliably segments almost all structures. Only the hippocampus and amygdala segmentations exhibit relatively low correlation with the manual segmentation in at least one of the validation studies, whereas they still show appropriate Dice overlap coefficients.

  8. Supervised learning based multimodal MRI brain tumour segmentation using texture features from supervoxels.

    PubMed

    Soltaninejad, Mohammadreza; Yang, Guang; Lambrou, Tryphon; Allinson, Nigel; Jones, Timothy L; Barrick, Thomas R; Howe, Franklyn A; Ye, Xujiong

    2018-04-01

    Accurate segmentation of brain tumour in magnetic resonance images (MRI) is a difficult task due to various tumour types. Using information and features from multimodal MRI including structural MRI and isotropic (p) and anisotropic (q) components derived from the diffusion tensor imaging (DTI) may result in a more accurate analysis of brain images. We propose a novel 3D supervoxel based learning method for segmentation of tumour in multimodal MRI brain images (conventional MRI and DTI). Supervoxels are generated using the information across the multimodal MRI dataset. For each supervoxel, a variety of features including histograms of texton descriptor, calculated using a set of Gabor filters with different sizes and orientations, and first order intensity statistical features are extracted. Those features are fed into a random forests (RF) classifier to classify each supervoxel into tumour core, oedema or healthy brain tissue. The method is evaluated on two datasets: 1) Our clinical dataset: 11 multimodal images of patients and 2) BRATS 2013 clinical dataset: 30 multimodal images. For our clinical dataset, the average detection sensitivity of tumour (including tumour core and oedema) using multimodal MRI is 86% with balanced error rate (BER) 7%; while the Dice score for automatic tumour segmentation against ground truth is 0.84. The corresponding results of the BRATS 2013 dataset are 96%, 2% and 0.89, respectively. The method demonstrates promising results in the segmentation of brain tumour. Adding features from multimodal MRI images can largely increase the segmentation accuracy. The method provides a close match to expert delineation across all tumour grades, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Improvement and Extension of Shape Evaluation Criteria in Multi-Scale Image Segmentation

    NASA Astrophysics Data System (ADS)

    Sakamoto, M.; Honda, Y.; Kondo, A.

    2016-06-01

    Over the last decade, multi-scale image segmentation has attracted particular interest and is practically used for object-based image analysis. In this study, we address issues in multi-scale image segmentation, especially in improving the validity of merging and the variety of derived region shapes. Firstly, we introduce constraints on the application of the spectral criterion which can suppress excessive merging between dissimilar regions. Secondly, we extend the evaluation of the smoothness criterion by modifying the definition of the extent of the object, which was introduced to control shape diversity. Thirdly, we develop a new shape criterion called aspect ratio. This criterion helps to improve the reproducibility of object shapes so that they match the actual objects of interest; it constrains the aspect ratio of the object's bounding box while keeping the properties controlled by the conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results according to the shape characteristics of the target of interest. Furthermore, we also investigate a technique for quantitative and automatic parameterization in multi-scale image segmentation. This approach is achieved by comparing the segmentation result with a training area specified in advance, considering the maximization of the average object area or the satisfaction of the evaluation index called the F-measure. Thus, it has been possible to automate parameterization suited to the objectives, especially from the viewpoint of shape reproducibility.
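
    A hedged sketch of the automatic parameterization idea above: segment at a series of scale parameters, compare each result against a training region, and keep the scale that maximizes the F-measure. The `segment_at_scale` callable is a hypothetical placeholder for whatever multi-scale segmentation routine is in use.

    ```python
    import numpy as np

    def f_measure(pred_mask, ref_mask, beta=1.0):
        pred, ref = np.asarray(pred_mask, bool), np.asarray(ref_mask, bool)
        tp = np.logical_and(pred, ref).sum()
        precision = tp / max(pred.sum(), 1)
        recall = tp / max(ref.sum(), 1)
        denom = beta ** 2 * precision + recall
        return (1 + beta ** 2) * precision * recall / denom if denom else 0.0

    def tune_scale(image, training_mask, scales, segment_at_scale):
        """Return the scale parameter whose segmentation best matches the training area."""
        scores = [f_measure(segment_at_scale(image, s), training_mask) for s in scales]
        return scales[int(np.argmax(scores))], scores
    ```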

  10. Central focused convolutional neural networks: Developing a data-driven model for lung nodule segmentation.

    PubMed

    Wang, Shuo; Zhou, Mu; Liu, Zaiyi; Liu, Zhenyu; Gu, Dongsheng; Zang, Yali; Dong, Di; Gevaert, Olivier; Tian, Jie

    2017-08-01

    Accurate lung nodule segmentation from computed tomography (CT) images is of great importance for image-driven lung cancer analysis. However, the heterogeneity of lung nodules and the presence of similar visual characteristics between nodules and their surroundings make robust nodule segmentation difficult. In this study, we propose a data-driven model, termed the Central Focused Convolutional Neural Network (CF-CNN), to segment lung nodules from heterogeneous CT images. Our approach combines two key insights: 1) the proposed model captures a diverse set of nodule-sensitive features from both 3-D and 2-D CT images simultaneously; 2) when classifying an image voxel, the effects of its neighbor voxels can vary according to their spatial locations. We describe this phenomenon by proposing a novel central pooling layer that retains much information around the voxel patch center, followed by a multi-scale patch learning strategy. Moreover, we design a weighted sampling scheme to facilitate model training, where training samples are selected according to their degree of segmentation difficulty. The proposed method has been extensively evaluated on the public LIDC dataset including 893 nodules and an independent dataset with 74 nodules from Guangdong General Hospital (GDGH). We showed that CF-CNN achieved superior segmentation performance with average dice scores of 82.15% and 80.02% for the two datasets, respectively. Moreover, we compared our results with the inter-radiologist consistency on the LIDC dataset, showing a difference in average dice score of only 1.98%. Copyright © 2017. Published by Elsevier B.V.

  11. Fully automatic left ventricular myocardial strain estimation in 2D short-axis tagged magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Morais, Pedro; Queirós, Sandro; Heyde, Brecht; Engvall, Jan; D'hooge, Jan; Vilaça, João L.

    2017-09-01

    Cardiovascular diseases are among the leading causes of death and frequently result in local myocardial dysfunction. Among the numerous imaging modalities available to detect these dysfunctional regions, cardiac deformation imaging through tagged magnetic resonance imaging (t-MRI) has been an attractive approach. Nevertheless, fully automatic analysis of these data sets is still challenging. In this work, we present a fully automatic framework to estimate left ventricular myocardial deformation from t-MRI. This strategy performs automatic myocardial segmentation based on B-spline explicit active surfaces, which are initialized using an annular model. A non-rigid image-registration technique is then used to assess myocardial deformation. Three experiments were set up to validate the proposed framework using a clinical database of 75 patients. First, automatic segmentation accuracy was evaluated by comparing against manual delineations at one specific cardiac phase. The proposed solution showed an average perpendicular distance error of 2.35  ±  1.21 mm and 2.27  ±  1.02 mm for the endo- and epicardium, respectively. Second, starting from either manual or automatic segmentation, myocardial tracking was performed and the resulting strain curves were compared. It is shown that the automatic segmentation adds negligible differences during the strain-estimation stage, corroborating its accuracy. Finally, segmental strain was compared with scar tissue extent determined by delay-enhanced MRI. The results proved that both strain components were able to distinguish between normal and infarct regions. Overall, the proposed framework was shown to be accurate, robust, and attractive for clinical practice, as it overcomes several limitations of a manual analysis.

  12. DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research.

    PubMed

    Fedorov, Andriy; Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R

    2016-01-01

    Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM(®)) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.

  13. SU-E-J-275: Review - Computerized PET/CT Image Analysis in the Evaluation of Tumor Response to Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, W; Wang, J; Zhang, H

    Purpose: To review the literature on using computerized PET/CT image analysis for the evaluation of tumor response to therapy. Methods: We reviewed and summarized more than 100 papers that used computerized image analysis techniques for the evaluation of tumor response with PET/CT. This review mainly covered four aspects: image registration, tumor segmentation, image feature extraction, and response evaluation. Results: Although rigid image registration is straightforward, it has been shown to achieve good alignment between baseline and evaluation scans. Deformable image registration has been shown to improve the alignment when complex deformable distortions occur due to tumor shrinkage, weight loss or gain, and motion. Many semi-automatic tumor segmentation methods have been developed on PET. A comparative study revealed benefits of high levels of user interaction with simultaneous visualization of CT images and PET gradients. On CT, semi-automatic methods have been developed only for tumors that show a marked difference in CT attenuation between the tumor and the surrounding normal tissues. Quite a few multi-modality segmentation methods have been shown to improve accuracy compared to single-modality algorithms. Advanced PET image features considering spatial information, such as tumor volume, tumor shape, total glycolytic volume, histogram distance, and texture features, have been found more informative than the traditional SUVmax for the prediction of tumor response. Advanced CT features, including volumetric, attenuation, morphologic, structure, and texture descriptors, have also been shown to have advantages over the traditional RECIST and WHO criteria in certain tumor types. Predictive models based on machine learning techniques have been constructed to correlate selected image features with response. These models showed improved performance compared to current methods using a cutoff value of a single measurement for tumor response. Conclusion: This review showed that computerized PET/CT image analysis holds great potential to improve the accuracy of evaluation of tumor response. This work was supported in part by the National Cancer Institute Grant R01CA172638.

  14. Using deep learning in image hyper spectral segmentation, classification, and detection

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Su, Zhenyu

    2018-02-01

    Recent years have shown that deep learning neural networks are a valuable tool in the field of computer vision. Deep learning methods can be used in remote sensing applications such as land cover classification, vehicle detection in satellite images, and hyperspectral image classification. This paper addresses the use of deep learning artificial neural networks in satellite image segmentation. Image segmentation plays an important role in image processing. The hue of a remote sensing image often varies widely, which results in poor display of the images in a VR environment. Image segmentation is a pre-processing technique applied to the original images that splits an image into many parts of different hue in order to unify the color. Several computational models based on supervised, unsupervised, parametric, probabilistic, and region-based image segmentation techniques have been proposed. Recently, the machine learning technique known as deep learning with convolutional neural networks has been widely used for the development of efficient and automatic image segmentation models. In this paper, we focus on the study of deep convolutional neural networks and their variants for automatic image segmentation rather than traditional image segmentation strategies.

  15. Complete grain boundaries from incomplete EBSD maps: the influence of segmentation on grain size determinations

    NASA Astrophysics Data System (ADS)

    Heilbronner, Renée; Kilian, Ruediger

    2017-04-01

    Grain size analyses are carried out for a number of reasons; for example, the dynamically recrystallized grain size of quartz is used to assess the flow stresses during deformation. Typically a thin section or polished surface is used. If the expected grain size is large enough (10 µm or larger), the images can be obtained on a light microscope; if the grain size is smaller, the SEM is used. The grain boundaries are traced (the process is called segmentation and can be done manually or via image processing) and the size of the cross-sectional areas (segments) is determined. From the resulting size distributions, 'the grain size' or 'average grain size', usually a mean diameter or similar, is derived. When carrying out such grain size analyses, a number of aspects are critical for the reproducibility of the result: the resolution of the imaging equipment (light microscope or SEM), the type of images that are used for segmentation (cross polarized, partial or full orientation images, CIP versus EBSD), the segmentation procedure (algorithm) itself, the quality of the segmentation, and the mathematical definition and calculation of 'the average grain size'. The quality of the segmentation depends very strongly on the criteria that are used for identifying grain boundaries (for example, angles of misorientation versus shape considerations), on pre- and post-processing (filtering), and on the quality of the recorded images (most notably on the indexing ratio). In this contribution, we consider experimentally deformed Black Hills quartzite with dynamically recrystallized grain sizes in the range of 2-15 µm. We compare two basic methods of segmentation of EBSD maps (orientation based versus shape based) and explore how the choice of method influences the result of the grain size analysis. We also compare different measures for grain size (mean versus mode versus RMS, and 2D versus 3D) in order to determine which of the definitions of 'average grain size' yields the most stable results.
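
    A sketch of how different "average grain size" definitions can be compared once a segmentation (label image) is available, using scikit-image region properties. Only 2D equivalent-circle diameters are computed; no stereological 2D-to-3D correction is attempted, and the bin count for the mode estimate is an assumption.

    ```python
    import numpy as np
    from skimage import measure

    def grain_size_summary(label_image, n_bins=30):
        # equivalent-circle diameter of every segment (grain cross section)
        diam = np.array([r.equivalent_diameter for r in measure.regionprops(label_image)])
        hist, edges = np.histogram(diam, bins=n_bins)
        mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
        return {
            "mean": diam.mean(),
            "rms": np.sqrt(np.mean(diam ** 2)),
            "mode": mode,
            "n_grains": diam.size,
        }
    ```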

  16. Automated recognition of cell phenotypes in histology images based on membrane- and nuclei-targeting biomarkers

    PubMed Central

    Karaçalı, Bilge; Vamvakidou, Alexandra P; Tözeren, Aydın

    2007-01-01

    Background Three-dimensional in vitro cultures of cancer cells are used to predict the effects of prospective anti-cancer drugs in vivo. In this study, we present an automated image analysis protocol for detailed morphological protein marker profiling of tumoroid cross section images. Methods Histologic cross sections of breast tumoroids developed in co-culture suspensions of breast cancer cell lines, stained for E-cadherin and progesterone receptor, were digitized, and pixels in these images were classified into five categories using k-means clustering. Automated segmentation was used to identify image regions composed of cells expressing a given biomarker. Synthesized images were created to check the accuracy of the image processing system. Results Accuracy of automated segmentation was over 95% in identifying regions of interest in synthesized images. Image analysis of adjacent histology slides stained, respectively, for Ecad and PR accurately predicted regions of different cell phenotypes. Image analysis of tumoroid cross sections from different tumoroids obtained under the same co-culture conditions indicated the variation of cellular composition from one tumoroid to another. Variations in the compositions of cross sections obtained from the same tumoroid were established by parallel analysis of Ecad- and PR-stained cross section images. Conclusion The proposed image analysis methods offer standardized high-throughput profiling of the molecular anatomy of tumoroids based on both membrane and nuclei markers, which is suitable for rapid large-scale investigations of anti-cancer compounds for drug development. PMID:17822559

  17. Segmentation of acute pyelonephritis area on kidney SPECT images using binary shape analysis

    NASA Astrophysics Data System (ADS)

    Wu, Chia-Hsiang; Sun, Yung-Nien; Chiu, Nan-Tsing

    1999-05-01

    Acute pyelonephritis is a serious disease in children that may result in irreversible renal scarring. The ability to localize the site of urinary tract infection and the extent of acute pyelonephritis has considerable clinical importance. In this paper, we address the segmentation of the acute pyelonephritis area in kidney SPECT images. A two-step algorithm is proposed. First, the original images are converted to binary images by automatic thresholding. Then the acute pyelonephritis areas are located by finding convex deficiencies in the resulting binary images. This work provides important diagnostic information for physicians and improves the quality of medical care for children with acute pyelonephritis.
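
    A minimal sketch of the two-step idea (automatic thresholding followed by convex-deficiency detection) is given below in Python with scikit-image; the file name, the specific threshold choice (Otsu) and the size filters are illustrative assumptions rather than the authors' original implementation.

        # Sketch only: thresholding a SPECT slice and locating convex deficiencies.
        from skimage import io, filters, morphology

        spect = io.imread("kidney_spect_slice.png", as_gray=True)    # hypothetical input slice

        # Step 1: automatic thresholding to obtain a binary kidney mask
        mask = spect > filters.threshold_otsu(spect)
        mask = morphology.remove_small_objects(mask, min_size=64)

        # Step 2: convex deficiency = convex hull of the mask minus the mask itself;
        # large deficiencies are candidate acute pyelonephritis regions
        hull = morphology.convex_hull_image(mask)
        deficiency = morphology.remove_small_objects(hull & ~mask, min_size=32)
        print("deficiency area (pixels):", int(deficiency.sum()))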

  18. Development of methods for analysis of knee articular cartilage degeneration by magnetic resonance imaging data

    NASA Astrophysics Data System (ADS)

    Suponenkovs, Artjoms; Glazs, Aleksandrs; Platkajis, Ardis

    2017-03-01

    The aim of this paper is to describe new methods for analyzing knee articular cartilage degeneration. The most important aspects of magnetic resonance imaging, knee joint anatomy, the stages of knee osteoarthritis, medical image segmentation and relaxation time calculation are reviewed. This paper proposes new methods for relaxation time calculation and medical image segmentation. The experimental part describes the analysis of changes in articular cartilage relaxation times and contains experimental results showing the dependence of relaxation times on tissue structure. These experimental results and the proposed methods can be helpful for early osteoarthritis diagnosis.
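
    As a hedged example of the kind of relaxation-time calculation referred to above, the following Python sketch fits a mono-exponential decay S(TE) = S0·exp(-TE/T2) to a multi-echo signal for a single pixel; the echo times and signal values are synthetic, and the log-linear fit is a generic approach rather than necessarily the method proposed in the paper.

        # Sketch only: per-pixel T2 estimation from a multi-echo series.
        import numpy as np

        def fit_t2(signals, echo_times_ms):
            """Log-linear least-squares fit: ln S = ln S0 - TE / T2."""
            s = np.clip(np.asarray(signals, float), 1e-6, None)
            te = np.asarray(echo_times_ms, float)
            slope, _ = np.polyfit(te, np.log(s), 1)
            return -1.0 / slope if slope < 0 else np.inf   # T2 in ms

        # Synthetic cartilage-like pixel with a true T2 of 40 ms
        te = np.array([10, 20, 30, 40, 50], float)
        signal = 1000.0 * np.exp(-te / 40.0)
        print("estimated T2 (ms):", fit_t2(signal, te))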

  19. A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system

    NASA Astrophysics Data System (ADS)

    Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan

    2018-01-01

    This paper proposes an image segmentation algorithm based on fully convolutional networks (FCN) for a binocular imaging system operating under various circumstances. The segmentation task is formulated as semantic segmentation: the FCN classifies individual pixels, producing a pixel-level semantic segmentation of the image. Unlike classical convolutional neural networks (CNN), the FCN replaces the fully connected layers with convolutional layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network with scale-invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system, and several groups of test data are collected to verify the method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4-1.6 m, with a distance error of less than 10 mm.
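
    To illustrate the 'fully convolutional' property mentioned above (convolutional layers in place of fully connected layers, so that inputs of arbitrary size are accepted and a per-pixel class map is produced), the following PyTorch sketch defines a toy FCN; it is not the authors' network, and the layer sizes and class count are arbitrary.

        # Sketch only: a toy fully convolutional network for pixel-wise classification.
        import torch
        import torch.nn as nn

        class TinyFCN(nn.Module):
            def __init__(self, num_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 1/2 resolution
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 1/4 resolution
                )
                self.classifier = nn.Conv2d(32, num_classes, 1)   # 1x1 conv instead of a fully connected layer

            def forward(self, x):
                h, w = x.shape[-2:]
                score = self.classifier(self.features(x))
                # upsample the coarse score map back to the input resolution
                return nn.functional.interpolate(score, size=(h, w), mode="bilinear", align_corners=False)

        # Accepts inputs of arbitrary size, e.g. two different image resolutions
        for shape in [(1, 3, 480, 640), (1, 3, 600, 800)]:
            print(TinyFCN()(torch.zeros(shape)).shape)   # (1, num_classes, H, W)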

  20. Objective measurement of accommodative biometric changes using ultrasound biomicroscopy

    PubMed Central

    Ramasubramanian, Viswanathan; Glasser, Adrian

    2015-01-01

    PURPOSE To demonstrate that ultrasound biomicroscopy (UBM) can be used for objective quantitative measurements of anterior segment accommodative changes. SETTING College of Optometry, University of Houston, Houston, Texas, USA. DESIGN Prospective cross-sectional study. METHODS Anterior segment biometric changes in response to 0 to 6.0 diopters (D) of accommodative stimuli in 1.0 D steps were measured in eyes of human subjects aged 21 to 36 years. Imaging was performed in the left eye using a 35 MHz UBM (Vumax) and an A-scan ultrasound (A-5500) while the right eye viewed the accommodative stimuli. An automated Matlab image-analysis program was developed to measure the biometry parameters from the UBM images. RESULTS The UBM-measured accommodative changes in anterior chamber depth (ACD), lens thickness, anterior lens radius of curvature, posterior lens radius of curvature, and anterior segment length were statistically significantly (P < .0001) linearly correlated with accommodative stimulus amplitudes. Standard deviations of the UBM-measured parameters were independent of the accommodative stimulus demands (ACD 0.0176 mm, lens thickness 0.0294 mm, anterior lens radius of curvature 0.3350 mm, posterior lens radius of curvature 0.1580 mm, and anterior segment length 0.0340 mm). The mean difference between the A-scan and UBM measurements was −0.070 mm for ACD and 0.166 mm for lens thickness. CONCLUSIONS Accommodating phakic eyes imaged using UBM allowed visualization of the accommodative response, and automated image analysis of the UBM images allowed reliable, objective, quantitative measurements of the accommodative intraocular biometric changes. PMID:25804579
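
    As a purely illustrative sketch of the stimulus-response analysis reported above (biometry parameters regressed against accommodative stimuli from 0 to 6.0 D in 1.0 D steps), the following Python code fits a linear slope for anterior chamber depth; the slope value and the synthetic data are assumptions (the noise level borrows the 0.0176 mm standard deviation quoted for ACD), not the study's measurements.

        # Sketch only: linear fit of a biometry parameter against accommodative stimulus.
        import numpy as np

        stimulus_D = np.arange(0.0, 6.5, 1.0)     # 0 to 6 D in 1.0 D steps
        rng = np.random.default_rng(1)
        acd_mm = 3.60 - 0.04 * stimulus_D + rng.normal(0.0, 0.0176, stimulus_D.size)   # synthetic ACD values

        slope, intercept = np.polyfit(stimulus_D, acd_mm, 1)   # mm of ACD change per diopter
        r = np.corrcoef(stimulus_D, acd_mm)[0, 1]
        print(f"slope = {slope:.4f} mm/D, r = {r:.3f}")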
