Science.gov

Sample records for anatomy segmentation algorithm

  1. Anatomy-aware measurement of segmentation accuracy

    NASA Astrophysics Data System (ADS)

    Tizhoosh, H. R.; Othman, A. A.

    2016-03-01

    Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground-truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of the prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones if existent or relevant. To apply this new approach to accuracy measurement, we introduce the anatomy-aware extensions of both Dice coefficient and Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only the measurement of individual users can change but also the ranking of users' segmentation skills may require reordering.
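    The abstract does not reproduce the exact weighting scheme; purely as an illustration, a zone-weighted Dice coefficient along these lines can be sketched in a few lines of NumPy (the function name, the zone_map labelling, and the weights below are assumptions, not the authors' definitions):

```python
import numpy as np

def anatomy_aware_dice(seg, gold, zone_map, zone_weights):
    """Zone-weighted Dice: voxels inside anatomically critical zones count more.

    seg, gold    : boolean masks of the evaluated segment and the consensus 'master gold'
    zone_map     : integer label image assigning voxels to anatomical zones (0 = default)
    zone_weights : dict mapping zone label -> relative importance weight (default weight 1)
    """
    w = np.ones(seg.shape, dtype=float)
    for zone, weight in zone_weights.items():
        w[zone_map == zone] = weight
    inter = np.sum(w * (seg & gold))
    denom = np.sum(w * seg) + np.sum(w * gold)
    return 2.0 * inter / denom if denom > 0 else 1.0

# toy example: a square "prostate" whose central zone (label 2) is weighted 3x
gold = np.zeros((64, 64), bool); gold[16:48, 16:48] = True
seg = np.zeros((64, 64), bool); seg[18:50, 16:48] = True
zones = np.zeros((64, 64), int); zones[28:36, 28:36] = 2
print(anatomy_aware_dice(seg, gold, zones, {2: 3.0}))
```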

  2. Performance Evaluation of Automatic Anatomy Segmentation Algorithm on Repeat or Four-Dimensional Computed Tomography Images Using Deformable Image Registration Method

    SciTech Connect

    Wang He; Garden, Adam S.; Zhang Lifei; Wei Xiong; Ahamad, Anesa; Kuban, Deborah A.; Komaki, Ritsuko; O'Daniel, Jennifer; Zhang Yongbin; Mohan, Radhe; Dong Lei

    2008-09-01

    Purpose: Auto-propagation of anatomic regions of interest from the planning computed tomography (CT) scan to the daily CT is an essential step in image-guided adaptive radiotherapy. The goal of this study was to quantitatively evaluate the performance of the algorithm in typical clinical applications. Methods and Materials: We had previously adopted an image intensity-based deformable registration algorithm to find the correspondence between two images. In the present study, the regions of interest delineated on the planning CT image were mapped onto daily CT or four-dimensional CT images using the same transformation. Postprocessing methods, such as boundary smoothing and modification, were used to enhance the robustness of the algorithm. Auto-propagated contours for 8 head-and-neck cancer patients with a total of 100 repeat CT scans, 1 prostate patient with 24 repeat CT scans, and 9 lung cancer patients with a total of 90 four-dimensional CT images were evaluated against physician-drawn contours and physician-modified deformed contours using the volume overlap index and mean absolute surface-to-surface distance. Results: The deformed contours were reasonably well matched with the daily anatomy on the repeat CT images. The volume overlap index and mean absolute surface-to-surface distance were 83% and 1.3 mm, respectively, compared with the independently drawn contours. Better agreement (>97% and <0.4 mm) was achieved if the physician was only asked to correct the deformed contours. The algorithm was also robust in the presence of random noise in the image. Conclusion: The deformable algorithm might be an effective method to propagate the planning regions of interest to subsequent CT images of changed anatomy, although a final review by physicians is highly recommended.
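    The two evaluation metrics can be computed directly from binary contour masks. The sketch below (SciPy/NumPy) uses a Dice-style definition of the volume overlap index, which may differ in detail from the study's exact definition:

```python
import numpy as np
from scipy import ndimage

def volume_overlap_index(a, b):
    """Dice-style overlap of two boolean masks: 2|A∩B| / (|A| + |B|)."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_abs_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean absolute surface-to-surface distance (in mm if spacing is in mm)."""
    surf_a = a & ~ndimage.binary_erosion(a)          # surface voxels = mask minus its erosion
    surf_b = b & ~ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    d_ab = dist_to_b[surf_a]                          # A-surface to nearest B-surface voxel
    d_ba = dist_to_a[surf_b]
    return (d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size)
```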

  3. Automatic segmentation of intra-cochlear anatomy in post-implantation CT

    NASA Astrophysics Data System (ADS)

    Reda, Fitsum A.; Dawant, Benoit M.; McRackan, Theodore R.; Labadie, Robert F.; Noble, Jack H.

    2013-03-01

    A cochlear implant (CI) is a neural prosthetic device that restores hearing by directly stimulating the auditory nerve with an electrode array. In CI surgery, the surgeon threads the electrode array into the cochlea, blind to internal structures. We have recently developed algorithms for determining the position of CI electrodes relative to intra-cochlear anatomy using pre- and post-implantation CT. We are currently using this approach to develop a CI programming assistance system that uses knowledge of electrode position to determine a patient-customized CI sound processing strategy. However, this approach cannot be used for the majority of CI users because the cochlea is obscured by image artifacts produced by CI electrodes and acquisition of pre-implantation CT is not universal. In this study we propose an approach that extends our techniques so that intra-cochlear anatomy can be segmented for CI users for whom pre-implantation CT was not acquired. The approach achieves automatic segmentation of intra-cochlear anatomy in post-implantation CT by exploiting intra-subject symmetry in cochlear anatomy across ears. We validated our approach on a dataset of 10 ears in which both pre- and post-implantation CTs were available. Our approach results in mean and maximum segmentation errors of 0.27 and 0.62 mm, respectively. This result suggests that our automatic segmentation approach is accurate enough for developing customized CI sound processing strategies for unilateral CI patients based solely on post-implantation CT scans.

  4. Multiatlas segmentation of thoracic and abdominal anatomy with level set-based local search.

    PubMed

    Schreibmann, Eduard; Marcus, David M; Fox, Tim

    2014-01-01

    Segmentation of organs at risk (OARs) remains one of the most time-consuming tasks in radiotherapy treatment planning. Atlas-based segmentation methods using single templates have emerged as a practical approach to automate the process for brain or head and neck anatomy, but pose significant challenges in regions where large interpatient variations are present. We show that significant changes are needed to autosegment thoracic and abdominal datasets by combining multi-atlas deformable registration with a level set-based local search. Segmentation is hierarchical, with a first stage detecting bulk organ location, and a second step adapting the segmentation to fine details present in the patient scan. The first stage is based on warping multiple presegmented templates to the new patient anatomy using a multimodality deformable registration algorithm able to cope with changes in scanning conditions and artifacts. These segmentations are compacted into a probabilistic map of organ shape using the STAPLE algorithm. Final segmentation is obtained by adjusting the probability map for each organ type, using customized combinations of delineation filters exploiting prior knowledge of organ characteristics. Validation is performed by comparing automated and manual segmentation using the Dice coefficient, measured at an average of 0.971 for the aorta, 0.869 for the trachea, 0.958 for the lungs, 0.788 for the heart, 0.912 for the liver, 0.884 for the kidneys, 0.888 for the vertebrae, 0.863 for the spleen, and 0.740 for the spinal cord. Accurate atlas segmentation for abdominal and thoracic regions can be achieved using a multi-atlas and per-structure refinement strategy. To improve clinical workflow and efficiency, the algorithm was embedded in a software service, applying the algorithm automatically on acquired scans without any user interaction. PMID:25207393
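    As a rough illustration of the fusion step only, the warped atlas masks can be averaged into a per-voxel probability map and thresholded; note that the paper uses STAPLE, which estimates a reliability weight per atlas rather than the equal-weight averaging assumed here:

```python
import numpy as np

def fuse_atlas_labels(warped_masks, threshold=0.5):
    """Fuse binary organ masks warped from several atlases into a per-voxel probability
    map and threshold it. Plain equal-weight averaging is shown for brevity; the paper
    compacts the warped segmentations with STAPLE (reliability-weighted fusion) instead."""
    prob = np.mean([m.astype(float) for m in warped_masks], axis=0)
    return prob, prob >= threshold
```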

  5. Reliability measure for segmenting algorithms

    NASA Astrophysics Data System (ADS)

    Alvarez, Robert E.

    2004-05-01

    Segmenting is a key initial step in many computer-aided detection (CAD) systems. Our purpose is to develop a method to estimate the reliability of segmenting algorithm results. We use a statistical shape model computed using principal component analysis. The model retains a small number of eigenvectors, or modes, that represent a large fraction of the variance. The residuals between the segmenting result and its projection into the space of retained modes are computed. The sum of the squares of residuals is transformed to a zero-mean, unit standard deviation Gaussian random variable. We also use the standardized scale parameter. The reliability measure is the probability that the transformed residuals and scale parameter are greater than the absolute value of the observed values. We tested the reliability measure with thirty chest x-ray images with "leave-one-out" testing. The Gaussian assumption was verified using normal probability plots. For each image, a statistical shape model was computed from the hand-digitized data of the rest of the images in the training set. The residuals and scale parameter with automated segment results for the image were used to compute the reliability measure in each case. The reliability measure was significantly lower for two images in the training set with unusual lung fields or processing errors. The data and Matlab scripts for reproducing the figures are at http://www.aprendtech.com/papers/relmsr.zip. Errors detected by the new reliability measure can be used to adjust processing or warn the user.
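    A minimal sketch of the residual-based reliability idea is shown below (NumPy/SciPy). It standardizes the sum of squared residuals against training statistics and omits the separate scale parameter described in the abstract, so the names and details are illustrative rather than the paper's exact procedure:

```python
import numpy as np
from scipy import stats

def fit_shape_model(X, n_modes):
    """X: (n_samples, n_points) training shape vectors (e.g. hand-digitized lung outlines)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_modes].T                         # retained eigenvectors (principal modes)
    resid = Xc - Xc @ P @ P.T                  # training residuals outside the model subspace
    ss = (resid ** 2).sum(axis=1)              # sum of squared residuals per training shape
    return mean, P, ss.mean(), ss.std()

def reliability(x, mean, P, ss_mean, ss_std):
    """Tail probability of the standardized residual sum of squares for a new shape x."""
    r = (x - mean) - (x - mean) @ P @ P.T
    z = ((r ** 2).sum() - ss_mean) / ss_std
    return 2.0 * stats.norm.sf(abs(z))         # a small value flags a suspect segmentation
```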

  6. Masseter segmentation using an improved watershed algorithm with unsupervised classification.

    PubMed

    Ng, H P; Ong, S H; Foong, K W C; Goh, P S; Nowinski, W L

    2008-02-01

    The watershed algorithm always produces a complete division of the image. However, it is susceptible to over-segmentation and sensitive to false edges. In medical images this leads to unfavorable representations of the anatomy. We address these drawbacks by introducing automated thresholding and post-segmentation merging. The automated thresholding step is based on the histogram of the gradient magnitude map, while post-segmentation merging is based on a criterion which measures the similarity in intensity values between two neighboring partitions. Our improved watershed algorithm is able to merge more than 90% of the initial partitions, which indicates that a large amount of over-segmentation has been reduced. To further improve the segmentation results, we make use of K-means clustering to provide an initial coarse segmentation of the highly textured image before the improved watershed algorithm is applied to it. When applied to the segmentation of the masseter from 60 magnetic resonance images of 10 subjects, the proposed algorithm achieved an overlap index (kappa) of 90.6%, and was able to merge 98% of the initial partitions on average. The segmentation results are comparable to those obtained using the gradient vector flow snake. PMID:17950265
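    With scikit-image, the two proposed additions (a gradient-histogram marker threshold and post-segmentation merging of similar regions) can be approximated as in the generic sketch below. This is not the authors' implementation: Otsu's method stands in for their histogram threshold, and it assumes scikit-image >= 0.20, where rag_mean_color and cut_threshold live in skimage.graph (earlier versions: skimage.future.graph):

```python
import numpy as np
from scipy import ndimage
from skimage import color, filters, segmentation, graph

def improved_watershed(image, merge_thresh=0.06):
    """Watershed with automated marker thresholding and post-segmentation merging.
    `image` is a 2D grayscale float image scaled to [0, 1]."""
    grad = filters.sobel(image)
    t = filters.threshold_otsu(grad)            # stand-in for the gradient-histogram threshold
    markers, _ = ndimage.label(grad < t)        # low-gradient plateaus become markers
    labels = segmentation.watershed(grad, markers)
    # merge neighbouring partitions whose mean intensities are similar
    rag = graph.rag_mean_color(color.gray2rgb(image), labels)
    return graph.cut_threshold(labels, rag, merge_thresh)
```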

  7. An algorithm for segmenting range imagery

    SciTech Connect

    Roberts, R.S.

    1997-03-01

    This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.

  8. An algorithm for segmenting polarimetric SAR imagery

    NASA Astrophysics Data System (ADS)

    Geaga, Jorge V.

    2015-05-01

    We have developed an algorithm for segmenting fully polarimetric single look TerraSAR-X, multilook SIR-C and 7 band Landsat 5 imagery using neural nets. The algorithm uses a feedforward neural net with one hidden layer to segment different surface classes. The weights are refined through an iterative filtering process characteristic of a relaxation process. Features selected from studies of fully polarimetric complex single look TerraSAR-X data and multilook SIR-C data are used as input to the net. The seven bands from Landsat 5 data are used as input for the Landsat neural net. The Cloude-Pottier incoherent decomposition is used to investigate the physical basis of the polarimetric SAR data segmentation. The segmentation of a SIR-C ocean surface scene into four classes is presented. This segmentation algorithm could be a very useful tool for investigating complex polarimetric SAR phenomena.

  9. Fuzzy watershed segmentation algorithm: an enhanced algorithm for 2D gel electrophoresis image segmentation.

    PubMed

    Rashwan, Shaheera; Sarhan, Amany; Faheem, Muhamed Talaat; Youssef, Bayumy A

    2015-01-01

    Detection and quantification of protein spots is an important issue in the analysis of two-dimensional electrophoresis images. However, the main challenge in the segmentation of 2DGE images is to separate overlapping protein spots correctly and to find the weak protein spots. In this paper, we describe a new robust technique to segment and model the different spots present in the gels. The watershed segmentation algorithm is modified to handle the problem of over-segmentation by initially partitioning the image into mosaic regions using the composition of fuzzy relations. The experimental results show the effectiveness of the proposed algorithm in overcoming the over-segmentation problem associated with the existing algorithm. We also use a wavelet denoising function to enhance the quality of the segmented image. The results of using a denoising function before the proposed fuzzy watershed segmentation algorithm are promising, as they are better than those obtained without denoising. PMID:26510287

  10. Heart region segmentation from low-dose CT scans: an anatomy based approach

    NASA Astrophysics Data System (ADS)

    Reeves, Anthony P.; Biancardi, Alberto M.; Yankelevitz, David F.; Cham, Matthew D.; Henschke, Claudia I.

    2012-02-01

    Cardiovascular disease is a leading cause of death in developed countries. The concurrent detection of heart diseases during low-dose whole-lung CT scans (LDCT), typically performed as part of a screening protocol, hinges on the accurate quantification of coronary calcification. The creation of fully automated methods is ideal as complete manual evaluation is imprecise, operator dependent, time consuming and thus costly. The technical challenges posed by LDCT scans in this context are mainly twofold. First, there is a high level of image noise arising from the low radiation dose technique. Additionally, there is a variable amount of cardiac motion blurring due to the lack of electrocardiographic gating and the fact that heart rates differ between human subjects. As a consequence, the reliable segmentation of the heart, the first stage toward the implementation of morphologic heart abnormality detection, is also quite challenging. An automated computer method based on a sequential labeling of major organs and determination of anatomical landmarks has been evaluated on a public database of LDCT images. The novel algorithm builds from a robust segmentation of the bones and airways and embodies a stepwise refinement starting at the top of the lungs where image noise is at its lowest and where the carina provides a good calibration landmark. The segmentation is completed at the inferior wall of the heart where extensive image noise is accommodated. This method is based on the geometry of human anatomy and does not involve training through manual markings. Using visual inspection by an expert reader as a gold standard, the algorithm achieved successful heart and major vessel segmentation in 42 of 45 low-dose CT images. In the 3 remaining cases, the cardiac base was over-segmented due to incorrect hemidiaphragm localization.

  11. Medical image segmentation using genetic algorithms.

    PubMed

    Maulik, Ujjwal

    2009-03-01

    Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy with a multitude of local optima. Not only does the genetic algorithmic framework prove to be effective in coming out of local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation. PMID:19272859

  12. Image segmentation using an improved differential algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Hao; Shi, Yujiao; Wu, Dongmei

    2014-10-01

    Among all the existing segmentation techniques, the thresholding technique is one of the most popular due to its simplicity, robustness, and accuracy (e.g. the maximum entropy method, Otsu's method, and K-means clustering). However, the computation time of these algorithms grows exponentially with the number of thresholds due to their exhaustive searching strategy. As a population-based optimization algorithm, the differential algorithm (DE) uses a population of potential solutions and decision-making processes. It has shown considerable success in solving complex optimization problems within a reasonable time limit. Thus, applying this method to the segmentation algorithm should be a good choice owing to its fast computational ability. In this paper, we first propose a new differential algorithm with a balance strategy, which seeks a balance between the exploration of new regions and the exploitation of the already sampled regions. Then, we apply the new DE to the traditional Otsu method to shorten the computation time. Experimental results of the new algorithm on a variety of images show that, compared with the EA-based thresholding methods, the proposed DE algorithm produces more effective and efficient results. It also shortens the computation time of the traditional Otsu method.
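    The idea of replacing exhaustive threshold search with an evolutionary optimizer can be sketched with SciPy's stock differential_evolution applied to a multi-level Otsu objective; the balance strategy proposed in the paper is not included, so this is plain DE for illustration only:

```python
import numpy as np
from scipy.optimize import differential_evolution

def between_class_variance(thresholds, hist, centers):
    """Otsu's between-class variance generalized to multiple thresholds (to be maximized)."""
    t = np.sort(thresholds)
    edges = np.concatenate(([centers[0] - 1], t, [centers[-1] + 1]))
    total = hist.sum()
    mu_total = (hist * centers).sum() / total
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (centers > lo) & (centers <= hi)
        w = hist[sel].sum()
        if w == 0:
            continue
        mu = (hist[sel] * centers[sel]).sum() / w
        var += (w / total) * (mu - mu_total) ** 2
    return var

def de_otsu(image, n_thresholds=2, maxiter=50):
    hist, edges = np.histogram(image.ravel(), bins=256)
    centers = 0.5 * (edges[:-1] + edges[1:])
    bounds = [(centers[0], centers[-1])] * n_thresholds
    # DE minimizes, so negate the Otsu objective
    res = differential_evolution(lambda t: -between_class_variance(t, hist, centers),
                                 bounds, maxiter=maxiter, seed=0)
    return np.sort(res.x)
```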

  13. Segmentation precision of abdominal anatomy for MRI-based radiotherapy.

    PubMed

    Noel, Camille E; Zhu, Fan; Lee, Andrew Y; Yanle, Hu; Parikh, Parag J

    2014-01-01

    The limited soft tissue visualization provided by computed tomography, the standard imaging modality for radiotherapy treatment planning and daily localization, has motivated studies on the use of magnetic resonance imaging (MRI) for better characterization of treatment sites, such as the prostate and head and neck. However, no studies have been conducted on MRI-based segmentation for the abdomen, a site that could greatly benefit from enhanced soft tissue targeting. We investigated the interobserver and intraobserver precision in segmentation of abdominal organs on MR images for treatment planning and localization. Manual segmentation of 8 abdominal organs was performed by 3 independent observers on MR images acquired from 14 healthy subjects. Observers repeated segmentation 4 separate times for each image set. Interobserver and intraobserver contouring precision was assessed by computing 3-dimensional overlap (Dice coefficient [DC]) and distance to agreement (Hausdorff distance [HD]) of segmented organs. The mean and standard deviation of intraobserver and interobserver DC and HD values were DC(intraobserver) = 0.89 ± 0.12, HD(intraobserver) = 3.6mm ± 1.5, DC(interobserver) = 0.89 ± 0.15, and HD(interobserver) = 3.2mm ± 1.4. Overall, metrics indicated good interobserver/intraobserver precision (mean DC > 0.7, mean HD < 4mm). Results suggest that MRI offers good segmentation precision for abdominal sites. These findings support the utility of MRI for abdominal planning and localization, as emerging MRI technologies, techniques, and onboard imaging devices are beginning to enable MRI-based radiotherapy. PMID:24726701

  14. Segmentation precision of abdominal anatomy for MRI-based radiotherapy

    SciTech Connect

    Noel, Camille E.; Zhu, Fan; Lee, Andrew Y.; Yanle, Hu; Parikh, Parag J.

    2014-10-01

    The limited soft tissue visualization provided by computed tomography, the standard imaging modality for radiotherapy treatment planning and daily localization, has motivated studies on the use of magnetic resonance imaging (MRI) for better characterization of treatment sites, such as the prostate and head and neck. However, no studies have been conducted on MRI-based segmentation for the abdomen, a site that could greatly benefit from enhanced soft tissue targeting. We investigated the interobserver and intraobserver precision in segmentation of abdominal organs on MR images for treatment planning and localization. Manual segmentation of 8 abdominal organs was performed by 3 independent observers on MR images acquired from 14 healthy subjects. Observers repeated segmentation 4 separate times for each image set. Interobserver and intraobserver contouring precision was assessed by computing 3-dimensional overlap (Dice coefficient [DC]) and distance to agreement (Hausdorff distance [HD]) of segmented organs. The mean and standard deviation of intraobserver and interobserver DC and HD values were DC(intraobserver) = 0.89 ± 0.12, HD(intraobserver) = 3.6 mm ± 1.5, DC(interobserver) = 0.89 ± 0.15, and HD(interobserver) = 3.2 mm ± 1.4. Overall, metrics indicated good interobserver/intraobserver precision (mean DC > 0.7, mean HD < 4 mm). Results suggest that MRI offers good segmentation precision for abdominal sites. These findings support the utility of MRI for abdominal planning and localization, as emerging MRI technologies, techniques, and onboard imaging devices are beginning to enable MRI-based radiotherapy.

  15. 3D automatic anatomy segmentation based on iterative graph-cut-ASM

    SciTech Connect

    Chen, Xinjian; Bagci, Ulas

    2011-08-15

    Purpose: This paper studies the feasibility of developing an automatic anatomy segmentation (AAS) system in clinical radiology and demonstrates its operation on clinical 3D images. Methods: The AAS system the authors are developing consists of two main parts: object recognition and object delineation. For recognition, a hierarchical 3D scale-based multiobject method is used, which incorporates intensity-weighted ball-scale (b-scale) information into the active shape model (ASM). For object delineation, an iterative graph-cut-ASM (IGCASM) algorithm is proposed, which effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. The presented IGCASM algorithm is a 3D generalization of the 2D GC-ASM method that the authors proposed previously in Chen et al. [Proc. SPIE, 7259, 72590C1-72590C-8 (2009)]. The proposed methods are tested on two datasets comprising clinical abdominal CT scans of 20 patients (10 male and 10 female) and 11 foot magnetic resonance imaging (MRI) scans. The test covers segmentation of four organs (liver, left and right kidneys, and spleen) and five foot bones (calcaneus, tibia, cuboid, talus, and navicular). The recognition and delineation accuracies were evaluated separately. The recognition accuracy was evaluated in terms of translation, rotation, and scale (size) error. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF, FPVF). The efficiency of the delineation method was also evaluated on an Intel Pentium IV PC with a 3.4 GHz CPU. Results: The recognition accuracies in terms of translation, rotation, and scale error over all organs are about 8 mm, 10 deg. and 0.03, and over all foot bones are about 3.5709 mm, 0.35 deg. and 0.025, respectively. The accuracy of delineation over all organs for all subjects as expressed in TPVF and FPVF is 93.01% and 0.22%, and

  16. CT segmentation of dental shapes by anatomy-driven reformation imaging and B-spline modelling.

    PubMed

    Barone, S; Paoli, A; Razionale, A V

    2016-06-01

    Dedicated imaging methods are among the most important tools of modern computer-aided medical applications. In the last few years, cone beam computed tomography (CBCT) has gained popularity in digital dentistry for 3D imaging of jawbones and teeth. However, the anatomy of a maxillofacial region complicates the assessment of tooth geometry and anatomical location when using standard orthogonal views of the CT data set. In particular, a tooth is defined by a sub-region, which cannot be easily separated from surrounding tissues by only considering pixel grey-intensity values. For this reason, an image enhancement is usually necessary in order to properly segment tooth geometries. In this paper, an anatomy-driven methodology to reconstruct individual 3D tooth anatomies by processing CBCT data is presented. The main concept is to generate a small set of multi-planar reformation images along significant views for each target tooth, driven by the individual anatomical geometry of a specific patient. The reformation images greatly enhance the clearness of the target tooth contours. A set of meaningful 2D tooth contours is extracted and used to automatically model the overall 3D tooth shape through a B-spline representation. The effectiveness of the methodology has been verified by comparing some anatomy-driven reconstructions of anterior and premolar teeth with those obtained by using standard tooth segmentation tools. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26418417

  17. Automatic lobar segmentation for diseased lungs using an anatomy-based priority knowledge in low-dose CT images

    NASA Astrophysics Data System (ADS)

    Park, Sang Joon; Kim, Jung Im; Goo, Jin Mo; Lee, Doohee

    2014-03-01

    Lung lobar segmentation in CT images is a challenging task because of the limitations in image quality inherent to CT image acquisition, especially low-dose CT in the clinical routine environment. In addition, complex anatomy and abnormal lesions in the lung parenchyma make segmentation difficult, because contrast in CT images is determined by the differential absorption of X-rays by neighboring structures such as tissue, vessels, or various pathological conditions. Thus, we attempted to develop a robust segmentation technique for normal and diseased lung parenchyma. The images were obtained with low-dose chest CT using a soft reconstruction kernel (Sensation 16, Siemens, Germany). Our PC-based in-house software segmented the bronchial trees and lungs with an intensity-adaptive region-growing technique. The horizontal and oblique fissures were then detected using the eigenvalue ratio of the Hessian matrix in the lung regions, with airways and vessels excluded. To enhance and recover a faithful 3-D fissure plane, our proposed fissure-enhancing scheme was applied to the images. After these steps, a 3-D rolling-ball algorithm was applied in the xyz planes for careful smoothing of the fissure planes. Results show that the success rate of our proposed scheme reached 89.5% in diseased lung parenchyma.
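    The Hessian eigenvalue-ratio fissure detection step can be illustrated with scikit-image; the measure below is a simplified "plateness" response for thin, bright, sheet-like structures and is not the authors' exact formulation (sigma and the ratio form are assumptions):

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

def fissure_plateness(volume, sigma=1.0, eps=1e-6):
    """Simplified Hessian eigenvalue-ratio 'plateness' map for a 3D CT volume: large
    response for thin, sheet-like bright structures such as pulmonary fissures."""
    H = hessian_matrix(volume, sigma=sigma, order='rc')
    lam = hessian_matrix_eigvals(H)                 # (ndim, ...) eigenvalue maps
    order = np.argsort(np.abs(lam), axis=0)         # sort eigenvalues by magnitude per voxel
    lam = np.take_along_axis(lam, order, axis=0)
    mid, large = lam[-2], lam[-1]
    plate = np.abs(large) * (1.0 - np.abs(mid) / (np.abs(large) + eps))
    plate[large > 0] = 0    # a bright sheet has strongly negative curvature across it
    return plate
```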

  18. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms are playing an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence including fish swarm algorithm, artificial bee colony, bacteria foraging algorithm and particle swarm optimization. Then some image benchmarks are tested in order to show the differences of the segmentation accuracy, time consumption, convergence and robustness for Salt & Pepper noise and Gaussian noise of these four algorithms. Through these comparisons, this paper gives qualitative analyses for the performance variance of the four algorithms. The conclusions in this paper would give a significant guide for the actual image segmentation.

  19. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR) operation. In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg,1 which is also available in his open-source Leptonica library.2 The modifications result in significant improvements and achieve better segmentation accuracy than the original algorithm on the UW-III, UNLV, and ICDAR 2009 page segmentation competition test images and circuit diagram datasets.

  20. Research of the multimodal brain-tumor segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Yisu; Chen, Wufan

    2015-12-01

    It is well-known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without the initialization of the number of clusters. A new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smooth constraint is proposed in this study. Besides the segmentation of single modal brain tumor images, we developed the algorithm to segment multimodal brain tumor images using the magnetic resonance (MR) multimodal features and obtain the active tumor and edema at the same time. The proposed algorithm is evaluated and compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance.
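    The "no fixed number of clusters" aspect can be illustrated with scikit-learn's truncated Dirichlet-process Gaussian mixture. This sketch clusters multimodal voxel intensities only and omits the anisotropic diffusion and MRF smoothing used in the paper; the t1/t2/flair names in the usage comment are hypothetical:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def dp_mixture_segment(features, max_components=10):
    """Cluster voxel feature vectors with a truncated Dirichlet-process Gaussian mixture,
    so the effective number of clusters is inferred rather than fixed in advance.

    features: (n_voxels, n_modalities) array, e.g. stacked multimodal MR intensities.
    """
    dpgmm = BayesianGaussianMixture(
        n_components=max_components,     # truncation level, not the final cluster count
        weight_concentration_prior_type='dirichlet_process',
        covariance_type='full', max_iter=200, random_state=0)
    return dpgmm.fit_predict(features)

# usage (hypothetical volumes): labels = dp_mixture_segment(
#     np.stack([t1.ravel(), t2.ravel(), flair.ravel()], axis=1))
```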

  1. Automatic registration and segmentation algorithm for multiple electrophoresis images

    NASA Astrophysics Data System (ADS)

    Baker, Matthew S.; Busse, Harald; Vogt, Martin

    2000-06-01

    We present an algorithm for registering, segmenting, and quantifying multiple scanned electrophoresis images. Two-dimensional (2D) gel electrophoresis is a technique for separating proteins or other macromolecules in organic material according to net charge and molecular mass; it results in scanned grayscale images with dark spots against a light background marking the presence of such macromolecules. The algorithm begins by registering each of the images using a non-rigid registration algorithm. The registered images are then jointly segmented using a Markov random field approach to obtain a single segmentation. By using multiple images, the effect of noise is greatly reduced. We demonstrate the algorithm on several sets of real data.

  2. Segmentation of kidney using C-V model and anatomy priors

    NASA Astrophysics Data System (ADS)

    Lu, Jinghua; Chen, Jie; Zhang, Juan; Yang, Wenjia

    2007-12-01

    This paper presents an approach for kidney segmentation on abdominal CT images as the first step of a virtual reality surgery system. Segmentation of medical images is often challenging because of the objects' complicated anatomical structures, varying gray levels, and unclear edges. A coarse-to-fine approach has been applied to the kidney segmentation using the Chan-Vese model (C-V model) and anatomical prior knowledge. In the pre-processing stage, the candidate kidney regions are located. The C-V model, formulated with the level set method, is then applied within these smaller ROIs, which reduces the computational complexity to a certain extent. Finally, after some mathematical morphology procedures, the specified kidney structures are extracted interactively with prior knowledge. The satisfactory results on abdominal CT series show that the proposed approach keeps all the advantages of the C-V model and overcomes its disadvantages.
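    A coarse-to-fine pass of this kind can be sketched with scikit-image's chan_vese implementation applied only inside a candidate ROI. The ROI localization and the interactive prior-knowledge step are assumed to be done elsewhere, so this is illustrative rather than the authors' pipeline:

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import chan_vese

def segment_kidney_roi(ct_slice, roi):
    """Run the C-V (Chan-Vese) level-set model inside a candidate kidney ROI only,
    then clean the result with simple morphology.

    roi: (r0, r1, c0, c1) bounding box found in a pre-processing stage (assumed given).
    """
    r0, r1, c0, c1 = roi
    patch = ct_slice[r0:r1, c0:c1].astype(float)
    patch = (patch - patch.min()) / (patch.ptp() + 1e-9)   # normalize to [0, 1]
    mask = chan_vese(patch, mu=0.25)                        # region-based active contour
    mask = ndimage.binary_opening(mask, iterations=2)       # morphological clean-up
    out = np.zeros(ct_slice.shape, dtype=bool)
    out[r0:r1, c0:c1] = mask
    return out
```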

  3. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.

  4. Optimized mean shift algorithm for color segmentation in image sequences

    NASA Astrophysics Data System (ADS)

    Bailer, Werner; Schallauer, Peter; Haraldsson, Harald B.; Rehatschek, Herwig

    2005-03-01

    The application of the mean shift algorithm to color image segmentation was proposed in 1997 by Comaniciu and Meer. We apply the mean shift color segmentation to image sequences, as the first step of a moving object segmentation algorithm. Previous work has shown that it is well suited for this task, because it provides better temporal stability of the segmentation result than other approaches. The drawback is higher computational cost. To speed up processing on image sequences, we exploit the fact that subsequent frames are similar and use the cluster centers of previous frames as initial estimates, which also enhances spatial segmentation continuity. In contrast to other implementations we use the originally proposed CIE LUV color space to ensure high quality segmentation results. We show that moderate quantization of the input data before conversion to CIE LUV has little influence on the segmentation quality but results in a significant speed-up. We also propose changes in the post-processing step to increase the temporal stability of border pixels. We perform objective evaluation of the segmentation results to compare the original algorithm with our modified version. We show that our optimized algorithm reduces processing time and increases the temporal stability of the segmentation.
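    Re-using the previous frame's cluster centres as seeds can be sketched with scikit-learn's MeanShift (which accepts a seeds argument). This simplified version clusters in colour only, whereas the original mean shift segmentation also uses spatial coordinates, and pixel sub-sampling would be needed for practical speed:

```python
import numpy as np
from skimage import color
from sklearn.cluster import MeanShift

def segment_frame(frame_rgb, bandwidth=8.0, prev_centers=None, quant=16):
    """Mean-shift colour clustering of one frame in CIE LUV, seeding with the previous
    frame's cluster centres to speed up convergence and stabilise labels over time."""
    rgb = (frame_rgb // quant) * quant                    # moderate quantisation before conversion
    luv = color.rgb2luv(rgb / 255.0).reshape(-1, 3)
    ms = MeanShift(bandwidth=bandwidth, seeds=prev_centers,
                   bin_seeding=prev_centers is None)
    labels = ms.fit_predict(luv)
    return labels.reshape(frame_rgb.shape[:2]), ms.cluster_centers_

# usage across a sequence: feed the centres forward
# labels, centers = segment_frame(frames[0])
# for f in frames[1:]:
#     labels, centers = segment_frame(f, prev_centers=centers)
```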

  5. A novel iris segmentation algorithm based on small eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Harish, B. S.; Aruna Kumar, S. V.; Guru, D. S.; Ngo, Minh Ngoc

    2015-12-01

    In this paper, a simple and robust algorithm is proposed for iris segmentation. The proposed method consists of two steps. In the first step, the iris and pupil are segmented using the Robust Spatial Kernel FCM (RSKFCM) algorithm. RSKFCM is based on the traditional Fuzzy c-Means (FCM) algorithm, incorporates spatial information, and uses a kernel metric as the distance measure. In the second step, a small eigenvalue transformation is applied to localize the iris boundary. The transformation is based on statistical and geometrical properties of the small eigenvalue of the covariance matrix of a set of edge pixels. Extensive experiments are carried out on standard benchmark iris datasets (viz., CASIA-IrisV4 and UBIRIS.v2). We compared our proposed method with existing iris segmentation methods. The proposed method has the lowest time complexity, O(n(i+p)). The experimental results show that the proposed algorithm outperforms the existing iris segmentation methods.

  6. Automated segment matching algorithm-theory, test, and evaluation

    NASA Technical Reports Server (NTRS)

    Kalcic, M. T. (Principal Investigator)

    1982-01-01

    Results to automate the U.S. Department of Agriculture's process of segment shifting and obtain results within one-half pixel accuracy are presented. Given an initial registration, the digitized segment is shifted until a more precise fit to the LANDSAT data is found. The algorithm automates the shifting process and performs certain tests for matching and accepting the computed shift numbers. Results indicate the algorithm can obtain results within one-half pixel accuracy.

  7. Automatic thoracic anatomy segmentation on CT images using hierarchical fuzzy models and registration

    NASA Astrophysics Data System (ADS)

    Sun, Kaioqiong; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.

    2014-03-01

    This paper proposes a thoracic anatomy segmentation method based on hierarchical recognition and delineation guided by a built fuzzy model. Labeled binary samples for each organ are registered and aligned into a 3D fuzzy set representing the fuzzy shape model for the organ. The gray intensity distributions of the corresponding regions of the organ in the original image are recorded in the model. The hierarchical relation and mean location relation between different organs are also captured in the model. Following the hierarchical structure and location relation, the fuzzy shape model of different organs is registered to the given target image to achieve object recognition. A fuzzy connected delineation method is then used to obtain the final segmentation result of organs with seed points provided by recognition. The hierarchical structure and location relation integrated in the model provide the initial parameters for registration and make the recognition efficient and robust. The 3D fuzzy model combined with hierarchical affine registration ensures that accurate recognition can be obtained for both non-sparse and sparse organs. The results on real images are presented and shown to be better than a recently reported fuzzy model-based anatomy recognition strategy.

  8. Algorithms For Segmentation Of Complex-Amplitude SAR Data

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Chellappa, Ramalingam

    1993-01-01

    Several algorithms implement an improved method of segmenting highly speckled, high-resolution, complex-amplitude synthetic-aperture-radar (SAR) digitized images into regions within which backscattering characteristics are similar or homogeneous from place to place. The method provides an approximate, deterministic solution via two alternative algorithms that almost always converge to local minima: the Iterative Conditional Modes (ICM) algorithm, which locally maximizes the posterior probability density of the region labels, and the Maximum Posterior Marginal (MPM) algorithm, which maximizes the posterior marginal density of the region labels at each pixel location. The ICM algorithm optimizes reconstruction of the underlying scene. The MPM algorithm minimizes the expected number of misclassified pixels, which is possibly better for remote sensing of natural scenes.

  9. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features, transforming the image for better analysis and evaluation. An important benefit of segmentation is the identification of the region of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport, and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach can lead to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function uses the gray values of the image's pixels and their variance; pixel levels above the threshold are converted into intensity values between 0 and 1, and other values are set to zero. The proposed enhanced Fast Scanning algorithm is applied to images of public and private transportation in Iraq. Evaluation is made by comparing the images produced by the proposed algorithm and by the standard Fast Scanning algorithm. The results show that the proposed algorithm is faster than standard Fast Scanning.
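    The core scan-and-merge logic (with a fixed threshold; the paper's adaptive threshold function is the proposed enhancement and is not reproduced here) can be written as a small single-pass routine with a union-find table; the structure below is illustrative:

```python
import numpy as np

def fast_scanning(image, threshold):
    """One-pass raster scan: each pixel joins the upper or left neighbour's cluster if the
    intensity difference is within `threshold`, otherwise it starts a new cluster.
    A tiny union-find table merges clusters when both neighbours match."""
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]                                  # union-find parent table; index 0 unused

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]         # path halving
            x = parent[x]
        return x

    def new_label():
        parent.append(len(parent))
        return len(parent) - 1

    for r in range(h):
        for c in range(w):
            up = r > 0 and abs(float(image[r, c]) - float(image[r - 1, c])) <= threshold
            left = c > 0 and abs(float(image[r, c]) - float(image[r, c - 1])) <= threshold
            if up and left:
                lu, ll = find(labels[r - 1, c]), find(labels[r, c - 1])
                labels[r, c] = lu
                parent[ll] = lu                   # merge the two clusters
            elif up:
                labels[r, c] = find(labels[r - 1, c])
            elif left:
                labels[r, c] = find(labels[r, c - 1])
            else:
                labels[r, c] = new_label()
    # resolve merged labels in a final pass
    for r in range(h):
        for c in range(w):
            labels[r, c] = find(labels[r, c])
    return labels
```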

  10. Quantitative comparison of the performance of SAR segmentation algorithms.

    PubMed

    Caves, R; Quegan, S; White, R

    1998-01-01

    Methods to evaluate the performance of segmentation algorithms for synthetic aperture radar (SAR) images are developed, based on known properties of coherent speckle and a scene model in which areas of constant backscatter coefficient are separated by abrupt edges. Local and global measures of segmentation homogeneity are derived and applied to the outputs of two segmentation algorithms developed for SAR data, one based on iterative edge detection and segment growing, the other based on global maximum a posteriori (MAP) estimation using simulated annealing. The quantitative statistically based measures appear consistent with visual impressions of the relative quality of the segmentations produced by the two algorithms. On simulated data meeting algorithm assumptions, both algorithms performed well but MAP methods appeared visually and measurably better. On real data, MAP estimation was markedly the better method and retained performance comparable to that on simulated data, while the performance of the other algorithm deteriorated sharply. Improvements in the performance measures will require a more realistic scene model and techniques to recognize oversegmentation. PMID:18276219

  11. Automatic segmentation of intra-cochlear anatomy in post-implantation CT of unilateral cochlear implant recipients.

    PubMed

    Reda, Fitsum A; McRackan, Theodore R; Labadie, Robert F; Dawant, Benoit M; Noble, Jack H

    2014-04-01

    A cochlear implant (CI) is a neural prosthetic device that restores hearing by directly stimulating the auditory nerve using an electrode array that is implanted in the cochlea. In CI surgery, the surgeon accesses the cochlea and makes an opening where he/she inserts the electrode array blind to internal structures of the cochlea. Because of this, the final position of the electrode array relative to intra-cochlear anatomy is generally unknown. We have recently developed an approach for determining electrode array position relative to intra-cochlear anatomy using a pre- and a post-implantation CT. The approach is to segment the intra-cochlear anatomy in the pre-implantation CT, localize the electrodes in the post-implantation CT, and register the two CTs to determine relative electrode array position information. Currently, we are using this approach to develop a CI programming technique that uses patient-specific spatial information to create patient-customized sound processing strategies. However, this technique cannot be used for many CI users because it requires a pre-implantation CT that is not always acquired prior to implantation. In this study, we propose a method for automatic segmentation of intra-cochlear anatomy in post-implantation CT of unilateral recipients, thus eliminating the need for pre-implantation CTs in this population. The method is to segment the intra-cochlear anatomy in the implanted ear using information extracted from the normal contralateral ear and to exploit the intra-subject symmetry in cochlear anatomy across ears. To validate our method, we performed experiments on 30 ears for which both a pre- and a post-implantation CT are available. The mean and the maximum segmentation errors are 0.224 and 0.734mm, respectively. These results indicate that our automatic segmentation method is accurate enough for developing patient-customized CI sound processing strategies for unilateral CI recipients using a post-implantation CT alone. PMID

  12. Performance evaluation of image segmentation algorithms on microscopic image data.

    PubMed

    Beneš, Miroslav; Zitová, Barbara

    2015-01-01

    In our paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. In spite of the existence of many algorithms for image data partitioning, there is no universal and 'the best' method yet. Moreover, images of microscopic samples can be of various character and quality which can negatively influence the performance of image segmentation algorithms. Thus, the issue of selecting suitable method for a given set of image data is of big interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on the testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. In the end, the benefit of segmentation combination approach is studied and applicability of achieved results on another representatives of microscopic data category - biological samples - is shown. PMID:25233873

  13. Modeling and segmentation of intra-cochlear anatomy in conventional CT

    NASA Astrophysics Data System (ADS)

    Noble, Jack H.; Rutherford, Robert B.; Labadie, Robert F.; Majdani, Omid; Dawant, Benoit M.

    2010-03-01

    Cochlear implant surgery is a procedure performed to treat profound hearing loss. Since the cochlea is not visible in surgery, the physician uses anatomical landmarks to estimate the pose of the cochlea. Research has indicated that implanting the electrode in a particular cavity of the cochlea, the scala tympani, results in better hearing restoration. The success of the scala tympani implantation is largely dependent on the point of entry and angle of electrode insertion. Errors can occur due to the imprecise nature of landmark-based, manual navigation as well as inter-patient variations between scala tympani and the anatomical landmarks. In this work, we use point distribution models of the intra-cochlear anatomy to study the inter-patient variations between the cochlea and the typical anatomic landmarks, and we implement an active shape model technique to automatically localize intra-cochlear anatomy in conventional CT images, where intra-cochlear structures are not visible. This fully automatic segmentation could aid the surgeon to choose the point of entry and angle of approach to maximize the likelihood of scala tympani insertion, resulting in more substantial hearing restoration.

  14. Towards an automatic coronary artery segmentation algorithm.

    PubMed

    Fallavollita, Pascal; Cheriet, Farida

    2006-01-01

    A method is presented that aims at minimizing image processing time during X-ray fluoroscopy interventions. First, an automatic frame extraction algorithm is proposed in order to extract relevant image frames with respect to their cardiac phase (systole or diastole). Secondly, a 4-step filter is suggested in order to enhance vessel contours. The reciprocal of the enhanced image is used as an alternative speed function to initialize the fast marching method. The complete algorithm was tested on eight clinical angiographic data sets and comparisons with two other vessel enhancement filters (Lorenz and Frangi) are made for the centerline extraction procedure. In order to assess the suitability of our filter the extracted centerline coordinates are compared with the manually traced axis. PMID:17946540

  15. A segmentation algorithm of intracranial hemorrhage CT image

    NASA Astrophysics Data System (ADS)

    Wang, Haibo; Chen, Zhiguo; Wang, Jianzhi

    2011-10-01

    The aim is to develop a computer-aided detection (CAD) system that improves the diagnostic accuracy of intracranial hemorrhage on cerebral CT. A method for CT image segmentation of the brain is proposed, with which several regions suspicious of hemorrhage can be segmented rapidly and effectively. An intracranial-area extraction algorithm is introduced first to isolate the intracranial area. FCM is then employed twice, a scheme we call TFCM: first to identify areas of intracranial hemorrhage, and finally to segment the lesions. Experimental results on real medical images demonstrate the method's efficiency and effectiveness.

  16. Impact of Multiscale Retinex Computation on Performance of Segmentation Algorithms

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.

    2004-01-01

    Classical segmentation algorithms subdivide an image into its constituent components based upon some metric that defines commonality between pixels. Often, these metrics incorporate some measure of "activity" in the scene, e.g. the amount of detail that is in a region. The Multiscale Retinex with Color Restoration (MSRCR) is a general purpose, non-linear image enhancement algorithm that significantly affects the brightness, contrast and sharpness within an image. In this paper, we will analyze the impact the MSRCR has on segmentation results and performance.

  17. Sensitivity field distributions for segmental bioelectrical impedance analysis based on real human anatomy

    NASA Astrophysics Data System (ADS)

    Danilov, A. A.; Kramarenko, V. K.; Nikolaev, D. V.; Rudnev, S. G.; Salamatova, V. Yu; Smirnov, A. V.; Vassilevski, Yu V.

    2013-04-01

    In this work, an adaptive unstructured tetrahedral mesh generation technology is applied for simulation of segmental bioimpedance measurements using high-resolution whole-body model of the Visible Human Project man. Sensitivity field distributions for a conventional tetrapolar, as well as eight- and ten-electrode measurement configurations are obtained. Based on the ten-electrode configuration, we suggest an algorithm for monitoring changes in the upper lung area.

  18. Split Bregman's algorithm for three-dimensional mesh segmentation

    NASA Astrophysics Data System (ADS)

    Habiba, Nabi; Ali, Douik

    2016-05-01

    Variational methods have attracted a lot of attention in the literature, especially for image and mesh segmentation. The methods aim at minimizing the energy to optimize both edge and region detections. We propose a spectral mesh decomposition algorithm to obtain disjoint but meaningful regions of an input mesh. The related optimization problem is nonconvex, and it is very difficult to find a good approximation or global optimum, which represents a challenge in computer vision. We propose an alternating split Bregman algorithm for mesh segmentation, where we extended the image-dedicated model to a three-dimensional (3-D) mesh one. By applying our scheme to 3-D mesh segmentation, we obtain fast solvers that can outperform various conventional ones, such as graph-cut and primal dual methods. A consistent evaluation of the proposed method on various public domain 3-D databases for different metrics is elaborated, and a comparison with the state-of-the-art is performed.

  19. Evaluation of a segmentation algorithm designed for an FPGA implementation

    NASA Astrophysics Data System (ADS)

    Schwenk, Kurt; Schönermark, Maria; Huber, Felix

    2013-10-01

    The present work has to be seen in the context of real-time on-board image evaluation of optical satellite data. With on-board image evaluation, more useful data can be acquired, the time to obtain requested information can be decreased, and new real-time applications become possible. Because of its relatively high processing power in comparison to its low power consumption, Field Programmable Gate Array (FPGA) technology has been chosen as an adequate hardware platform for image processing tasks. One fundamental part of image evaluation is image segmentation. It is a basic tool to extract spatial image information, which is very important for many applications such as object detection. Therefore a special segmentation algorithm exploiting the advantages of FPGA technology has been developed. The aim of this work is the evaluation of this algorithm. Segmentation evaluation is a difficult task. The most common way of evaluating the performance of a segmentation method is still subjective evaluation, in which human experts determine the quality of a segmentation. This approach does not meet our needs. The evaluation process has to provide a reasonable quality assessment, and should be objective, easy to interpret, and simple to execute. To meet these requirements, a so-called Segmentation Accuracy Equality norm (SA EQ) was created, which compares the difference between two segmentation results. It can be shown that this norm is suitable as a first quality measure. Because of its objectivity and simplicity, the algorithm has been tested on a specially chosen synthetic test model. In this work the most important results of the quality assessment are presented.

  20. A hybrid lung and vessel segmentation algorithm for computer aided detection of pulmonary embolism

    NASA Astrophysics Data System (ADS)

    Raghupathi, Laks; Lakare, Sarang

    2009-02-01

    Advances in multi-detector technology have made CT pulmonary angiography (CTPA) a popular radiological tool for pulmonary emboli (PE) detection. CTPA provides rich detail of lung anatomy and is a useful diagnostic aid in highlighting even very small PE. However, analyzing hundreds of slices is laborious and time-consuming for the practicing radiologist, and may also cause misdiagnosis due to the presence of various PE look-alikes. Computer-aided diagnosis (CAD) can be a potential second reader in providing key diagnostic information. Since PE occur only in the pulmonary arteries, it is important to mark this region of interest (ROI) during CAD preprocessing. In this paper, we present a new lung and vessel segmentation algorithm for extracting the contrast-enhanced vessel ROI in CTPA. Existing approaches to segmentation either provide only the larger lung area without highlighting the vessels or are computationally prohibitive. We propose a hybrid lung and vessel segmentation which uses an initial lung ROI and determines the vessels through a series of refinement steps. We first identify a coarse vessel ROI by finding the "holes" in the lung ROI. We then use the initial ROI as seed points for a region-growing process while carefully excluding regions which are not relevant. The vessel segmentation mask covers 99% of the 259 PE from a real-world set of 107 CTPA. Further, our algorithm increases the net sensitivity of a prototype CAD system by 5-9% across all PE categories in the training and validation data sets. The average run time of the algorithm was only 100 seconds on a standard workstation.

  1. Fully automatic algorithm for segmenting full human diaphragm in non-contrast CT Images

    NASA Astrophysics Data System (ADS)

    Karami, Elham; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas

    2015-03-01

    The diaphragm is a sheet of muscle which separates the thorax from the abdomen and acts as the most important muscle of the respiratory system. As such, an accurate segmentation of the diaphragm not only provides key information for functional analysis of the respiratory system, but can also be used for locating other abdominal organs such as the liver. However, diaphragm segmentation is extremely challenging in non-contrast CT images due to the diaphragm's similar appearance to other abdominal organs. In this paper, we present a fully automatic algorithm for diaphragm segmentation in non-contrast CT images. The method is mainly based on a priori knowledge of human diaphragm anatomy. The diaphragm domes are in contact with the lungs and the heart, while its circumference runs along the lumbar vertebrae of the spine as well as the inferior border of the ribs and sternum. As such, the diaphragm can be delineated by segmenting these organs and then properly connecting the relevant parts of their outlines. More specifically, the bottom surfaces of the lungs and heart, the spine borders, and the ribs are delineated, leading to a set of scattered points which represent the diaphragm's geometry. Next, a B-spline filter is used to find the smoothest surface which passes through these points. This algorithm was tested on a non-contrast CT image of a lung cancer patient. The results indicate an average Hausdorff distance of 2.96 mm between the automatic and manually segmented diaphragms, which implies favourable accuracy.

  2. Magnetic resonance segmentation with the bubble wave algorithm

    NASA Astrophysics Data System (ADS)

    Cline, Harvey E.; Ludke, Siegwalt

    2003-05-01

    A new bubble wave algorithm provides automatic segmentation of three-dimensional magnetic resonance images of both the peripheral vasculature and the brain. Simple connectivity algorithms are not reliable in these medical applications because there are unwanted connections through background noise. The bubble wave algorithm restricts connectivity using curvature by testing spherical regions on a propagating active contour to eliminate noise bridges. After the user places seeds in both the selected regions and in the regions that are not desired, the method provides the critical threshold for segmentation using binary search. Today, peripheral vascular disease is diagnosed using magnetic resonance imaging with a timed contrast bolus. A new blood pool contrast agent MS-325 (Epix Medical) binds to albumin in the blood and provides high-resolution three-dimensional images of both arteries and veins. The bubble wave algorithm provides a means to automatically suppress the veins that obscure the arteries in magnetic resonance angiography. Monitoring brain atrophy is needed for trials of drugs that retard the progression of dementia. The brain volume is measured by placing seeds in both the brain and scalp to find the critical threshold that prevents connections between the brain volume and the scalp. Examples from both three-dimensional magnetic resonance brain and contrast-enhanced vascular images were segmented with minimal user intervention.
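
    The critical-threshold search can be pictured as a binary search over intensity that keeps the wanted and unwanted seed regions disconnected. The Python sketch below is an assumption-laden simplification that uses plain connected components rather than the curvature-restricted bubble wave propagation, and it assumes higher thresholds shrink connectivity.

      import numpy as np
      from scipy import ndimage

      def connected(mask, seeds_a, seeds_b):
          """True if any seed in seeds_a shares a connected component of the
          binary mask with any seed in seeds_b."""
          labels, _ = ndimage.label(mask)
          la = {labels[tuple(s)] for s in seeds_a} - {0}
          lb = {labels[tuple(s)] for s in seeds_b} - {0}
          return bool(la & lb)

      def critical_threshold(image, obj_seeds, bg_seeds, lo, hi, tol=1.0):
          """Binary search for the lowest threshold at which the object seeds
          are no longer connected to the unwanted seeds."""
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if connected(image >= mid, obj_seeds, bg_seeds):
                  lo = mid   # still bridged through noise: raise the threshold
              else:
                  hi = mid   # separated: try a lower threshold
          return hi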

  3. Joint graph cut and relative fuzzy connectedness image segmentation algorithm.

    PubMed

    Ciesielski, Krzysztof Chris; Miranda, Paulo A V; Falcão, Alexandre X; Udupa, Jayaram K

    2013-12-01

    We introduce an image segmentation algorithm, called GC(sum)(max), which combines, in a novel manner, the strengths of two popular algorithms: Relative Fuzzy Connectedness (RFC) and (standard) Graph Cut (GC). We show, both theoretically and experimentally, that GC(sum)(max) preserves the robustness of RFC with respect to seed choice (thus avoiding the "shrinking problem" of GC), while keeping GC's stronger control over the problem of "leaking through poorly defined boundary segments." The analysis of GC(sum)(max) is greatly facilitated by our recent theoretical results that RFC can be described within the framework of Generalized GC (GGC) segmentation algorithms. In our implementation of GC(sum)(max) we use, as a subroutine, a version of the RFC algorithm (based on the Image Foresting Transform) that runs (provably) in linear time with respect to the image size. This results in GC(sum)(max) running in a time close to linear. Experimental comparison of GC(sum)(max) to GC, an iterative version of RFC (IRFC), and power watershed (PW), based on a variety of medical and non-medical images, indicates superior accuracy of GC(sum)(max) over these other methods, resulting in a rank ordering of GC(sum)(max)>PW∼IRFC>GC. PMID:23880374

  4. An improved watershed image segmentation algorithm combining with a new entropy evaluation criterion

    NASA Astrophysics Data System (ADS)

    Deng, Tingquan; Li, Yanchao

    2013-03-01

    An improved watershed image segmentation algorithm is proposed to solve the problem of over-segmentation produced by the classical watershed algorithm. The new algorithm combines region growing with the classical watershed algorithm. The key to region growing lies in choosing a growing threshold that yields the desired segmentation result. An entropy evaluation criterion is constructed to determine the optimal threshold. Treating the entropy evaluation criterion as an objective function, the particle swarm optimization algorithm is employed to search for its global optimum. Experimental results show that the new algorithm effectively solves the problem of over-segmentation.
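
    A minimal particle swarm search over the scalar growing threshold could look like the following; entropy_criterion is a hypothetical callable standing in for the paper's entropy evaluation criterion, and the swarm parameters are generic defaults rather than values from the paper.

      import numpy as np

      def pso_threshold(entropy_criterion, lo, hi, n_particles=20, iters=50,
                        w=0.7, c1=1.5, c2=1.5, seed=0):
          """Standard PSO over one variable (the region-growing threshold),
          minimizing the entropy-based objective."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(lo, hi, n_particles)            # particle positions
          v = np.zeros(n_particles)                       # particle velocities
          pbest = x.copy()
          pbest_f = np.array([entropy_criterion(t) for t in x])
          gbest = pbest[pbest_f.argmin()]
          for _ in range(iters):
              r1, r2 = rng.random(n_particles), rng.random(n_particles)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([entropy_criterion(t) for t in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              gbest = pbest[pbest_f.argmin()]
          return gbest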

  5. Sinus Anatomy

    MedlinePlus

  6. Crowdsourcing the creation of image segmentation algorithms for connectomics

    PubMed Central

    Arganda-Carreras, Ignacio; Turaga, Srinivas C.; Berger, Daniel R.; Cireşan, Dan; Giusti, Alessandro; Gambardella, Luca M.; Schmidhuber, Jürgen; Laptev, Dmitry; Dwivedi, Sarvesh; Buhmann, Joachim M.; Liu, Ting; Seyedhosseini, Mojtaba; Tasdizen, Tolga; Kamentsky, Lee; Burget, Radim; Uher, Vaclav; Tan, Xiao; Sun, Changming; Pham, Tuan D.; Bas, Erhan; Uzunbas, Mustafa G.; Cardona, Albert; Schindelin, Johannes; Seung, H. Sebastian

    2015-01-01

    To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This “deep learning” approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge. PMID:26594156

  8. Bladder segmentation in MR images with watershed segmentation and graph cut algorithm

    NASA Astrophysics Data System (ADS)

    Blaffert, Thomas; Renisch, Steffen; Schadewaldt, Nicole; Schulz, Heinrich; Wiemker, Rafael

    2014-03-01

    Prostate and cervix cancer diagnosis and treatment planning based on MR images benefit from superior soft-tissue contrast compared with CT images. For these images an automatic delineation of the prostate or cervix and of organs at risk such as the bladder is highly desirable. This paper describes a method for bladder segmentation that is based on a watershed transform on high image gradient values and gray-value valleys, together with the classification of watershed regions into bladder contents and tissue by a graph cut algorithm. The obtained results are superior to those of a simple region-by-region classification.

  9. [The influence of segmental lumbosacral anatomy restoration on clinical outcomes in the operative treatment of isthmic spondylolisthesis].

    PubMed

    Pankowski, Rafał; Smoczyński, Andrzej; Jaskólski, Dawid; Rocławski, Marek; Samson, Lucjan; Piotrowski, Maciej

    2008-01-01

    The influence of restoration of segmental lumbosacral anatomy on the outcome of operative treatment of isthmic spondylolisthesis was evaluated. A series of 55 patients (29 males and 26 females) was examined. The long-term follow-up period exceeded 3 years. The Oswestry Disability Questionnaire was used to evaluate the objective clinical condition of the patients, while the subjective assessment used an analog pain score and a two-question survey concerning the perceived success of the operative treatment and willingness to undergo the operation again if necessary. The presence of radicular neurological symptoms was also evaluated. The radiological assessment consisted of the degree of spondylolisthesis, the angle of lumbosacral lordosis, and the height of the interbody space and intervertebral foramen. In conclusion, proper restoration of spinal anatomy improved the outcome of operative treatment of isthmic spondylolisthesis. Use of a metal cage for anterior interbody fusion of the lumbar spine enables long-lasting maintenance of proper anatomical relations in the fused segment. PMID:19241885

  10. Level set algorithms comparison for multi-slice CT left ventricle segmentation

    NASA Astrophysics Data System (ADS)

    Medina, Ruben; La Cruz, Alexandra; Ordoñes, Andrés.; Pesántez, Daniel; Morocho, Villie; Vanegas, Pablo

    2015-12-01

    A comparison of several level set algorithms is performed for 2D left ventricle segmentation in multi-slice CT (MSCT) images. Five algorithms are compared by calculating the Dice coefficient between the resulting segmentation contour and a reference contour traced by a cardiologist. The algorithms are also tested on images contaminated with Gaussian noise for several values of PSNR. Additionally, an algorithm for providing the initialization shape is proposed. This algorithm is based on a combination of mathematical morphology tools with watershed and region growing algorithms. Results on the set of test images are promising and suggest extension to 3D MSCT database segmentation.
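
    The comparison metric itself is simple to compute from binary masks; the helper below (an illustrative sketch, not the authors' code) returns the Dice coefficient between an algorithm's segmentation mask and the cardiologist's reference mask.

      import numpy as np

      def dice_coefficient(seg_mask, ref_mask):
          """Dice coefficient between two boolean masks."""
          seg, ref = seg_mask.astype(bool), ref_mask.astype(bool)
          intersection = np.logical_and(seg, ref).sum()
          total = seg.sum() + ref.sum()
          return 2.0 * intersection / total if total else 1.0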

  11. Breast Density Analysis Using an Automatic Density Segmentation Algorithm.

    PubMed

    Oliver, Arnau; Tortajada, Meritxell; Lladó, Xavier; Freixenet, Jordi; Ganau, Sergi; Tortajada, Lidia; Vilagran, Mariona; Sentís, Melcior; Martí, Robert

    2015-10-01

    Breast density is a strong risk factor for breast cancer. In this paper, we present an automated approach for breast density segmentation in mammographic images based on supervised pixel-based classification using textural and morphological features. The objective of the paper is not only to show the feasibility of an automatic algorithm for breast density segmentation but also to prove its potential application to the study of breast density evolution in longitudinal studies. The database used here contains three complete screening examinations, acquired 2 years apart, of 130 different patients. The approach was validated by comparing manual expert annotations with automatically obtained estimations. Transversal analysis of breast density in craniocaudal (CC) and mediolateral oblique (MLO) views of both breasts acquired in the same study showed a correlation coefficient of ρ = 0.96 between the mammographic density percentage of the left and right breasts, whereas a comparison of the two mammographic views showed a correlation of ρ = 0.95. A longitudinal study of breast density confirmed the trend that the dense tissue percentage decreases over time, although we noticed that the decrease depends on the initial amount of breast density. PMID:25720749

  12. Kidney segmentation in CT sequences using SKFCM and improved GrowCut algorithm

    PubMed Central

    2015-01-01

    Background Organ segmentation is an important step in computer-aided diagnosis and pathology detection. Accurate kidney segmentation in abdominal computed tomography (CT) sequences is an essential and crucial task for surgical planning and navigation in kidney tumor ablation. However, kidney segmentation in CT is substantially challenging because the intensity values of kidney parenchyma are similar to those of adjacent structures. Results In this paper, a coarse-to-fine method was applied to segment the kidney from CT images, consisting of two stages: rough segmentation and refined segmentation. The rough segmentation is based on a kernel fuzzy C-means algorithm with spatial information (SKFCM) and the refined segmentation is implemented with an improved GrowCut (IGC) algorithm. The SKFCM algorithm introduces a kernel function and a spatial constraint into the fuzzy c-means clustering (FCM) algorithm. The IGC algorithm makes good use of the spatial continuity of CT sequences to automatically generate the seed labels and improve the efficiency of segmentation. The experimental results on the whole dataset of abdominal CT images show that the proposed method is accurate and efficient. The method provides a sensitivity of 95.46% with a specificity of 99.82% and performs better than other related methods. Conclusions Our method achieves high accuracy in kidney segmentation and considerably reduces the time and labor required for contour delineation. In addition, the method can be extended to 3D segmentation directly without modification. PMID:26356850

  13. [Fast segmentation algorithm of high resolution remote sensing image based on multiscale mean shift].

    PubMed

    Wang, Lei-Guang; Zheng, Chen; Lin, Li-Yu; Chen, Rong-Yuan; Mei, Tian-Can

    2011-01-01

    The mean shift algorithm is a robust approach to feature space analysis and has been used widely for natural scene and medical image segmentation. However, the high computational complexity of the algorithm has constrained its application to remote sensing images, which carry massive amounts of information. A fast image segmentation algorithm is presented by extending the traditional mean shift method to the wavelet domain. To evaluate the effectiveness of the proposed algorithm, multispectral remote sensing images and synthetic images are utilized. The results show that the proposed algorithm improves the speed 5-7 times compared with the traditional mean shift method while preserving segmentation quality. PMID:21428083

  14. Algorithm for Automatic Segmentation of Nuclear Boundaries in Cancer Cells in Three-Channel Luminescent Images

    NASA Astrophysics Data System (ADS)

    Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.

    2015-09-01

    We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.

  15. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    PubMed

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony memory and the probabilities with which different values are used, so that the iteration converges to the optimal result. Accordingly, this study proposed a modified algorithm to improve the efficiency of the method. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This optimal value was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation performance of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428
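
    For orientation, the core HS loop (without the rough-set refinement described in the paper) can be written compactly. The sketch below is a generic minimizer over real-valued variables with arbitrarily chosen parameters; its output could, for instance, supply initial cluster centers for the fuzzy clustering step.

      import numpy as np

      def harmony_search(objective, lo, hi, dim, hms=10, hmcr=0.9,
                         par=0.3, bw=0.05, iters=500, seed=0):
          """Minimal harmony search: improvise a new harmony from memory with
          probability hmcr, pitch-adjust it with probability par, and replace
          the worst stored harmony whenever the new one is better."""
          rng = np.random.default_rng(seed)
          memory = rng.uniform(lo, hi, (hms, dim))
          scores = np.array([objective(h) for h in memory])
          for _ in range(iters):
              new = np.empty(dim)
              for d in range(dim):
                  if rng.random() < hmcr:
                      new[d] = memory[rng.integers(hms), d]
                      if rng.random() < par:
                          new[d] += bw * (hi - lo) * rng.uniform(-1, 1)
                  else:
                      new[d] = rng.uniform(lo, hi)
              new = np.clip(new, lo, hi)
              f = objective(new)
              worst = scores.argmax()
              if f < scores[worst]:
                  memory[worst], scores[worst] = new, f
          return memory[scores.argmin()]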

  17. Anatomy of the ostia venae hepaticae and the retrohepatic segment of the inferior vena cava.

    PubMed Central

    Camargo, A M; Teixeira, G G; Ortale, J R

    1996-01-01

    In 30 normal adult livers the retrohepatic segment of the inferior vena cava had a length of 6.7 cm and was totally encircled by liver substance in 30% of cases. Altogether 442 ostia venae hepaticae were found, averaging 14.7 per liver, and were classified as large, medium, small and minimum. The localisation of the openings was studied according to the division of the wall of the retrohepatic segment of the inferior vena cava into 16 areas. PMID:8655416

  19. A novel segmentation-based algorithm for the quantification of magnified cells.

    PubMed

    Thompson, Gemma C; Ireland, Timothy A; Larkin, Xanthe E; Larkin, Xanthe C; Arnold, Jonathon; Holsinger, R M Damian

    2014-11-01

    Cell segmentation and counting is often required in disciplines such as biological research and medical diagnosis. Manual counting, although still employed, is time consuming and sometimes unreliable. As a result, several automated cell segmentation and counting methods have been developed. A main component of automated cell counting algorithms is the image segmentation technique employed. Several such techniques were investigated and implemented in the present study. The segmentation and counting were performed on antibody-stained brain tissue sections that were magnified by a factor of 40. Commonly used methods such as the circular Hough transform and watershed segmentation were analysed. These tests were found to over-segment and therefore over-count samples. Consequently, a novel cell segmentation and counting algorithm was developed and employed. The algorithm was found to be in almost perfect agreement with the average of four manual counters, with an intraclass correlation coefficient (ICC) of 0.8. PMID:25043374

  20. Wound size measurement of lower extremity ulcers using segmentation algorithms

    NASA Astrophysics Data System (ADS)

    Dadkhah, Arash; Pang, Xing; Solis, Elizabeth; Fang, Ruogu; Godavarty, Anuradha

    2016-03-01

    Lower extremity ulcers are among the most common complications that not only affect many people around the world but also have a huge economic impact, since a large amount of resources is spent on treatment and prevention of the disease. Clinical studies have shown that a reduction in wound size of 40% within 4 weeks is an acceptable rate of progress in the healing process. Quantification of the wound size plays a crucial role in assessing the extent of healing and determining the treatment process. To date, wound healing is visually inspected and the wound size is measured from surface images. The extent of wound healing internally may differ from that at the surface. A near-infrared (NIR) optical imaging approach has been developed for non-contact imaging of wounds internally and for differentiating healing from non-healing wounds. Herein, quantitative wound size measurements from NIR and white-light images are estimated using graph cut and region growing image segmentation algorithms. The extent of wound healing from NIR imaging of lower extremity ulcers in diabetic subjects is quantified and compared across NIR and white-light images. NIR imaging and wound size measurements can play a significant role in predicting the extent of internal healing, thus allowing better treatment plans when implemented for periodic imaging in the future.
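
    Once a wound mask has been obtained from either modality, the reported size reduction follows from simple pixel bookkeeping. A sketch, assuming the in-plane pixel spacing in millimetres is known:

      import numpy as np

      def wound_area_cm2(wound_mask, pixel_spacing_mm):
          """Wound area in cm^2 from a binary mask and the (row, col) pixel
          spacing in millimetres."""
          dy, dx = pixel_spacing_mm
          return wound_mask.astype(bool).sum() * dx * dy / 100.0

      def percent_size_reduction(area_week0, area_week4):
          """Percent reduction used to judge healing (>= 40% within 4 weeks is
          cited above as acceptable progress)."""
          return 100.0 * (area_week0 - area_week4) / area_week0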

  1. Nasal Anatomy

    MedlinePlus

  2. A Logarithmic Opinion Pool Based STAPLE Algorithm For The Fusion of Segmentations With Associated Reliability Weights

    PubMed Central

    Akhondi-Asl, Alireza; Hoyte, Lennox; Lockhart, Mark E.; Warfield, Simon K.

    2014-01-01

    Pelvic floor dysfunction is very common in women after childbirth, and precise segmentation of magnetic resonance images (MRI) of the pelvic floor may facilitate diagnosis and treatment of patients. However, because of the complexity of the pelvic floor structures, manual segmentation of the pelvic floor is challenging and suffers from high inter- and intra-rater variability among expert raters. Multiple template fusion algorithms are promising techniques for segmentation of MRI in these types of applications, but these algorithms have been limited by imperfections in the alignment of each template to the target, and by template segmentation errors. In this class of segmentation techniques, a collection of templates is aligned to a target, and a new segmentation of the target is inferred. A number of algorithms have sought to improve segmentation performance by combining image intensities and template labels as two independent sources of information, carrying out decision fusion through local intensity-weighted voting schemes. This class of approach is a form of linear opinion pooling and achieves unsatisfactory performance for this application. We hypothesized that better decision fusion could be achieved by assessing the contribution of each template in comparison to a reference standard segmentation of the target image, and we developed a novel segmentation algorithm to enable automatic segmentation of MRI of the female pelvic floor. The algorithm achieves high performance by estimating and compensating for both imperfect registration of the templates to the target image and template segmentation inaccuracies. The algorithm is a generalization of the STAPLE algorithm in which a reference segmentation is estimated and used to infer an optimal weighting for fusion of templates. A local image similarity measure is used to infer a local reliability weight, which contributes to the fusion through a novel logarithmic opinion pooling. We evaluated our new algorithm in comparison

  3. Refinement-Cut: User-Guided Segmentation Algorithm for Translational Science

    PubMed Central

    Egger, Jan

    2014-01-01

    In this contribution, a semi-automatic segmentation algorithm for (medical) image analysis is presented. More precisely, the approach belongs to the category of interactive contouring algorithms, which provide real-time feedback of the segmentation result. However, even with interactive real-time contouring approaches there are always cases where the user cannot find a satisfying segmentation, e.g. due to homogeneous appearance of the object and the background, or noise inside the object. For these difficult cases the algorithm still needs additional user support. However, this additional user support should be intuitive and rapidly integrated into the segmentation process, without breaking the interactive real-time segmentation feedback. I propose a solution where the user can support the algorithm by easy and fast placement of one or more seed points to guide the algorithm to a satisfying segmentation result even in difficult cases. These additional seeds restrict the calculation of the segmentation for the algorithm but at the same time still allow the interactive real-time feedback segmentation to continue. For a practical and genuine application in translational science, the approach has been tested on medical data from the clinical routine in 2D and 3D. PMID:24893650

  4. LACK OF PROTEIN-TYROSINE SULFATION DISRUPTS PHOTORECEPTOR OUTER SEGMENT MORPHOGENESIS, RETINAL FUNCTION AND RETINAL ANATOMY

    PubMed Central

    Sherry, David M.; Murray, Anne R.; Kanan, Yogita; Arbogast, Kelsey L.; Hamilton, Robert A.; Fliesler, Steven J.; Burns, Marie E.; Moore, Kevin L.; Al-Ubaidi, Muayyad R.

    2010-01-01

    To investigate the role(s) of protein-tyrosine sulfation in the retina, we examined retinal function and structure in mice lacking tyrosylprotein sulfotransferases (TPST) 1 and 2. Tpst double knockout (DKO; Tpst1−/−/Tpst2−/−) retinas had drastically reduced electroretinographic responses, although their photoreceptors exhibited normal responses in single cell recordings. These retinas appeared normal histologically; however, the rod photoreceptors had ultrastructurally abnormal outer segments, with membrane evulsions into the extracellular space, irregular disc membrane spacing, and expanded intradiscal space. Photoreceptor synaptic terminals were disorganized in Tpst DKO retinas, but established ultrastructurally normal synapses, as did bipolar and amacrine cells; however, the morphology and organization of neuronal processes in the inner retina were abnormal. These results indicate that protein-tyrosine sulfation is essential for proper outer segment morphogenesis and synaptic function, but is not critical for overall retinal structure or synapse formation, and may serve broader functions in neuronal development and maintenance. PMID:21039965

  5. Fast and fully automatic phalanx segmentation using a grayscale-histogram morphology algorithm

    NASA Astrophysics Data System (ADS)

    Hsieh, Chi-Wen; Liu, Tzu-Chiang; Jong, Tai-Lang; Chen, Chih-Yen; Tiu, Chui-Mei; Chan, Din-Yuen

    2011-08-01

    Bone age assessment is a common radiological examination used in pediatrics to diagnose discrepancies between the skeletal and chronological age of a child; therefore, it is beneficial to develop a computer-based bone age assessment to help junior pediatricians estimate bone age easily. Unfortunately, the phalanx on radiograms is not easily separated from the background and soft tissue. Therefore, we propose a new method, called the grayscale-histogram morphology algorithm, to segment the phalanges quickly and precisely. The algorithm includes three parts: a tri-stage sieve algorithm used to eliminate the background of hand radiograms, a centroid-edge dual scanning algorithm to frame the phalanx region, and finally a segmentation algorithm based on a disk traverse-subtraction filter to segment the phalanx. Moreover, two further segmentation methods, adaptive two-mean and adaptive two-mean clustering, were performed, and their results were compared with those of the disk traverse-subtraction segmentation algorithm using five indices: misclassification error, relative foreground area error, modified Hausdorff distance, edge mismatch, and region nonuniformity. In addition, the CPU time of the three segmentation methods was discussed. The results show that our method performs better than the other two methods. Furthermore, satisfactory segmentation results were obtained with a low standard error.

  6. Feedback algorithm for simulation of multi-segmented cracks

    SciTech Connect

    Chady, T.; Napierala, L.

    2011-06-23

    In this paper, a method for obtaining a three-dimensional crack model from a radiographic image is discussed. A genetic algorithm aiming at closely simulating the crack's shape is presented. Results obtained with the genetic algorithm are compared to those achieved in the authors' previous work. The described algorithm has been tested on both simulated and real-life cracks.

  7. Automated segmentation and reconstruction of patient-specific cardiac anatomy and pathology from in vivo MRI*

    NASA Astrophysics Data System (ADS)

    Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Filgueiras-Rama, David; Pizarro, Gonzalo; Ibañez, Borja; Berenfeld, Omer; Boyers, Pamela; Gold, Jeffrey

    2012-12-01

    This paper presents an automated method to segment left ventricle (LV) tissues from functional and delayed-enhancement (DE) cardiac magnetic resonance imaging (MRI) scans using a sequential multi-step approach. First, a region of interest (ROI) is computed to create a subvolume around the LV using morphological operations and image arithmetic. From the subvolume, the myocardial contours are automatically delineated using difference of Gaussians (DoG) filters and GSV snakes. These contours are used as a mask to identify pathological tissues, such as fibrosis or scar, within the DE-MRI. The presented automated technique is able to accurately delineate the myocardium and identify the pathological tissue in patient sets. The results were validated by two expert cardiologists, and in one set the automated results are quantitatively and qualitatively compared with expert manual delineation. Furthermore, the method is patient-specific, performed on an entire patient MRI series. Thus, in addition to providing a quick analysis of individual MRI scans, the fully automated segmentation method is used for effectively tagging regions in order to reconstruct computerized patient-specific 3D cardiac models. These models can then be used in electrophysiological studies and surgical strategy planning.

  8. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional segmentation techniques cannot meet the processing and storage requirements of massive remote sensing images. This article applies cloud and parallel computing to the remote sensing image segmentation process and builds a cheap and efficient computer cluster that parallelizes the mean shift segmentation algorithm using the MapReduce model, which not only preserves segmentation quality but also improves segmentation speed and better meets real-time requirements. The MapReduce-based parallel mean shift segmentation algorithm thus demonstrates both practical significance and implementation value.

  9. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
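
    The dose-accuracy evaluation reduces to masking the Monte Carlo dose map with each segmented region. The following sketch, assuming NumPy arrays for the dose map and the organ masks, shows how mean and peak organ dose and the relative error between automatic and expert segmentations might be computed.

      import numpy as np

      def organ_dose_stats(dose_map, organ_mask):
          """Mean and peak dose inside one segmented organ region."""
          d = dose_map[organ_mask.astype(bool)]
          return d.mean(), d.max()

      def mean_dose_percent_error(auto_mask, expert_mask, dose_map):
          """Relative error in mean organ dose when the automatic segmentation
          is used instead of the expert one."""
          auto_mean, _ = organ_dose_stats(dose_map, auto_mask)
          ref_mean, _ = organ_dose_stats(dose_map, expert_mask)
          return 100.0 * abs(auto_mean - ref_mean) / ref_mean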

  10. A Class Of Iterative Thresholding Algorithms For Real-Time Image Segmentation

    NASA Astrophysics Data System (ADS)

    Hassan, M. H.

    1989-03-01

    Thresholding algorithms are developed for segmenting gray-level images under nonuniform illumination. The algorithms are based on learning models generated from recursive digital filters which yield continuously varying threshold-tracking functions. A real-time region growing algorithm, which locates the objects in the image while thresholding, is developed and implemented. The algorithms work in a raster-scan format, thus making them attractive for real-time image segmentation in situations requiring fast data throughput such as robot vision and character recognition.
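
    A toy version of such a raster-scan scheme is shown below: a first-order recursive filter tracks the slowly varying background during the scan, and each pixel is thresholded against that running estimate. The smoothing constant and offset are illustrative placeholders, not values from the paper.

      import numpy as np

      def raster_scan_threshold(image, alpha=0.05, offset=10.0):
          """Segment a gray-level image in a single raster scan: a recursive
          (IIR) filter tracks the local background, and pixels exceeding the
          tracked background by more than `offset` are marked as objects."""
          rows, cols = image.shape
          out = np.zeros(image.shape, dtype=bool)
          background = float(image[0, 0])
          for r in range(rows):
              for c in range(cols):
                  pixel = float(image[r, c])
                  is_object = pixel > background + offset
                  if not is_object:
                      # update the background estimate only on background pixels
                      background = (1.0 - alpha) * background + alpha * pixel
                  out[r, c] = is_object
          return out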

  11. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    PubMed

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series streams are one of the most common data types in the data mining field, prevalent in areas such as the stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision rather than efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by applying PRESEE to segment real-time stream datasets from ChinaFLUX sensor network data streams. PMID:23956693

  12. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    NASA Astrophysics Data System (ADS)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians' manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observer variability. In this paper, we evaluated how accurately this algorithm can estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred for assessing the accuracy of tumor segmentation methods in PET imaging.

  13. Improved fuzzy clustering algorithms in segmentation of DC-enhanced breast MRI.

    PubMed

    Kannan, S R; Ramathilagam, S; Devi, Pandiyarajan; Sathya, A

    2012-02-01

    Segmentation of medical images is a difficult and challenging problem due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. Many researchers have applied various techniques; however, fuzzy c-means (FCM) based algorithms are more effective than other methods. The objective of this work is to develop robust fuzzy clustering segmentation systems for effective segmentation of DCE breast MRI. This paper obtains the robust fuzzy clustering algorithms by incorporating kernel methods, penalty terms, tolerance of neighborhood attraction, an additional entropy term, and fuzzy parameters. The initial centers are obtained using an initialization algorithm to reduce the computational complexity and running time of the proposed algorithms. Experimental work on breast images shows that the proposed algorithms improve the similarity measurement, handle large amounts of noise, and achieve better results on data corrupted by noise and other artifacts. The clustering results of the proposed methods are validated using the silhouette method. PMID:20703716

  14. Segmentation of pomegranate MR images using spatial fuzzy c-means (SFCM) algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.

    2011-10-01

    Segmentation is one of the fundamental issues of image processing and machine vision. It plays a prominent role in a variety of image processing applications. In this paper, one important application of image processing, MRI segmentation of pomegranate, is explored. Pomegranate is a fruit with pharmacological properties such as being anti-viral and anti-cancer. Having a high-quality product in hand would be a critical factor in its marketing. The internal quality of the product is especially important in the sorting process, and the determination of qualitative features cannot be made manually. Therefore, the segmentation of the internal structures of the fruit needs to be performed as accurately as possible in the presence of noise. The fuzzy c-means (FCM) algorithm is noise-sensitive, and noisy pixels are misclassified. As a solution, this paper proposes the spatial FCM (SFCM) algorithm for segmentation of pomegranate MR images. The algorithm incorporates spatial neighborhood information into FCM and modifies the fuzzy membership function for each class. Segmentation results on original pomegranate MR images and on images corrupted by Gaussian, salt-and-pepper, and speckle noise show that the SFCM algorithm performs significantly better than the FCM algorithm. Also, after several steps of qualitative and quantitative analysis, we conclude that the SFCM algorithm with a 5×5 window performs better than with other window sizes.
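
    A compact sketch of FCM with a spatial membership correction is given below. It follows a common formulation in which each pixel's membership is reweighted by the summed memberships of its neighbors; the window size, exponents, and iteration count are illustrative assumptions rather than the paper's exact settings.

      import numpy as np
      from scipy import ndimage

      def spatial_fcm(image, n_clusters=3, m=2.0, p=1.0, q=1.0,
                      iters=30, win=5, seed=0):
          """Fuzzy c-means on pixel intensities with a spatial term: memberships
          are combined with their local (win x win) averages, which suppresses
          isolated noisy labels. Returns the hard label image."""
          rng = np.random.default_rng(seed)
          x = image.astype(float).ravel()
          centers = rng.uniform(x.min(), x.max(), n_clusters)
          for _ in range(iters):
              # standard FCM membership update
              dist = np.abs(x[None, :] - centers[:, None]) + 1e-9
              u = 1.0 / dist ** (2.0 / (m - 1.0))
              u /= u.sum(axis=0, keepdims=True)
              # spatial function: summed membership over a local window
              h = np.stack([ndimage.uniform_filter(ui.reshape(image.shape),
                                                   win).ravel() for ui in u])
              u = (u ** p) * (h ** q)
              u /= u.sum(axis=0, keepdims=True)
              # cluster center update with fuzzified memberships
              um = u ** m
              centers = (um * x).sum(axis=1) / um.sum(axis=1)
          return u.argmax(axis=0).reshape(image.shape)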

  15. Tissue segmentation of computed tomography images using a Random Forest algorithm: a feasibility study.

    PubMed

    Polan, Daniel F; Brady, Samuel L; Kaufman, Robert A

    2016-09-01

    There is a need for robust, fully automated whole-body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine learning as a means to develop a fully automated tissue segmentation tool designed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast-enhanced fluid, and bone tissue, using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge-preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features including features derived from the maximum, mean, variance, Gaussian and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21
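
    In the same spirit (though not the authors' FIJI/TWS pipeline), a per-voxel feature stack plus a random forest can be assembled with SciPy and scikit-learn. The feature set below loosely approximates the filters named in the abstract, and the variable names (train_ct, labels, labelled_idx, test_ct) are hypothetical.

      import numpy as np
      from scipy import ndimage
      from sklearn.ensemble import RandomForestClassifier

      def voxel_features(volume, radii=(1, 2, 4, 8, 16)):
          """Per-voxel features: raw intensity plus local min/max/mean/variance
          and Gaussian smoothing at several radii (a subset of TWS-style
          feature filters)."""
          vol = volume.astype(float)
          feats = [vol.ravel()]
          for r in radii:
              size = 2 * r + 1
              feats.append(ndimage.minimum_filter(vol, size).ravel())
              feats.append(ndimage.maximum_filter(vol, size).ravel())
              mean = ndimage.uniform_filter(vol, size)
              sq_mean = ndimage.uniform_filter(vol ** 2, size)
              feats.append(mean.ravel())
              feats.append((sq_mean - mean ** 2).ravel())   # local variance
              feats.append(ndimage.gaussian_filter(vol, r).ravel())
          return np.stack(feats, axis=1)

      # training on voxels with known labels (e.g. 0..6 for seven materials):
      # clf = RandomForestClassifier(n_estimators=200, max_features=2)
      # clf.fit(voxel_features(train_ct)[labelled_idx], labels[labelled_idx])
      # predicted = clf.predict(voxel_features(test_ct)).reshape(test_ct.shape)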

  16. LoAd: a locally adaptive cortical segmentation algorithm.

    PubMed

    Cardoso, M Jorge; Clarkson, Matthew J; Ridgway, Gerard R; Modat, Marc; Fox, Nick C; Ourselin, Sebastien

    2011-06-01

    Thickness measurements of the cerebral cortex can aid diagnosis and provide valuable information about the temporal evolution of diseases such as Alzheimer's, Huntington's, and schizophrenia. Methods that measure the thickness of the cerebral cortex from in-vivo magnetic resonance (MR) images rely on an accurate segmentation of the MR data. However, segmenting the cortex in a robust and accurate way still poses a challenge due to the presence of noise, intensity non-uniformity, partial volume effects, the limited resolution of MRI and the highly convoluted shape of the cortical folds. Beginning with a well-established probabilistic segmentation model with anatomical tissue priors, we propose three post-processing refinements: a novel modification of the prior information to reduce segmentation bias; introduction of explicit partial volume classes; and a locally varying MRF-based model for enhancement of sulci and gyri. Experiments performed on a new digital phantom, on BrainWeb data and on data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) show statistically significant improvements in Dice scores and PV estimation (p < 10^-3) and also increased thickness estimation accuracy when compared to three well established techniques. PMID:21316470

  18. Colony image acquisition and genetic segmentation algorithm and colony analyses

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2012-01-01

    Colony analysis is used in a large number of fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, and sterility testing. In order to reduce labor and increase analysis accuracy, many researchers and developers have worked on image analysis systems. The main problems in such systems are image acquisition, image segmentation and image analysis. In this paper, to acquire colony images with good quality, an illumination box was constructed. In the box, the distances between the lights and the dish, between the camera lens and the lights, and between the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows the segmentation problem to be treated as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All of the above visual colony parameters can be selected and combined to form new engineering parameters. The colony analysis can be applied in different applications.
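
    To make the "segmentation as global optimization" idea concrete, here is a toy genetic search for a single grayscale threshold. Otsu-style between-class variance stands in for the paper's fitness function, and all GA parameters are arbitrary assumptions.

      import numpy as np

      def ga_threshold(image, pop=30, gens=60, mut_rate=0.1, seed=0):
          """Toy genetic search for one segmentation threshold, maximizing
          between-class variance (a stand-in fitness function)."""
          rng = np.random.default_rng(seed)
          x = image.ravel().astype(float)
          lo, hi = x.min(), x.max()

          def fitness(t):
              fg, bg = x[x > t], x[x <= t]
              if fg.size == 0 or bg.size == 0:
                  return 0.0
              wf, wb = fg.size / x.size, bg.size / x.size
              return wf * wb * (fg.mean() - bg.mean()) ** 2

          population = rng.uniform(lo, hi, pop)
          for _ in range(gens):
              scores = np.array([fitness(t) for t in population])
              # tournament selection of parents
              idx = rng.integers(pop, size=(pop, 2))
              parents = np.where(scores[idx[:, 0]] > scores[idx[:, 1]],
                                 population[idx[:, 0]], population[idx[:, 1]])
              # arithmetic crossover with a shuffled partner, then mutation
              partners = rng.permutation(parents)
              children = 0.5 * (parents + partners)
              mutate = rng.random(pop) < mut_rate
              children[mutate] += rng.normal(0.0, 0.05 * (hi - lo), mutate.sum())
              population = np.clip(children, lo, hi)
          scores = np.array([fitness(t) for t in population])
          return population[scores.argmax()]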

  19. Coupling Regular Tessellation with Rjmcmc Algorithm to Segment SAR Image with Unknown Number of Classes

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Li, Y.; Zhao, Q. H.

    2016-06-01

    This paper presents a Synthetic Aperture Radar (SAR) image segmentation approach with an unknown number of classes, based on regular tessellation and the Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm. First of all, the image domain is partitioned into a set of blocks by regular tessellation. The image is modeled on the assumption that the intensities of the pixels in each homogeneous region follow an identical and independent Gamma distribution. Under the Bayesian paradigm, the posterior distribution is obtained to build the region-based image segmentation model. Then, an RJMCMC algorithm is designed to simulate from the segmentation model in order to determine the number of homogeneous regions and segment the image. To further improve the segmentation accuracy, a refinement operation is performed. To illustrate the feasibility and effectiveness of the proposed approach, two real SAR images are tested.

  20. Optree: a learning-based adaptive watershed algorithm for neuron segmentation.

    PubMed

    Uzunbaş, Mustafa Gökhan; Chen, Chao; Metaxas, Dimitris

    2014-01-01

    We present a new algorithm for automatic and interactive segmentation of neuron structures from electron microscopy (EM) images. Our method selects a collection of nodes from the watershed merging tree as the proposed segmentation. This is achieved by building a conditional random field (CRF) whose underlying graph is the merging tree. The maximum a posteriori (MAP) prediction of the CRF is the output segmentation. Our algorithm outperforms state-of-the-art methods. Both the inference and the training are very efficient because the graph is tree-structured. Furthermore, we develop an interactive segmentation framework which selects uncertain regions for a user to proofread. The uncertainty is measured by the marginals of the graphical model. Based on user corrections, our framework modifies the merging tree and thus improves the segmentation globally. PMID:25333106

  1. Parallel Implementation of the Recursive Approximation of an Unsupervised Hierarchical Segmentation Algorithm. Chapter 5

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Plaza, Antonio J. (Editor); Chang, Chein-I. (Editor)

    2008-01-01

    The hierarchical image segmentation algorithm (referred to as HSEG) is a hybrid of hierarchical step-wise optimization (HSWO) and constrained spectral clustering that produces a hierarchical set of image segmentations. HSWO is an iterative approach to region growing segmentation in which the optimal image segmentation is found at N(sub R) regions, given a segmentation at N(sub R+1) regions. HSEG's addition of constrained spectral clustering makes it a computationally intensive algorithm for all but the smallest of images. To counteract this, a computationally efficient recursive approximation of HSEG (called RHSEG) has been devised. Further improvements in processing speed are obtained through a parallel implementation of RHSEG. This chapter describes this parallel implementation and demonstrates its computational efficiency on a Landsat Thematic Mapper test scene.

  2. Ear feature region detection based on a combined image segmentation algorithm-KRM

    NASA Astrophysics Data System (ADS)

    Jiang, Jingying; Zhang, Hao; Zhang, Qi; Lu, Junsheng; Ma, Zhenhe; Xu, Kexin

    2014-02-01

    The Scale Invariant Feature Transform (SIFT) algorithm is widely used for ear feature matching and recognition. However, the algorithm is usually affected by interference from non-target areas within the image, which degrades the matching and recognition of ear features. To solve this problem, a combined image segmentation algorithm, KRM, is introduced in this paper as a pre-processing step for human ear recognition. First, the target ear areas are extracted by the KRM algorithm, and then the SIFT algorithm is applied to feature detection and matching. The KRM algorithm follows three steps: (1) the image is preliminarily segmented into foreground (target) and background areas using the K-means clustering algorithm; (2) a region growing method merges the over-segmented areas; (3) morphological erosion filtering is applied to obtain the final segmented regions. The experimental results show that the KRM method effectively improves the accuracy and robustness of SIFT-based ear feature matching and recognition.
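
    An approximate KRM-style pipeline can be put together from standard components. In the sketch below, k-means runs on pixel intensities, the region-growing merge is replaced by small-component removal (a simplifying assumption), and a final erosion cleans the mask; the cluster count, minimum region size, and erosion depth are placeholders.

      import numpy as np
      from scipy import ndimage
      from sklearn.cluster import KMeans

      def krm_like_segmentation(gray, k=2, min_region=500, erode_iter=2, seed=0):
          """Approximate KRM: (1) k-means on pixel intensities, (2) merge of
          over-segmented fragments approximated by removing small connected
          components, (3) morphological erosion of the resulting mask."""
          labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(
              gray.reshape(-1, 1).astype(float)).reshape(gray.shape)
          # assume the brighter cluster is the foreground (ear) region
          fg = max(range(k), key=lambda c: gray[labels == c].mean())
          mask = labels == fg
          # drop small fragments (stand-in for the region-growing merge step)
          comp, n = ndimage.label(mask)
          sizes = ndimage.sum(mask, comp, range(1, n + 1))
          keep = np.isin(comp, 1 + np.flatnonzero(sizes >= min_region))
          # final morphological erosion
          return ndimage.binary_erosion(keep, iterations=erode_iter)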

  3. Color tongue image segmentation using fuzzy Kohonen networks and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Aimin; Shen, Lansun; Zhao, Zhongxu

    2000-04-01

    A Tongue Imaging and Analysis System is being developed to acquire digital color tongue images and to automatically classify and quantify tongue characteristics for traditional Chinese medical examinations. An important processing step is to segment the tongue pixels into two categories: the tongue body (no coating) and the coating. In this paper, we present a two-stage clustering algorithm that combines Fuzzy Kohonen Clustering Networks and a Genetic Algorithm for the segmentation, whose major concern is to increase the interclass distance while decreasing the intraclass distance. Experimental results confirm the effectiveness of this algorithm.

  4. A Review of Algorithms for Segmentation of Optical Coherence Tomography from Retina

    PubMed Central

    Kafieh, Raheleh; Rabbani, Hossein; Kermani, Saeed

    2013-01-01

    Optical coherence tomography (OCT) is a recently established imaging technique to describe different information about the internal structures of an object and to image various aspects of biological tissues. OCT image segmentation is mostly introduced on retinal OCT to localize the intra-retinal boundaries. Here, we review some of the important image segmentation methods for processing retinal OCT images. We may classify the OCT segmentation approaches into five distinct groups according to the image domain subjected to the segmentation algorithm. Current researches in OCT segmentation are mostly based on improving the accuracy and precision, and on reducing the required processing time. There is no doubt that current 3-D imaging modalities are now moving the research projects toward volume segmentation along with 3-D rendering and visualization. It is also important to develop robust methods capable of dealing with pathologic cases in OCT imaging. PMID:24083137

  5. Efficient Algorithms for Analyzing Segmental Duplications, Deletions, and Inversions in Genomes

    NASA Astrophysics Data System (ADS)

    Kahn, Crystal L.; Mozes, Shay; Raphael, Benjamin J.

    Segmental duplications, or low-copy repeats, are common in mammalian genomes. In the human genome, most segmental duplications are mosaics consisting of pieces of multiple other segmental duplications. This complex genomic organization complicates analysis of the evolutionary history of these sequences. Earlier, we introduced a genomic distance, called duplication distance, that computes the most parsimonious way to build a target string by repeatedly copying substrings of a source string. We also showed how to use this distance to describe the formation of segmental duplications according to a two-step model that has been proposed to explain human segmental duplications. Here we describe polynomial-time exact algorithms for several extensions of duplication distance including models that allow certain types of substring deletions and inversions. These extensions will permit more biologically realistic analyses of segmental duplications in genomes.

  6. A Comparison of Lung Nodule Segmentation Algorithms: Methods and Results from a Multi-institutional Study.

    PubMed

    Kalpathy-Cramer, Jayashree; Zhao, Binsheng; Goldgof, Dmitry; Gu, Yuhua; Wang, Xingwei; Yang, Hao; Tan, Yongqiang; Gillies, Robert; Napel, Sandy

    2016-08-01

    Tumor volume estimation, as well as accurate and reproducible border segmentation in medical images, is important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of lung nodule segmentation algorithms by assessing several measurements of spatial overlap and volume measurement. Nodule sizes varied from 29 μl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05), underscoring the need to assess performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies. PMID:26847203

  7. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    PubMed Central

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions with different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. To overcome this weakness, we proposed a new clustering algorithm, the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, the proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that the proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but can also identify clusters that are adjacent, overlapping, or under background noise. Finally, we compared the LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset of over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can extract more knowledge from it. PMID:26221133

  8. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.

    PubMed

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions with different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. To overcome this weakness, we proposed a new clustering algorithm, the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, the proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that the proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but can also identify clusters that are adjacent, overlapping, or under background noise. Finally, we compared the LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset of over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can extract more knowledge from it. PMID:26221133
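
    As a small illustration of the spherical-cluster assumption discussed in the two records above (this is not the LASS method itself), one can stretch Gaussian blobs anisotropically and check how well k-means recovers them; scikit-learn is assumed here and the transformation is arbitrary:

```python
# Illustration of the k-means assumption mentioned above: clusters drawn from
# anisotropic (non-spherical) Gaussians are often recovered poorly by k-means.
# Uses scikit-learn; this is not an implementation of the LASS algorithm.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, y_true = make_blobs(n_samples=600, centers=3, random_state=170)
X = X @ np.array([[0.6, -0.6], [-0.4, 0.85]])   # shear: clusters become elongated

y_pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("adjusted Rand index:", adjusted_rand_score(y_true, y_pred))
# A low score indicates that the spherical-covariance assumption is violated.
```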

  9. Advanced Dispersed Fringe Sensing Algorithm for Coarse Phasing Segmented Mirror Telescopes

    NASA Technical Reports Server (NTRS)

    Spechler, Joshua A.; Hoppe, Daniel J.; Sigrist, Norbert; Shi, Fang; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.

    2013-01-01

    The performance of a telescope with a segmented primary mirror strongly depends on how well the primary mirror segments can be phased. Segment mirror phasing, a critical step of segment mirror alignment, requires the ability to sense and correct the relative pistons between segments from up to a few hundred microns down to a fraction of a wavelength in order to bring the mirror system to its full diffraction capability. One such process for phasing primary mirror segments in the axial piston direction is dispersed fringe sensing (DFS). When sampling the aperture of a telescope, using auto-collimating flats (ACFs) is more economical, and DFS technology can also be used to co-phase the ACFs. DFS is essentially a signal fitting and processing operation and an elegant method of coarse phasing segmented mirrors. DFS performance accuracy depends on careful calibration of the system as well as other factors such as internal optical alignment, system wavefront errors, and detector quality. Novel improvements to the algorithm have led to substantial enhancements in DFS performance. The Advanced Dispersed Fringe Sensing (ADFS) algorithm is designed to reduce the sensitivity to calibration errors by determining the optimal fringe extraction line. In essence, the ADFS algorithm applies an angular dithering procedure to the extraction line, combines this dithering with an error function, and minimizes the phase term of the fitted signal.

  10. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    PubMed

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-01-01

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (< 1 ms) with a satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of
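
    One of the reported metrics, the target registration error, is simply the distance between ROI centroids. A minimal sketch of that measurement, assuming 2-D boolean masks and isotropic pixel spacing (function names are illustrative):

```python
# Hedged sketch: target registration error (TRE) as the distance between the
# centroid of a manual ROI and the centroid of an automatically segmented ROI.
import numpy as np

def centroid(mask):
    """Centroid (row, col) of a boolean mask."""
    coords = np.argwhere(mask)
    return coords.mean(axis=0)

def target_registration_error(manual_roi, auto_roi, pixel_spacing_mm=1.0):
    return np.linalg.norm(centroid(manual_roi) - centroid(auto_roi)) * pixel_spacing_mm
```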

  11. Fuzzy C-Means Algorithm for Segmentation of Aerial Photography Data Obtained Using Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Akinin, M. V.; Akinina, N. V.; Klochkov, A. Y.; Nikiforov, M. B.; Sokolova, A. V.

    2015-05-01

    The report reviews the fuzzy c-means algorithm for image segmentation, estimates the quality of its results using the Xie-Beni criterion, and presents experimental studies of the algorithm in the context of producing detailed two-dimensional maps from imagery acquired by unmanned aerial vehicles. Based on the experimental results, it is concluded that the algorithm is applicable to the interpretation of images obtained by aerial photography. The considered algorithm can partition the original image into a large number of segments (clusters) in a relatively short time, which is achieved by modifying the original k-means algorithm to operate in a fuzzy setting.
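
    A minimal from-scratch sketch of the fuzzy c-means iteration and the Xie-Beni validity index mentioned above, written with numpy; the fuzzifier, cluster count, and iteration budget are assumptions, not the authors' settings:

```python
# Minimal fuzzy c-means on intensity samples plus the Xie-Beni index.
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, n_iter=100, seed=0):
    """x: (n_samples, n_features) data; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                                   # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=0)
    return centers, u

def xie_beni(x, centers, u, m=2.0):
    """Xie-Beni index: lower values mean more compact, better separated clusters."""
    d2 = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=2) ** 2
    compactness = ((u ** m) * d2).sum()
    separation = min(np.linalg.norm(ci - cj) ** 2
                     for i, ci in enumerate(centers)
                     for j, cj in enumerate(centers) if i < j)
    return compactness / (len(x) * separation)

# Usage on a grayscale aerial image (hypothetical variable `image`):
# pixels = image.reshape(-1, 1).astype(float)
# centers, u = fuzzy_c_means(pixels, c=4)
# labels = u.argmax(axis=0).reshape(image.shape)
```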

  12. NASARC - NUMERICAL ARC SEGMENTATION ALGORITHM FOR A RADIO CONFERENCE

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.

    1994-01-01

    NASARC was developed from the general planning principles and decisions of both sessions of the World Administrative Radio Conference on the Use of the Geostationary Satellite Orbit and on the Planning of Space Services Utilizing It (WARC-85, WARC-88). NASARC was written to help countries satisfy requirements for nation-wide Fixed Satellite services from at least one orbital position within a predetermined arc. The NASARC-generated predetermined arcs are each based on a common arc segment visible to a group of compatible service areas, and provide a means of generating a highly flexible allotment plan with a reduced need for coordination among administrations. The selection of particular groupings of service areas and their associated predetermined arcs is made according to a heuristic approach using several figures of merit designed to confront the most difficult allotment problems. NASARC attempts to select groupings and predetermined arc sizes so that the requirements of all administrations are met before the available orbital arc is exhausted. The predetermined arcs allow considerable freedom of choice in the positioning of space stations for all members of any grouping. The approach to allotment planning for which NASARC was designed consists of two phases. The first is the use of NASARC to identify predetermined arc segments common to groups of administrations. Those administrations within a group and sharing a common predetermined arc segment would be able to position their individual space stations at any one of a number of orbital positions within the predetermined arc. The second phase involves the use of a plan synthesis program (such as the ORBIT program resident at the International Frequency Registration Board in Geneva, Switzerland) to identify example scenarios of specific space station placements. NASARC software is modular, and consists of several programs to be run in sequence. The grouping module, NASARC1, identifies compatible groups of several

  13. Side scan sonar image segmentation based on neutrosophic set and quantum-behaved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhu; Wang, Xiao; Zhang, Hongmei; Hu, Jun; Jian, Xiaomin

    2016-06-01

    To perform side scan sonar (SSS) image segmentation accurately and efficiently, a novel segmentation algorithm based on the neutrosophic set (NS) and quantum-behaved particle swarm optimization (QPSO) is proposed in this paper. First, the neutrosophic subset images are obtained by transforming the input image into the NS domain. Then, a co-occurrence matrix is constructed from these subset images, and the entropy of the gray-level image is defined to serve as the fitness function of the QPSO algorithm. The optimal two-dimensional segmentation threshold vector is then quickly obtained by QPSO. Finally, the contours of the target of interest are segmented with the threshold vector and extracted by mathematical morphology operations. To further improve segmentation efficiency, single-threshold segmentation, an alternative algorithm, is recommended for shadow segmentation by considering the gray-level characteristics of the shadow. The accuracy and efficiency of the proposed algorithm are assessed with SSS image segmentation experiments.

  14. Fuzzy Control Hardware for Segmented Mirror Phasing Algorithm

    NASA Technical Reports Server (NTRS)

    Roth, Elizabeth

    1999-01-01

    This paper presents a possible implementation of a control model developed to phase a system of segmented mirrors, with a PAMELA configuration, using analog fuzzy hardware. Presently, the model is designed for piston control only, but with the foresight that the parameters of tip and tilt will be integrated eventually. The proposed controller uses analog circuits to exhibit a voltage-mode singleton fuzzifier, a mixed-mode inference engine, and a current-mode defuzzifier. The inference engine exhibits multiplication circuits that perform the algebraic product composition through the use of operational transconductance amplifiers rather than the typical min-max circuits. Additionally, the knowledge base, containing exemplar data gained a priori through simulation, interacts via a digital interface.

  15. A graph-based segmentation algorithm for tree crown extraction using airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Strîmbu, Victor F.; Strîmbu, Bogdan M.

    2015-06-01

    This work proposes a segmentation method that isolates individual tree crowns using airborne LiDAR data. The proposed approach captures the topological structure of the forest in hierarchical data structures, quantifies topological relationships of tree crown components in a weighted graph, and finally partitions the graph to separate individual tree crowns. This novel bottom-up segmentation strategy is based on several quantifiable cohesion criteria that act as a measure of belief on whether two crown components belong to the same tree. Added flexibility is provided by a set of weights that balance the contribution of each criterion, effectively allowing the algorithm to adjust to different forest structures. The LiDAR data used for testing was acquired in Louisiana, inside the Clear Creek Wildlife management area, with a RIEGL LMS-Q680i airborne laser scanner. Three 1 ha forest areas of different conditions and increasing complexity were segmented and assessed in terms of an accuracy index (AI) accounting for both omission and commission. The three areas were segmented under optimum parameterization with an AI of 98.98%, 92.25% and 74.75% respectively, revealing the excellent potential of the algorithm. When segmentation parameters are optimized locally using plot references, the AI drops to 98.23%, 89.24%, and 68.04% on average with plot sizes of 1000 m2 and 97.68%, 87.78% and 61.1% on average with plot sizes of 500 m2. More than introducing a segmentation algorithm, this paper proposes a powerful framework featuring flexibility to support a series of segmentation methods, including some of those recurring in the tree segmentation literature. The segmentation method may extend its applications to any data of topological nature or data that has a topological equivalent.

  16. Prostate segmentation algorithm using dyadic wavelet transform and discrete dynamic contour

    NASA Astrophysics Data System (ADS)

    Chiu, Bernard; Freeman, George H.; Salama, M. M. A.; Fenster, Aaron

    2004-11-01

    Knowing the location and the volume of the prostate is important for ultrasound-guided prostate brachytherapy, a commonly used prostate cancer treatment method. The prostate boundary must be segmented before a dose plan can be obtained. However, manual segmentation is arduous and time consuming. This paper introduces a semi-automatic segmentation algorithm based on the dyadic wavelet transform (DWT) and the discrete dynamic contour (DDC). A spline interpolation method is used to determine the initial contour based on four user-defined initial points. The DDC model then refines the initial contour based on the approximate coefficients and the wavelet coefficients generated using the DWT. The DDC model is executed under two settings. The coefficients used in these two settings are derived using smoothing functions with different sizes. A selection rule is used to choose the best contour based on the contours produced in these two settings. The accuracy of the final contour produced by the proposed algorithm is evaluated by comparing it with the manual contour outlined by an expert observer. A total of 114 2D TRUS images taken for six different patients scheduled for brachytherapy were segmented using the proposed algorithm. The average difference between the contour segmented using the proposed algorithm and the manually outlined contour is less than 3 pixels.

  17. Prostate segmentation algorithm using dyadic wavelet transform and discrete dynamic contour.

    PubMed

    Chiu, Bernard; Freeman, George H; Salama, M M A; Fenster, Aaron

    2004-11-01

    Knowing the location and the volume of the prostate is important for ultrasound-guided prostate brachytherapy, a commonly used prostate cancer treatment method. The prostate boundary must be segmented before a dose plan can be obtained. However, manual segmentation is arduous and time consuming. This paper introduces a semi-automatic segmentation algorithm based on the dyadic wavelet transform (DWT) and the discrete dynamic contour (DDC). A spline interpolation method is used to determine the initial contour based on four user-defined initial points. The DDC model then refines the initial contour based on the approximate coefficients and the wavelet coefficients generated using the DWT. The DDC model is executed under two settings. The coefficients used in these two settings are derived using smoothing functions with different sizes. A selection rule is used to choose the best contour based on the contours produced in these two settings. The accuracy of the final contour produced by the proposed algorithm is evaluated by comparing it with the manual contour outlined by an expert observer. A total of 114 2D TRUS images taken for six different patients scheduled for brachytherapy were segmented using the proposed algorithm. The average difference between the contour segmented using the proposed algorithm and the manually outlined contour is less than 3 pixels. PMID:15584529
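
    The two records above (duplicate listings of the same paper) describe a semi-automatic initialization in which a closed initial contour is built by spline interpolation through four user-defined points. A minimal sketch of that initialization step only, assuming scipy and a periodic cubic spline (the DWT/DDC refinement of the paper is not reproduced):

```python
# Hedged sketch of the initialization step: a closed initial contour obtained by
# spline interpolation through four user-defined points.
import numpy as np
from scipy.interpolate import splev, splprep

def initial_contour(points, n_samples=200):
    """points: (4, 2) array of user-clicked (x, y) points in order around the gland."""
    pts = np.vstack([points, points[:1]])              # close the polygon
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=0, per=True)
    u = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    x, y = splev(u, tck)
    return np.column_stack([x, y])

# contour = initial_contour(np.array([[50, 30], [80, 60], [50, 95], [20, 60]]))
```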

  18. An improved vein image segmentation algorithm based on SLIC and Niblack threshold method

    NASA Astrophysics Data System (ADS)

    Zhou, Muqing; Wu, Zhaoguo; Chen, Difan; Zhou, Ya

    2013-12-01

    Subcutaneous vein images are often obtained by exploiting the difference in near-infrared (NIR) light absorption between veins and the surrounding tissue under NIR illumination. Vein images of high quality are critical to biometric identification, which requires segmenting the vein skeleton from the original images accurately. To address this issue, we propose a vein image segmentation method based on the simple linear iterative clustering (SLIC) method and the Niblack threshold method. The SLIC method is used to pre-segment the original images into superpixels, and all the information in the superpixels is transferred into a matrix (Block Matrix). Subsequently, the Niblack thresholding method is adopted to binarize the Block Matrix. Finally, segmented vein images are obtained from the binarized Block Matrix. In several experiments, more of the vein skeleton is revealed than with the traditional Niblack segmentation algorithm.
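
    A hedged sketch of the two-stage idea described above, SLIC superpixels followed by Niblack thresholding of the per-superpixel mean intensities, using scikit-image (version 0.19 or later assumed for the grayscale `channel_axis` argument); the parameter values are assumptions, not the authors' settings:

```python
import numpy as np
from skimage.filters import threshold_niblack
from skimage.segmentation import slic

def segment_veins(nir_image):
    """nir_image: 2-D float array in [0, 1] from NIR illumination."""
    # Pre-segment into superpixels (grayscale input, scikit-image >= 0.19).
    labels = slic(nir_image, n_segments=800, compactness=0.1, channel_axis=None)
    # Replace each superpixel by its mean intensity (the "Block Matrix" idea).
    block = np.zeros_like(nir_image)
    for lab in np.unique(labels):
        block[labels == lab] = nir_image[labels == lab].mean()
    # Local Niblack threshold, then keep the darker (vein) pixels.
    thresh = threshold_niblack(block, window_size=25, k=0.2)
    return block < thresh
```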

  19. Nonlinear physical segmentation algorithm for determining the layer boundary from lidar signal.

    PubMed

    Mao, Feiyue; Li, Jun; Li, Chen; Gong, Wei; Min, Qilong; Wang, Wei

    2015-11-30

    Layer boundary (base and top) detection is a basic problem in lidar data processing, the results of which are used as inputs of optical properties retrieval. However, traditional algorithms not only require manual intervention but also rely heavily on the signal-to-noise ratio. Therefore, we propose a robust and automatic algorithm for layer detection based on a novel algorithm for lidar signal segmentation and representation. Our algorithm is based on the lidar equation and avoids most of the limitations of the traditional algorithms. Testing of the simulated and real signals shows that the algorithm is able to position the base and top accurately even with a low signal to noise ratio. Furthermore, the results of the classification are accurate and satisfactory. The experimental results confirm that our algorithm can be used for automatic detection, retrieval, and analysis of lidar data sets. PMID:26698806

  20. A Pulse Coupled Neural Network Segmentation Algorithm for Reflectance Confocal Images of Epithelial Tissue

    PubMed Central

    Malik, Bilal H.; Jabbour, Joey M.; Maitland, Kristen C.

    2015-01-01

    Automatic segmentation of nuclei in reflectance confocal microscopy images is critical for visualization and rapid quantification of nuclear-to-cytoplasmic ratio, a useful indicator of epithelial precancer. Reflectance confocal microscopy can provide three-dimensional imaging of epithelial tissue in vivo with sub-cellular resolution. Changes in nuclear density or nuclear-to-cytoplasmic ratio as a function of depth obtained from confocal images can be used to determine the presence or stage of epithelial cancers. However, low nuclear to background contrast, low resolution at greater imaging depths, and significant variation in reflectance signal of nuclei complicate segmentation required for quantification of nuclear-to-cytoplasmic ratio. Here, we present an automated segmentation method to segment nuclei in reflectance confocal images using a pulse coupled neural network algorithm, specifically a spiking cortical model, and an artificial neural network classifier. The segmentation algorithm was applied to an image model of nuclei with varying nuclear to background contrast. Greater than 90% of simulated nuclei were detected for contrast of 2.0 or greater. Confocal images of porcine and human oral mucosa were used to evaluate application to epithelial tissue. Segmentation accuracy was assessed using manual segmentation of nuclei as the gold standard. PMID:25816131

  1. A new method for mesoscale eddy detection based on watershed segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Qin, Lijuan; Dong, Qing; Xue, Cunjin; Hou, Xueyan; Song, Wanjiao

    2014-11-01

    Mesoscale eddies are widely found in the ocean. They play important roles in heat transport, momentum transport, ocean circulation and so on. The automatic detection of mesoscale eddies from satellite remote sensing images is an important research topic. Some image processing methods, such as the Canny operator and the Hough transform, have been applied to identify mesoscale eddies, but the detection accuracy was not ideal. This paper describes a new algorithm based on the watershed segmentation algorithm for automatic detection of mesoscale eddies from sea level anomaly (SLA) images. The watershed segmentation algorithm has the disadvantage of over-segmentation, so it is important to select appropriate markers. In this study, markers were selected from the reconstructed SLA image and used to modify the gradient image. Two parameters, eddy radius and amplitude, were then used to filter the segmentation results. The method was tested on the Northwest Pacific using TOPEX/Poseidon altimeter data. The results are encouraging, showing that this algorithm is applicable to mesoscale eddy detection and has good accuracy. The algorithm responds well to weak edges, and the extracted eddies have complete, continuous boundaries. The eddy boundaries generally coincide with closed contours of SSH.
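
    A hedged sketch of marker-controlled watershed in the spirit described above: markers are restricted to sufficiently deep SLA minima (here via the h-minima transform, which is based on morphological reconstruction), which limits over-segmentation. scikit-image and scipy are assumed; the depth threshold is illustrative, and only cyclonic (SLA-minimum) eddies are handled, with maxima treated analogously:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_minima
from skimage.segmentation import watershed

def detect_eddy_regions(sla, h=0.05):
    """sla: 2-D sea level anomaly field; h: minimum depth, same units as sla."""
    # Markers: only minima deeper than h survive, limiting over-segmentation.
    markers, _ = ndi.label(h_minima(sla, h))
    gradient = np.hypot(*np.gradient(sla))   # gradient magnitude of the SLA field
    labels = watershed(gradient, markers)
    # The resulting regions would still be filtered by radius and amplitude.
    return labels
```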

  2. Performance evaluation of a contextual news story segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Janvier, Bruno; Bruno, Eric; Marchand-Maillet, Stephane; Pun, Thierry

    2006-01-01

    The problem of semantic video structuring is vital for automated management of large video collections. The goal is to automatically extract from the raw data the inner structure of a video collection; so that a whole new range of applications to browse and search video collections can be derived out of this high-level segmentation. To reach this goal, we exploit techniques that consider the full spectrum of video content; it is fundamental to properly integrate technologies from the fields of computer vision, audio analysis, natural language processing and machine learning. In this paper, a multimodal feature vector providing a rich description of the audio, visual and text modalities is first constructed. Boosted Random Fields are then used to learn two types of relationships: between features and labels and between labels associated with various modalities for improved consistency of the results. The parameters of this enhanced model are found iteratively by using two successive stages of Boosting. We experimented using the TRECvid corpus and show results that validate the approach over existing studies.

  3. Novel algorithm by low complexity filter on retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Rostampour, Samad

    2011-10-01

    This article presents a new method to detect blood vessels in the retina from digital images. Retinal vessel segmentation is important for detecting side effects of diabetes, because diabetes can lead to the formation of new capillaries that are very brittle. The research has been done in two phases: preprocessing and processing. The preprocessing phase applies a new filter that produces a suitable output: vessels appear dark on a white background, giving good contrast between vessels and background. The filter has very low complexity, and extraneous image content is eliminated. The processing phase uses a Bayesian supervised classification method, which uses the mean and variance of pixel intensities to calculate class probabilities. Finally, the image pixels are divided into two classes: vessels and background. The images used are from the DRIVE database, on which the method achieves an average accuracy of 95 percent. The method was also applied to a sample outside the DRIVE database exhibiting retinopathy, and a very good result was obtained.
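
    A hedged sketch of the Bayesian pixel classifier described above: each class (vessel, background) is modelled by the mean and variance of its training intensities, and every pixel is assigned to the class with the higher posterior. Written with numpy; the prior and the interface are assumptions, not the author's exact implementation:

```python
import numpy as np

def fit_class(intensities):
    """Gaussian parameters (mean, variance) for one class from training samples."""
    return intensities.mean(), intensities.var() + 1e-12

def log_gaussian(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def classify_pixels(image, vessel_samples, background_samples, prior_vessel=0.1):
    mv, vv = fit_class(vessel_samples)
    mb, vb = fit_class(background_samples)
    log_post_vessel = log_gaussian(image, mv, vv) + np.log(prior_vessel)
    log_post_backgr = log_gaussian(image, mb, vb) + np.log(1 - prior_vessel)
    return log_post_vessel > log_post_backgr          # boolean vessel mask
```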

  4. A fast algorithm for the phonemic segmentation of continuous speech

    NASA Astrophysics Data System (ADS)

    Smidt, D.

    1986-04-01

    The method of differential learning (DL method) was applied to the fast phonemic classification of acoustic speech spectra. The method was also tested with a simple algorithm for continuous speech recognition. In every learning step of the DL method, only the single pattern component that deviates most from the reference value is used for a new rule. Several rules of this type were connected in a conjunctive or disjunctive way. Tests with a single speaker demonstrate good classification capability and very high speed. The automatic inclusion of additional features selected according to their relevance is discussed. It is shown that there exists a correspondence between processes related to the DL method and pattern recognition in living beings, with their ability for generalization and differentiation.

  5. Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge.

    PubMed

    Litjens, Geert; Toth, Robert; van de Ven, Wendy; Hoeks, Caroline; Kerkstra, Sjoerd; van Ginneken, Bram; Vincent, Graham; Guillard, Gwenael; Birbeck, Neil; Zhang, Jindang; Strand, Robin; Malmberg, Filip; Ou, Yangming; Davatzikos, Christos; Kirschner, Matthias; Jung, Florian; Yuan, Jing; Qiu, Wu; Gao, Qinquan; Edwards, Philip Eddie; Maan, Bianca; van der Heijden, Ferdinand; Ghose, Soumya; Mitra, Jhimli; Dowling, Jason; Barratt, Dean; Huisman, Henkjan; Madabhushi, Anant

    2014-02-01

    Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we will discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary and volume based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with overall scores of 85.72 and 84.29. Both algorithms were significantly better than all other algorithms in the challenge (p<0.05) and had efficient implementations with run times of 8 min and 3 s per case, respectively. Overall, active appearance model based approaches seemed to outperform other approaches like multi

  6. An evolutionary algorithm for the segmentation of muscles and bones of the lower limb.

    NASA Astrophysics Data System (ADS)

    López, Marco A.; Braidot, A.; Sattler, Aníbal; Schira, Claudia; Uriburu, E.

    2016-04-01

    In the field of medical image segmentation, muscle segmentation is a problem that has not been fully resolved yet. This is because the basic assumption of image segmentation, that a visual distinction should exist between the different structures to be identified, is violated. Since the tissue composition of two different muscles is the same, it becomes extremely difficult to distinguish one from the other when they are adjacent. We have developed an evolutionary algorithm which selects the set and the sequence of morphological operators that best segments muscles and bones from an MRI image. The results show that the developed algorithm presents average sensitivity values close to 75% in the segmentation of the different processed muscles and bones. It also presents average specificity values close to 93% for the same structures. Furthermore, the algorithm can identify muscles that are closely located along the path from their origin point to their insertions, with very low error values (below 7%).
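
    A hedged toy sketch of the idea described above: a small genetic algorithm that searches for a sequence of morphological operators maximizing an overlap score against a reference mask. The operator pool, the Dice fitness, and all parameters are assumptions for illustration, not the authors' algorithm:

```python
import random
import numpy as np
from scipy import ndimage as ndi

# Candidate morphological operators applied to a boolean mask.
OPS = [
    lambda m: ndi.binary_opening(m, iterations=2),
    lambda m: ndi.binary_closing(m, iterations=2),
    lambda m: ndi.binary_dilation(m),
    lambda m: ndi.binary_erosion(m),
    lambda m: ndi.binary_fill_holes(m),
]

def apply_sequence(mask, genome):
    for op_index in genome:
        mask = OPS[op_index](mask)
    return mask

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-12)

def evolve(initial_mask, reference_mask, length=4, pop=30, generations=40, seed=0):
    """Return the operator sequence (list of indices into OPS) with the best Dice."""
    rng = random.Random(seed)
    population = [[rng.randrange(len(OPS)) for _ in range(length)] for _ in range(pop)]
    fitness = lambda g: dice(apply_sequence(initial_mask, g), reference_mask)
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.2:                 # mutation
                child[rng.randrange(length)] = rng.randrange(len(OPS))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```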

  7. An algorithm for automating the registration of USDA segment ground data to LANDSAT MSS data

    NASA Technical Reports Server (NTRS)

    Graham, M. H. (Principal Investigator)

    1981-01-01

    The algorithm is referred to as the Automatic Segment Matching Algorithm (ASMA). The ASMA uses control points or the annotation record of a P-format LANDSAT computer compatible tape as the initial registration to relate latitude and longitude to LANDSAT rows and columns. It searches a given area of LANDSAT data with a 2x2 sliding window and computes gradient values for bands 5 and 7 to match the segment boundaries. The gradient values are held in memory during the shifting (or matching) process. The reconstructed segment array, containing ones (1's) for boundaries and zeros elsewhere, is then compared by computer to the LANDSAT array and the best match is computed. Initial testing of the ASMA indicates that it has good potential for replacing the manual technique.

  8. A martian case study of segmenting images automatically for granulometry and sedimentology, Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Karunatillake, Suniti; McLennan, Scott M.; Herkenhoff, Kenneth E.; Husch, Jonathan M.; Hardgrove, Craig; Skok, J. R.

    2014-02-01

    In planetary exploration, delineating individual grains in images via segmentation is a key path to sedimentological comparisons with the extensive terrestrial literature. Samples that contain a substantial fine grain component, common at Meridiani and Gusev on Mars, would involve prohibitive effort if attempted manually. The unavailability of physical samples also precludes standard terrestrial methods such as sieving. Furthermore, planetary scientists have been thwarted by the dearth of segmentation algorithms customized for planetary applications, including Mars, and often rely on sub-optimal solutions adapted from medical software. We address this with an original algorithm optimized to segment whole images from the Microscopic Imager of the Mars Exploration Rovers. While our code operates with minimal human guidance, its default parameters can be modified easily for different geologic settings and imagers on Earth and other planets, such as the Curiosity Rover's Mars Hand Lens Imager. We assess the algorithm's robustness in a companion work.

  9. Automatic polyp region segmentation for colonoscopy images using watershed algorithm and ellipse segmentation

    NASA Astrophysics Data System (ADS)

    Hwang, Sae; Oh, JungHwan; Tavanapong, Wallapak; Wong, Johnny; de Groen, Piet C.

    2007-03-01

    In the US, colorectal cancer is the second leading cause of all cancer deaths behind lung cancer. Colorectal polyps are the precursor lesions of colorectal cancer. Therefore, early detection of polyps and at the same time removal of these precancerous lesions is one of the most important goals of colonoscopy. To objectively document detection and removal of colorectal polyps for quality purposes, and to facilitate real-time detection of polyps in the future, we have initiated a computer-based research program that analyzes video files created during colonoscopy. For computer-based detection of polyps, texture based techniques have been proposed. A major limitation of the existing texture-based analytical methods is that they depend on a fixed-size analytical window. Such a fixed-sized window may work for still images, but is not efficient for analysis of colonoscopy video files, where a single polyp can have different relative sizes and color features, depending on the viewing position and distance of the camera. In addition, the existing methods do not consider shape features. To overcome these problems, we here propose a novel polyp region segmentation method primarily based on the elliptical shape that nearly all small polyps and many larger polyps possess. Experimental results indicate that our proposed polyp detection method achieves a sensitivity and specificity of 93% and 98%, respectively.
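
    A hedged sketch of the elliptical shape cue described above: fitting an ellipse to candidate boundary points of a polyp region with scikit-image's EllipseModel. How candidate points are extracted and how a fit is accepted are assumptions, not the authors' pipeline:

```python
import numpy as np
from skimage.measure import EllipseModel

def fit_polyp_ellipse(boundary_points):
    """boundary_points: (N, 2) array of (x, y) edge points of a candidate region."""
    model = EllipseModel()
    if not model.estimate(boundary_points.astype(float)):
        return None                                    # degenerate point set
    xc, yc, a, b, theta = model.params
    residuals = model.residuals(boundary_points.astype(float))
    # A small mean residual suggests the region is well described by an ellipse,
    # the shape prior that nearly all small polyps possess.
    return {'center': (xc, yc), 'axes': (a, b), 'theta': theta,
            'mean_residual': residuals.mean()}
```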

  10. On the importance of FIB-SEM specific segmentation algorithms for porous media

    SciTech Connect

    Salzer, Martin; Thiele, Simon; Zengerle, Roland; Schmidt, Volker

    2014-09-15

    A new algorithmic approach to segmentation of highly porous three dimensional image data gained by focused ion beam tomography is described which extends the key principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has been shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis of the corresponding microstructure, a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them, and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that respects the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. - Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly fewer artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.

  11. Topology correction of segmented medical images using a fast marching algorithm.

    PubMed

    Bazin, Pierre-Louis; Pham, Dzung L

    2007-11-01

    We present here a new method for correcting the topology of objects segmented from medical images. Whereas previous techniques alter a surface obtained from a binary segmentation of the object, our technique can be applied directly to the image intensities of a probabilistic or fuzzy segmentation, thereby propagating the topology for all isosurfaces of the object. From an analysis of topological changes and critical points in implicit surfaces, we derive a topology propagation algorithm that enforces any desired topology using a fast marching technique. The method has been applied successfully to the correction of the cortical gray matter/white matter interface in segmented brain images and is publicly released as a software plug-in for the MIPAV package. PMID:17942182

  12. New morphology independent detection and segmentation algorithm for galaxies

    NASA Astrophysics Data System (ADS)

    Akhlaghi, Mohammad; Ichikawa, Takashi

    2015-08-01

    Due to their dynamic history, galaxies can display a very rich and diverse distribution of shapes, with a large number of galaxies being classified as irregular in the local universe. As we look to higher redshifts, the fraction of such galaxies and their prominence in terms of mass apparently increases, with more massive galaxies showing irregular profiles that fade very slowly into the image noise. The accurate study of such objects therefore needs detection and photometry techniques that impose negligible constraints on the shapes and profiles of their targets. We introduce a noise-based, non-parametric technique to detect normal, irregular or clumpy galaxies and their structure in noise. "Noise-based" and "non-parametric" imply that it imposes negligible constraints on the properties of the targets and that it employs no regression analysis or fitting. The technique is based on the fact that an object's signal will contiguously augment the noise inundating it. Detection is performed independently of the sky value. The detections are classified as true or false using the ambient noise as a reference, allowing a purity level of 0.86 as compared to 0.27 for SExtractor when a completeness of 1 is desired for a sample of extremely faint mock galaxy profiles. Defining the accuracy of detection as the difference between the measured sky and the known background of mock images, sky (and thus galaxy photometry) measurements that are an order of magnitude less biased are achieved. A non-parametric approach to defining substructure over a detected region is also introduced. NoiseChisel is our software implementation of this new technique. Contrary to the existing signal-based approach to detection, in its various implementations, signal related parameters such as the image point spread function or known object shapes and models are irrelevant here, which makes this algorithm very useful in astrophysical applications such as detection, photometry or morphological analysis of nebulous

  13. A unifying graph-cut image segmentation framework: algorithms it encompasses and equivalences among them

    NASA Astrophysics Data System (ADS)

    Ciesielski, Krzysztof Chris; Udupa, Jayaram K.; Falcão, A. X.; Miranda, P. A. V.

    2012-02-01

    We present a general graph-cut segmentation framework GGC, in which the delineated objects returned by the algorithms optimize the energy functions associated with the lp norm, 1 <= p <= ∞. Two classes of well known algorithms belong to GGC: the standard graph cut GC (such as the min-cut/max-flow algorithm) and the relative fuzzy connectedness algorithms RFC (including iterative RFC, IRFC). The norm-based description of GGC provides a more elegant and mathematically better recognized framework for our earlier results from [18, 19]. Moreover, it allows precise theoretical comparison of GGC representable algorithms with the algorithms discussed in a recent paper [22] (min-cut/max-flow graph cut, random walker, shortest path/geodesic, Voronoi diagram, power watershed/shortest path forest), which optimize, via lp norms, the intermediate segmentation step, the labeling of scene voxels, but for which the final object need not optimize the used lp energy function. Actually, the comparison of the GGC representable algorithms with those encompassed in the framework described in [22] constitutes the main contribution of this work.
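
    A hedged toy example of the standard min-cut/max-flow member of this family: binary segmentation of a short 1-D intensity profile, with terminal (unary) capacities encoding how well a pixel fits each class and neighbour (pairwise) capacities penalizing label changes. networkx is assumed, and the capacities and smoothness weight are illustrative:

```python
import networkx as nx

intensities = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85]    # toy profile: dark then bright
LAMBDA = 0.5                                       # smoothness weight

G = nx.DiGraph()
for i, v in enumerate(intensities):
    # Terminal links: edge ('s', i) is cut if pixel i is labelled background,
    # edge (i, 't') is cut if pixel i is labelled object (bright).
    G.add_edge('s', i, capacity=v)
    G.add_edge(i, 't', capacity=1.0 - v)
for i in range(len(intensities) - 1):
    # Neighbour links in both directions penalize cutting between neighbours.
    G.add_edge(i, i + 1, capacity=LAMBDA)
    G.add_edge(i + 1, i, capacity=LAMBDA)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, 's', 't')
object_pixels = sorted(p for p in source_side if p != 's')
print(object_pixels)   # -> [3, 4, 5] for this toy profile (the bright segment)
```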

  14. Contour detection and completion for inpainting and segmentation based on topological gradient and fast marching algorithms.

    PubMed

    Auroux, Didier; Cohen, Laurent D; Masmoudi, Mohamed

    2011-01-01

    In this paper we combine the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We then present two numerical applications of this hybrid algorithm: image inpainting and segmentation. PMID:22194734
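
    A hedged sketch of the minimal-path step in this spirit, using scikit-image's Dijkstra-based route_through_array instead of fast marching, with a plain inverted gradient magnitude standing in for the topological gradient: the path of least cumulative cost between two edge points follows the strongest edges and so connects them into a continuous contour.

```python
import numpy as np
from skimage.filters import sobel
from skimage.graph import route_through_array

def connect_edge_points(image, start_rc, end_rc):
    """image: 2-D float array; start_rc/end_rc: (row, col) points to connect."""
    edge_strength = sobel(image)
    cost = 1.0 / (edge_strength + 1e-3)   # cheap to travel along strong edges
    path, total_cost = route_through_array(cost, start_rc, end_rc,
                                           fully_connected=True, geometric=True)
    return np.array(path), total_cost
```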

  15. Algorithm for the identification of malfunctioning sensors in the control systems of segmented mirror telescopes.

    PubMed

    Chanan, Gary; Nelson, Jerry

    2009-11-10

    The active control systems of segmented mirror telescopes are vulnerable to a malfunction of a few (or even one) of their segment edge sensors, the effects of which can propagate through the entire system and seriously compromise the overall telescope image quality. Since there are thousands of such sensors in the extremely large telescopes now under development, it is essential to develop fast and efficient algorithms that can identify bad sensors so that they can be removed from the control loop. Such algorithms are nontrivial; for example, a simple residual-to-the-fit test will often fail to identify a bad sensor. We propose an algorithm that can reliably identify a single bad sensor and we extend it to the more difficult case of multiple bad sensors. Somewhat surprisingly, the identification of a fixed number of bad sensors does not necessarily become more difficult as the telescope becomes larger and the number of sensors in the control system increases. PMID:19904329

  16. Automated segmentation algorithm for detection of changes in vaginal epithelial morphology using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Chitchian, Shahab; Vincent, Kathleen L.; Vargas, Gracie; Motamedi, Massoud

    2012-11-01

    We have explored the use of optical coherence tomography (OCT) as a noninvasive tool for assessing the toxicity of topical microbicides, products used to prevent HIV, by monitoring the integrity of the vaginal epithelium. A novel feature-based segmentation algorithm using a nearest-neighbor classifier was developed to monitor changes in the morphology of vaginal epithelium. The two-step automated algorithm yielded OCT images with a clearly defined epithelial layer, enabling differentiation of normal and damaged tissue. The algorithm was robust in that it was able to discriminate the epithelial layer from underlying stroma as well as residual microbicide product on the surface. This segmentation technique for OCT images has the potential to be readily adaptable to the clinical setting for noninvasively defining the boundaries of the epithelium, enabling quantifiable assessment of microbicide-induced damage in vaginal tissue.

  17. Analyzing the medical image by using clustering algorithms through segmentation process

    NASA Astrophysics Data System (ADS)

    Kumar, Papendra; Kumar, Suresh

    2011-12-01

    Basic aim of our study is to analyze the medical image. In computer vision, segmentation refers to the process of partitioning a digital image into multiple regions. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. There is a lot of scope for the analysis that we have done in our project; our analysis could be used for the purpose of monitoring the medical image. Medical imaging refers to the techniques and processes used to create images of the human body (or parts thereof) for clinical purposes (medical procedures seeking to reveal, diagnose or examine disease) or medical science (including the study of normal anatomy and function). As a discipline and in its widest sense, it is part of biological imaging and incorporates radiology (in the wider sense), radiological sciences, endoscopy, (medical) thermography, medical photography and microscopy (e.g. for human pathological investigations), as well as measurement and recording techniques which are not primarily designed to produce images.

  18. Analyzing the medical image by using clustering algorithms through segmentation process

    NASA Astrophysics Data System (ADS)

    Kumar, Papendra; Kumar, Suresh

    2012-01-01

    Basic aim of our study is to analyze the medical image. In computer vision, segmentation refers to the process of partitioning a digital image into multiple regions. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. There is a lot of scope for the analysis that we have done in our project; our analysis could be used for the purpose of monitoring the medical image. Medical imaging refers to the techniques and processes used to create images of the human body (or parts thereof) for clinical purposes (medical procedures seeking to reveal, diagnose or examine disease) or medical science (including the study of normal anatomy and function). As a discipline and in its widest sense, it is part of biological imaging and incorporates radiology (in the wider sense), radiological sciences, endoscopy, (medical) thermography, medical photography and microscopy (e.g. for human pathological investigations), as well as measurement and recording techniques which are not primarily designed to produce images.

  19. A GPU accelerated moving mesh correspondence algorithm with applications to RV segmentation.

    PubMed

    Punithakumar, Kumaradevan; Noga, Michelle; Boulanger, Pierre

    2015-08-01

    This study proposes a parallel nonrigid registration algorithm to obtain point correspondence between a sequence of images. Several recent studies have shown that computation of point correspondence is an excellent way to delineate organs from a sequence of images, for example, delineation of the cardiac right ventricle (RV) from a series of magnetic resonance (MR) images. However, nonrigid registration algorithms involve optimization of similarity functions and are therefore computationally expensive. We propose Graphics Processing Unit (GPU) computing to accelerate the algorithm. The proposed approach consists of two parallelization components: 1) a parallel Compute Unified Device Architecture (CUDA) version of the nonrigid registration algorithm; and 2) application of an image concatenation approach to further parallelize the algorithm. The proposed approach was evaluated over a data set of 16 subjects and took an average of 4.36 seconds to segment a sequence of 19 MR images, a significant performance improvement over the serial image registration approach. PMID:26737222

  20. Three-dimensional medical image reconstruction based on improved live wire segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Li, Yanfang; Jiang, Zhengang; He, Wei; Zhang, Yongsheng; Yang, Huamin

    2008-03-01

    Three-dimensional image reconstruction by volume rendering has two problems: it is time-consuming and its precision is low. During diagnosis, doctors are interested in particular organ and tissue details, so the two-dimensional images are pre-processed before three-dimensional reconstruction, including disturbance removal and precise segmentation, to obtain the Region Of Interest (ROI) on which the three-dimensional reconstruction is then based; this decreases time and space complexity. To this end, the Live Wire segmentation algorithm for medical images is improved to obtain exact edge coordinates, and the segmented image with interior details is produced by an improved filling algorithm. Segmented images containing only the objects of interest are used as input for volume rendering with a ray-casting algorithm. Because the unneeded organs have been filtered out, the disturbance to the structures of interest is reduced. Moreover, the remaining organs generally occupy a smaller proportion of the images, which reduces the data volume for volume rendering and improves the speed of three-dimensional reconstruction.

  1. Brain tumor segmentation in MR slices using improved GrowCut algorithm

    NASA Astrophysics Data System (ADS)

    Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying

    2015-12-01

    The detection of brain tumors from MR images is very significant for medical diagnosis and treatment. However, existing methods are mostly based on manual or semiautomatic segmentation, which is awkward when dealing with a large number of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of a symmetric brain structure, the method improves the interactive GrowCut algorithm by further using a bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up for the deficiency of the bounding box method. After segmentation, a 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and on actual clinical MR images. The result of the proposed method is compared with the actual position of the simulated 3D tumor qualitatively and quantitatively. In addition, our automatic method produces performance equivalent to manual segmentation and to the interactive GrowCut with manual interference, while providing fully automatic segmentation.
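
    The symmetry hypothesis above can be illustrated with a simple left-right intensity difference that yields a rough bounding box for seeding a GrowCut-style segmentation. This is only a sketch of the cue, not the improved pipeline of the paper; the threshold is an assumption and the slice is assumed to be roughly midline-aligned:

```python
import numpy as np

def symmetry_bounding_box(slice_2d, diff_threshold=0.2):
    """slice_2d: 2-D float array in [0, 1]; returns (r0, r1, c0, c1) or None."""
    asymmetry = np.abs(slice_2d - np.fliplr(slice_2d))   # left-right difference
    candidate = asymmetry > diff_threshold               # strongly asymmetric pixels
    if not candidate.any():
        return None
    rows, cols = np.nonzero(candidate)
    return rows.min(), rows.max(), cols.min(), cols.max()
```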

  2. An Algorithm for the Segmentation of Highly Abnormal Hearts Using a Generic Statistical Shape Model.

    PubMed

    Alba, Xenia; Pereanez, Marco; Hoogendoorn, Corne; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F; Lekadir, Karim

    2016-03-01

    Statistical shape models (SSMs) have been widely employed in cardiac image segmentation. However, in conditions that induce severe shape abnormality and remodeling, such as in the case of pulmonary hypertension (PH) or hypertrophic cardiomyopathy (HCM), a single SSM is rarely capable of capturing the anatomical variability in the extremes of the distribution. This work presents a new algorithm for the segmentation of severely abnormal hearts. The algorithm is highly flexible, as it does not require a priori knowledge of the involved pathology or any specific parameter tuning to be applied to the cardiac image under analysis. The fundamental idea is to approximate the gross effect of the abnormality with a virtual remodeling transformation between the patient-specific geometry and the average shape of the reference model (e.g., average normal morphology). To define this mapping, a set of landmark points are automatically identified during boundary point search, by estimating the reliability of the candidate points. With the obtained transformation, the feature points extracted from the patient image volume are then projected onto the space of the reference SSM, where the model is used to effectively constrain and guide the segmentation process. The extracted shape in the reference space is finally propagated back to the original image of the abnormal heart to obtain the final segmentation. Detailed validation with patients diagnosed with PH and HCM shows the robustness and flexibility of the technique for the segmentation of highly abnormal hearts of different pathologies. PMID:26552082

  3. 3D MRI brain image segmentation based on region restricted EM algorithm

    NASA Astrophysics Data System (ADS)

    Li, Zhong; Fan, Jianping

    2008-03-01

    This paper presents a novel algorithm for 3D human brain tissue segmentation and classification in magnetic resonance images (MRI) based on a region-restricted EM algorithm (RREM). The RREM is a level-set segmentation method in which the evolution of the contours is driven by a force field composed of the probability density functions of Gaussian models. Each tissue is modeled by one or more Gaussian models restricted by a free-shaped contour, so that the Gaussian models adapt to the local intensities. The RREM is guaranteed to converge to a local minimum; the segmentation avoids being trapped in poor local minima through split-and-merge operations. A fuzzy rule-based classifier finally groups the regions belonging to the same tissue and forms the segmented 3D image of white matter (WM) and gray matter (GM), which are of major interest in numerous applications. With an adjusted classifier, the presented method can be extended to segment brain images containing tumors or images in which part of the brain has been removed.
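
    As a hedged illustration of the Gaussian intensity modelling underlying the method above, a plain EM-fitted Gaussian mixture over brain-voxel intensities (scikit-learn) is sketched below; it has neither the region restriction nor the level-set evolution of the RREM itself, and the tissue count is an assumption:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_brain_intensities(volume, brain_mask, n_tissues=3, seed=0):
    """volume: 3-D MR intensities; brain_mask: boolean mask of brain voxels."""
    samples = volume[brain_mask].reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_tissues, random_state=seed).fit(samples)
    labels = np.full(volume.shape, -1, dtype=int)     # -1 outside the brain mask
    labels[brain_mask] = gmm.predict(samples)
    # Components can then be mapped to CSF / GM / WM by sorting their means.
    order = np.argsort(gmm.means_.ravel())
    return labels, order
```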

  4. Feature measures for the segmentation of neuronal membrane using a machine learning algorithm

    NASA Astrophysics Data System (ADS)

    Iftikhar, Saadia; Godil, Afzal

    2013-12-01

    In this paper, we present a Support Vector Machine (SVM) based pixel classifier for a semi-automated segmentation algorithm to detect neuronal membrane structures in stacks of electron microscopy images of brain tissue samples. This algorithm uses high-dimensional feature spaces extracted from center-surrounded patches, together with some distinct edge-sensitive features for each pixel in the image, and a training dataset for the segmentation of neuronal membrane structures and background. Threshold conditions are later applied to remove small regions that fall below a certain threshold criterion, and morphological operations, such as filling of the detected objects, are performed to obtain compact objects. The performance of the segmentation method is calculated on unseen data using three distinct error measures: pixel error, warping error, and Rand error, as well as a pixel-by-pixel accuracy measure against the respective ground truth. The trained SVM classifier achieves the best precision level in these three distinct errors at 0.23, 0.016 and 0.15, respectively, while the best accuracy using the pixel-by-pixel measure reaches 77% on the given dataset. The results presented here are one step further towards exploring possible ways to solve these hard problems, such as segmentation in medical image analysis. In the future, we plan to extend this work to a 3D segmentation approach for 3D datasets, not only to retain the topological structures in the dataset but also to ease further analysis.
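
    A hedged sketch of a patch-based SVM pixel classifier in the spirit described above: each pixel is represented by its centre-surround patch intensities and classified as membrane or background with scikit-learn. The feature design, patch size, and kernel parameters are assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.svm import SVC

def patch_features(image, coords, half=2):
    """Flattened (2*half+1)^2 patches around the given (row, col) coordinates."""
    padded = np.pad(image, half, mode='reflect')
    return np.array([padded[r:r + 2 * half + 1, c:c + 2 * half + 1].ravel()
                     for r, c in coords])

def train_membrane_classifier(image, labelled_coords, labels):
    """labelled_coords: list of (row, col); labels: 1 = membrane, 0 = background."""
    clf = SVC(kernel='rbf', C=1.0, gamma='scale')
    clf.fit(patch_features(image, labelled_coords), labels)
    return clf

# Prediction on new pixels (hypothetical variable `all_coords`):
# membrane_labels = clf.predict(patch_features(image, all_coords))
```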

  5. Numerical arc segmentation algorithm for a radio conference - A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    A detailed description of a Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software package for communication satellite systems planning is presented. This software provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC - 88) on the use of the GEO and the planning of space services utilizing GEO. The features of the NASARC software package are described, and detailed information is given about the function of each of the four NASARC program modules. The results of a sample world scenario are presented and discussed.

  6. An effective method for segmentation of MR brain images using the ant colony optimization algorithm.

    PubMed

    Taherdangkoo, Mohammad; Bagheri, Mohammad Hadi; Yazdi, Mehran; Andriole, Katherine P

    2013-12-01

    Since segmentation of magnetic resonance images is one of the most important initial steps in brain magnetic resonance image processing, success in this step has a great influence on the quality of the outcomes of subsequent steps. In the past few decades, numerous methods have been introduced for the classification of such images, but typically they perform well only on a specific subset of images, do not generalize well to other image sets, and have poor computational performance. In this study, we provide a method for segmentation of magnetic resonance images of the brain that, despite its simplicity, achieves high accuracy. We compare the performance of our proposed algorithm with similar evolutionary algorithms on a pixel-by-pixel basis. Our algorithm is tested across varying sets of magnetic resonance images and demonstrates high speed and accuracy. It should be noted that in the initial steps the algorithm is computationally intensive, requiring a large number of calculations; however, in subsequent steps of the search process the number of calculations is reduced as the segmentation focuses only on the target area. PMID:23563793

  7. Novel real-time volumetric tool segmentation algorithm for intraoperative microscope integrated OCT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Viehland, Christian; Keller, Brenton; Carrasco-Zevallos, Oscar; Cunefare, David; Shen, Liangbo; Toth, Cynthia; Farsiu, Sina; Izatt, Joseph A.

    2016-03-01

    Optical coherence tomography (OCT) allows for micron-scale imaging of the human retina and cornea. Current-generation research and commercial intrasurgical OCT prototypes are limited to live B-scan imaging. Our group has developed an intraoperative microscope integrated OCT system capable of live 4D imaging. With a heads-up display (HUD), 4D imaging allows for dynamic intrasurgical visualization of tool-tissue interaction and surgical maneuvers. Currently our system relies on operator-based manual tracking to correct for patient motion and motion caused by the surgeon, to track the surgical tool, and to select the correct B-scan to display on the HUD. Even when tracking only bulk motion, the operator sometimes lags behind and the surgical region of interest can drift out of the OCT field of view. To facilitate imaging, we report on the development of a fast volume-based tool segmentation algorithm. The algorithm is based on a previously reported volume rendering algorithm and can identify both the tool and the retinal surface. The algorithm requires 45 ms per volume for segmentation and can be used to actively place the B-scan across the tool-tissue interface. Alternatively, real-time tool segmentation can be used to allow the surgeon to use the surgical tool as an interactive B-scan pointer.

  8. Hepatic Arterial Configuration in Relation to the Segmental Anatomy of the Liver; Observations on MDCT and DSA Relevant to Radioembolization Treatment

    SciTech Connect

    Hoven, Andor F. van den; Leeuwen, Maarten S. van; Lam, Marnix G. E. H.; Bosch, Maurice A. A. J. van den

    2015-02-15

    Purpose: Current anatomical classifications do not include all variants relevant for radioembolization (RE). The purpose of this study was to assess the individual hepatic arterial configuration and segmental vascularization pattern and to develop an individualized RE treatment strategy based on an extended classification. Methods: The hepatic vascular anatomy was assessed on MDCT and DSA in patients who received a workup for RE between February 2009 and November 2012. Reconstructed MDCT studies were assessed to determine the hepatic arterial configuration (origin of every hepatic arterial branch, branching pattern and anatomical course) and the hepatic segmental vascularization territory of all branches. Aberrant hepatic arteries were defined as hepatic arterial branches that did not originate from the celiac axis/CHA/PHA. Early branching patterns were defined as hepatic arterial branches originating from the celiac axis/CHA. Results: The hepatic arterial configuration and segmental vascularization pattern could be assessed in 110 of 133 patients. In 59 patients (54 %), no aberrant hepatic arteries or early branching was observed. Fourteen patients without aberrant hepatic arteries (13 %) had an early branching pattern. In the 37 patients (34 %) with aberrant hepatic arteries, five also had an early branching pattern. Sixteen different hepatic arterial segmental vascularization patterns were identified and described, differing by the presence of aberrant hepatic arteries, their respective vascular territory, and origin of the artery vascularizing segment four. Conclusions: The hepatic arterial configuration and segmental vascularization pattern show marked individual variability beyond well-known classifications of anatomical variants. We developed an individualized RE treatment strategy based on an extended anatomical classification.

  9. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    PubMed

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations. PMID:26405895
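
    The quantity maximized in such multilevel thresholding searches is Otsu's between-class variance. The sketch below is hedged: it shows only the objective evaluated from a grayscale histogram for a candidate threshold set, not the flower pollination search loop itself, and the bin count is an assumption.

        # Between-class variance (Otsu's objective) for a candidate set of thresholds;
        # this is the fitness that a population-based search would evaluate.
        import numpy as np

        def otsu_objective(hist, thresholds):
            """hist: 256-bin grayscale histogram; thresholds: sorted ints in (0, 255)."""
            p = hist.astype(float) / hist.sum()          # probability of each grey level
            levels = np.arange(len(p))
            mu_total = (p * levels).sum()
            bounds = [0] + list(thresholds) + [len(p)]
            variance = 0.0
            for lo, hi in zip(bounds[:-1], bounds[1:]):
                w = p[lo:hi].sum()                        # class probability
                if w > 0:
                    mu = (p[lo:hi] * levels[lo:hi]).sum() / w
                    variance += w * (mu - mu_total) ** 2  # between-class contribution
            return variance

        # Example: score a candidate threshold pair for a 3-class segmentation
        # hist = np.bincount(image.ravel(), minlength=256)
        # print(otsu_objective(hist, [85, 170]))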

  10. The cascaded moving k-means and fuzzy c-means clustering algorithms for unsupervised segmentation of malaria images

    NASA Astrophysics Data System (ADS)

    Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Halim, Nurul Hazwani Abd; Mohamed, Zeehaida

    2015-05-01

    Malaria is a life-threatening parasitic infectious disease that accounts for nearly one million deaths each year. Because malaria requires prompt and accurate diagnosis, the current study proposes an unsupervised pixel segmentation based on a clustering algorithm to obtain fully segmented red blood cells (RBCs) infected with malaria parasites from thin blood smear images of the P. vivax species. To obtain the segmented infected cells, the malaria images are first enhanced using a modified global contrast stretching technique. Then, an unsupervised segmentation technique based on clustering is applied to the intensity component of the malaria image in order to separate the infected cells from the blood-cell background. In this study, cascaded moving k-means (MKM) and fuzzy c-means (FCM) clustering algorithms are proposed for malaria slide image segmentation. After that, a median filter is applied to smooth the image and to remove small unwanted background regions. Finally, a seeded region growing area extraction algorithm is applied to remove large unwanted regions that remain in the image because their size prevents removal by the median filter. The effectiveness of the proposed cascaded MKM and FCM clustering algorithm is analyzed qualitatively and quantitatively by comparing it with the MKM and FCM clustering algorithms alone. Overall, the results indicate that segmentation using the proposed cascaded clustering algorithm produces the best segmentation performance, achieving acceptable sensitivity as well as high specificity and accuracy compared with the segmentation results provided by the MKM and FCM algorithms.
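
    A much simplified sketch of the cascade idea, a crisp clustering used to initialize a fuzzy refinement on the intensity component, is given below; plain k-means stands in for the moving k-means variant, and the cluster count, fuzzifier and iteration count are assumptions.

        # Simplified cascade: k-means provides initial centres, then standard fuzzy
        # c-means updates refine memberships on the intensity values.
        import numpy as np
        from sklearn.cluster import KMeans

        def cascaded_kmeans_fcm(intensity, n_clusters=3, m=2.0, n_iter=50):
            x = intensity.reshape(-1, 1).astype(float)
            centres = KMeans(n_clusters=n_clusters, n_init=10).fit(x).cluster_centers_
            for _ in range(n_iter):                        # standard FCM updates
                d = np.abs(x - centres.T) + 1e-12          # (N, c) distances
                u = 1.0 / (d ** (2.0 / (m - 1.0)))
                u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships
                um = u ** m
                centres = (um.T @ x) / um.sum(axis=0)[:, None]
            labels = u.argmax(axis=1).reshape(intensity.shape)
            return labels, centres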

  11. Application of Micro-segmentation Algorithms to the Healthcare Market: A Case Study

    SciTech Connect

    Sukumar, Sreenivas R; Aline, Frank

    2013-01-01

    We draw inspiration from the recent success of loyalty programs and targeted personalized marketing campaigns of retail companies such as Kroger, Netflix, etc., to understand beneficiary behaviors in the healthcare system. We posit that the financial success these companies have achieved by better understanding and predicting customer behaviors can be emulated and translated to healthcare operations. Towards that goal, we survey current practices in market micro-segmentation research and analyze health insurance claims data using those algorithms. We present results and insights from micro-segmentation of the beneficiaries using different techniques and discuss how their interpretation can assist with matching cost-effective insurance payment models to the beneficiary micro-segments.

  12. Automatic brain tumor segmentation with a fast Mumford-Shah algorithm

    NASA Astrophysics Data System (ADS)

    Müller, Sabine; Weickert, Joachim; Graf, Norbert

    2016-03-01

    We propose a fully-automatic method for brain tumor segmentation that does not require any training phase. Our approach is based on a sequence of segmentations using the Mumford-Shah cartoon model with varying parameters. In order to come up with a very fast implementation, we extend the recent primal-dual algorithm of Strekalovskiy et al. (2014) from the 2D to the medically relevant 3D setting. Moreover, we suggest a new confidence refinement and show that it can increase the precision of our segmentations substantially. Our method is evaluated on 188 data sets with high-grade gliomas and 25 with low-grade gliomas from the BraTS14 database. Within a computation time of only three minutes, we achieve Dice scores that are comparable to state-of-the-art methods.
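
    As a hedged illustration, the two-phase piecewise-constant special case of the Mumford-Shah cartoon model corresponds to the Chan-Vese functional, which scikit-image implements directly; the snippet below applies it to a single 2D slice. It is not the authors' fast 3D primal-dual solver with confidence refinement, and the regularization weight mu is an assumption.

        # Illustrative only: two-phase piecewise-constant Mumford-Shah (Chan-Vese).
        import numpy as np
        from skimage.segmentation import chan_vese

        def two_phase_cartoon(slice_2d, mu=0.25):
            """slice_2d: a single 2D MR slice; returns a boolean segmentation mask."""
            img = slice_2d.astype(float)
            img = (img - img.min()) / (img.max() - img.min() + 1e-9)  # scale to [0, 1]
            return chan_vese(img, mu=mu)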

  13. Multispectral image segmentation using parallel mean shift algorithm and CUDA technology

    NASA Astrophysics Data System (ADS)

    Zghidi, Hafedh; Walczak, Maksym; Świtoński, Adam

    2016-06-01

    We present a parallel mean shift algorithm running on CUDA and its possible application to the segmentation of multispectral images. The aim of this paper is to present a method for analyzing highly noised multispectral images of various objects so that important features are enhanced and easier to identify. The algorithm finds application in the analysis of multispectral images of the eye, in which certain features visible only at specific wavelengths are made clearly visible despite a high level of noise, and for which processing times would otherwise be very long.
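
    A CPU-only sketch of the underlying idea, mean shift clustering on joint spatial and spectral pixel features, is shown below using scikit-learn; the CUDA parallelization is not reproduced, and the spatial weighting and bandwidth quantile are assumptions.

        # CPU sketch of mean-shift segmentation on joint spatial + spectral features.
        import numpy as np
        from sklearn.cluster import MeanShift, estimate_bandwidth

        def mean_shift_segment(cube, spatial_weight=0.5, quantile=0.1):
            """cube: (H, W, B) multispectral image; returns an (H, W) label map."""
            h, w, b = cube.shape
            yy, xx = np.mgrid[0:h, 0:w]
            feats = np.column_stack([
                spatial_weight * yy.ravel(),
                spatial_weight * xx.ravel(),
                cube.reshape(-1, b),
            ])
            bw = estimate_bandwidth(feats, quantile=quantile, n_samples=2000)
            labels = MeanShift(bandwidth=bw, bin_seeding=True).fit_predict(feats)
            return labels.reshape(h, w)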

  14. A Segmentation Algorithm for X-ray 3D Angiography and Vessel Catheterization

    SciTech Connect

    Franchi, Danilo; Rosa, Luigi; Placidi, Giuseppe

    2008-11-06

    Vessel catheterization is a clinical procedure usually performed by a specialist under X-ray fluoroscopic guidance with contrast media. In the present paper, we present a simple and efficient algorithm for vessel segmentation that separates and extracts the vessels from the background (noise and signal coming from other organs). This would reduce the number of projections (X-ray scans) required to reconstruct a complete and accurate 3D vascular model, and hence the radiological risk, in particular for the patient. In what follows, the algorithm is described and some preliminary experimental results are reported, illustrating the behaviour of the proposed method.

  15. Tissue Probability Map Constrained 4-D Clustering Algorithm for Increased Accuracy and Robustness in Serial MR Brain Image Segmentation

    PubMed Central

    Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen

    2010-01-01

    The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results might be biased by the tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment the white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., a series of 3-D MR brain images of the same subject acquired at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both accuracy and robustness for serial image computing and at the same time produced longitudinally consistent segmentations and stable measures. In the algorithm, the tissue probability maps consist of both population-based and subject-specific segmentation priors. Experimental studies using both simulated longitudinal MR brain data and Alzheimer's Disease Neuroimaging Initiative (ADNI) data confirmed that using both priors yields more accurate and robust segmentation results. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging with subtle morphological changes in neurological disorders. PMID:26566399

  16. US-Cut: interactive algorithm for rapid detection and segmentation of liver tumors in ultrasound acquisitions

    NASA Astrophysics Data System (ADS)

    Egger, Jan; Voglreiter, Philip; Dokter, Mark; Hofmann, Michael; Chen, Xiaojun; Zoller, Wolfram G.; Schmalstieg, Dieter; Hann, Alexander

    2016-04-01

    Ultrasound (US) is the most commonly used liver imaging modality worldwide. It plays an important role in the follow-up of cancer patients with liver metastases. We present an interactive segmentation approach for liver tumors in US acquisitions. Due to the low image quality and the low contrast between the tumors and the surrounding tissue in US images, the segmentation is very challenging. Thus, clinical practice still relies on manual measurement and outlining of the tumors in the US images. We target this problem by applying an interactive segmentation algorithm to the US data, allowing the user to get real-time feedback on the segmentation results. The algorithm has been developed and tested hand-in-hand by physicians and computer scientists to make sure that future practical usage in a clinical setting is feasible. To cover typical acquisitions from the clinical routine, the approach has been evaluated with dozens of datasets in which the tumors are hyperechoic (brighter), hypoechoic (darker) or isoechoic (similar) in comparison to the surrounding liver tissue. Due to the interactive real-time behavior of the approach, it was possible even in difficult cases to find satisfying segmentations of the tumors within seconds and without parameter settings, and the average tumor deviation was only 1.4 mm compared with manual measurements. The long-term goal, however, is to ease the volumetric acquisition of liver tumors in order to evaluate treatment response. An additional aim is the registration of intraoperative US images, via the interactive segmentations, to the patient's pre-interventional CT acquisitions.

  17. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

    An improved-threshold, shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and reduce pseudo-Gibbs artificial fluctuations in the signal. The algorithm was applied to a segmented gamma scanning system for large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of the gamma-ray spectra measured by the segmented gamma scanning system was evaluated with the improved, shift-invariant and traditional wavelet transform algorithms. The improved wavelet transform method yielded significantly better figure of merit, root mean square error, peak area, and sample attenuation correction in the segmented gamma scanning assays. Spectrum analysis also showed that the gamma energy spectrum can be viewed as the superposition of a low-frequency signal and high-frequency noise. Moreover, the smoothed spectrum is suitable for straightforward automated quantitative analysis.
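
    The translation-invariant (cycle-spinning) thresholding idea can be sketched in a few lines with PyWavelets, as below; a standard soft universal threshold stands in for the paper's improved threshold function, and the wavelet, decomposition level and number of shifts are assumptions.

        # Translation-invariant wavelet de-noising of a 1-D spectrum by cycle spinning:
        # shift, threshold in the wavelet domain, unshift, and average.
        import numpy as np
        import pywt

        def ti_wavelet_denoise(spectrum, wavelet='sym8', level=5, n_shifts=16):
            spectrum = np.asarray(spectrum, dtype=float)
            # rough MAD-based noise estimate from first differences
            sigma = np.median(np.abs(np.diff(spectrum))) / (0.6745 * np.sqrt(2))
            thr = sigma * np.sqrt(2 * np.log(spectrum.size))       # universal threshold
            out = np.zeros_like(spectrum)
            for s in range(n_shifts):
                shifted = np.roll(spectrum, s)
                coeffs = pywt.wavedec(shifted, wavelet, level=level)
                coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft')
                                        for c in coeffs[1:]]
                rec = pywt.waverec(coeffs, wavelet)[:spectrum.size]
                out += np.roll(rec, -s)
            return out / n_shifts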

  18. A pixel-connecting algorithm for enhancement and segmentation of computed tomography scans

    SciTech Connect

    Yanof, J.H.

    1990-01-01

    The objective of the study was to enhance and segment X-ray computerized tomography (CT) scans. To deal with the noise and spatial complexity of these images, a relaxation algorithm was developed to link pixels together into homogeneous regions. Each pixel is assigned a set of weighted links to its nearest neighbors. The links are initially isotropic and are arranged into stochastic link matrices. By computing powers of the link matrix, an object-dependent weighting mask for each pixel over an expanded neighborhood is found. The masks are used to compute a similarity measure between pixels in order to adjust the inter-pixel links. The edges, which segment the image, are identified by the below-threshold links, and the displayed mask-weighted averages result in an enhanced image. The algorithm appears robust with respect to the five images tested: the images based on the weighted averages have a smooth appearance with sharpened edges. The algorithm has successfully segmented primary liver tumors of varying sizes and shapes. The links which drop below threshold highlight anatomical details of the scans which are difficult to visualize with the unaided eye.

  19. Standardized Evaluation System for Left Ventricular Segmentation Algorithms in 3D Echocardiography.

    PubMed

    Bernard, Olivier; Bosch, Johan G; Heyde, Brecht; Alessandrini, Martino; Barbosa, Daniel; Camarasu-Pop, Sorina; Cervenansky, Frederic; Valette, Sebastien; Mirea, Oana; Bernier, Michel; Jodoin, Pierre-Marc; Domingos, Jaime Santo; Stebbing, Richard V; Keraudren, Kevin; Oktay, Ozan; Caballero, Jose; Shi, Wei; Rueckert, Daniel; Milletari, Fausto; Ahmadi, Seyed-Ahmad; Smistad, Erik; Lindseth, Frank; van Stralen, Maartje; Wang, Chen; Smedby, Orjan; Donal, Erwan; Monaghan, Mark; Papachristidis, Alex; Geleijnse, Marcel L; Galli, Elena; D'hooge, Jan

    2016-04-01

    Real-time 3D echocardiography (RT3DE) has been proven to be an accurate tool for left ventricular (LV) volume assessment. However, identification of the LV endocardium remains a challenging task, mainly because of the low tissue/blood contrast of the images combined with typical artifacts. Several semi- and fully automatic algorithms have been proposed for segmenting the endocardium in RT3DE data in order to extract relevant clinical indices, but a systematic and fair comparison between such methods has so far been impossible due to the lack of a publicly available common database. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of the algorithms developed to segment the LV border in RT3DE. A database consisting of 45 multivendor cardiac ultrasound recordings acquired at different centers, with corresponding reference measurements from three experts, is made available. The algorithms from nine research groups were quantitatively evaluated and compared using the proposed online platform. The results showed that the best methods produce promising results with respect to the experts' measurements for the extraction of clinical indices, and that they offer good segmentation precision in terms of mean distance error in the context of the experts' variability range. The platform remains open for new submissions. PMID:26625409

  20. Automatic segmentation of the liver using multi-planar anatomy and deformable surface model in abdominal contrast-enhanced CT images

    NASA Astrophysics Data System (ADS)

    Jang, Yujin; Hong, Helen; Chung, Jin Wook; Yoon, Young Ho

    2012-02-01

    We propose an effective technique for the extraction of the liver boundary based on multi-planar anatomy and a deformable surface model in abdominal contrast-enhanced CT images. Our method is composed of four main steps. First, to extract an optimal volume circumscribing the liver, the lower and side boundaries are defined from the positional information of the pelvis and ribs, and an upper boundary is defined by separating the lungs and heart from the CT images. Second, to extract an initial liver volume, the optimal liver volume is smoothed by anisotropic diffusion filtering and segmented using an adaptively selected threshold value. Third, to remove neighboring organs from the initial liver volume, morphological opening and connected component labeling are applied to multiple planes. Finally, to refine the liver boundaries, a deformable surface model is applied to the posterior liver surface and the left lobe missed in the previous step. A probability summation map is then generated by calculating regional information of the segmented liver in the coronal plane, which is used to restore inaccurate liver boundaries. Experimental results show that our segmentation method can accurately extract liver boundaries without leakage into neighboring organs in spite of various liver shapes and ambiguous boundaries.

  1. An image segmentation based on a genetic algorithm for determining soil coverage by crop residues.

    PubMed

    Ribeiro, Angela; Ranz, Juan; Burgos-Artizzu, Xavier P; Pajares, Gonzalo; del Arco, Maria J Sanchez; Navarrete, Luis

    2011-01-01

    Determination of the soil coverage by crop residues after ploughing is a fundamental element of Conservation Agriculture. This paper presents the application of genetic algorithms employed during the fine tuning of the segmentation process of a digital image with the aim of automatically quantifying the residue coverage. In other words, the objective is to achieve a segmentation that would permit the discrimination of the texture of the residue so that the output of the segmentation process is a binary image in which residue zones are isolated from the rest. The RGB images used come from a sample of images in which sections of terrain were photographed with a conventional camera positioned in zenith orientation atop a tripod. The images were taken outdoors under uncontrolled lighting conditions. Up to 92% similarity was achieved between the images obtained by the segmentation process proposed in this paper and the templates made by an elaborate manual tracing process. In addition to the proposed segmentation procedure and the fine tuning procedure that was developed, a global quantification of the soil coverage by residues for the sampled area was achieved that differed by only 0.85% from the quantification obtained using template images. Moreover, the proposed method does not depend on the type of residue present in the image. The study was conducted at the experimental farm "El Encín" in Alcalá de Henares (Madrid, Spain). PMID:22163966
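
    A minimal sketch of the genetic-algorithm tuning loop is given below; for illustration the search space is reduced to a single grey-level threshold, and the GA settings (population size, truncation selection, blend crossover, Gaussian mutation) are assumptions, whereas the paper tunes a richer set of segmentation parameters against the hand-traced templates.

        # Minimal GA sketch for tuning one segmentation threshold against a template.
        import numpy as np

        def similarity(binary, template):
            return np.mean(binary == template)                 # pixel-wise agreement

        def ga_tune_threshold(gray, template, pop=20, gens=40, mut=0.1, seed=None):
            rng = np.random.default_rng(seed)
            population = rng.uniform(gray.min(), gray.max(), size=pop)
            for _ in range(gens):
                fitness = np.array([similarity(gray > t, template) for t in population])
                order = np.argsort(fitness)[::-1]
                parents = population[order[:pop // 2]]          # truncation selection
                children = (parents + np.roll(parents, 1)) / 2.0           # crossover
                children += rng.normal(0, mut * gray.std(), size=children.size)  # mutation
                population = np.concatenate([parents, children])
            fitness = np.array([similarity(gray > t, template) for t in population])
            return population[np.argmax(fitness)]               # best threshold found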

  3. A new segmentation algorithm for lunar surface terrain based on CCD images

    NASA Astrophysics Data System (ADS)

    Jiang, Hong-Kun; Tian, Xiao-Lin; Xu, Ao-Ao

    2015-09-01

    Terrain classification is one of the critical steps used in lunar geomorphologic analysis and landing site selection. Most published works have focused on a Digital Elevation Model (DEM) to distinguish different regions of lunar terrain. This paper presents an algorithm that can be applied to lunar CCD images by blocking and clustering according to image features and that can accurately distinguish between lunar highland and lunar mare. Compared with the traditional algorithm, the new algorithm improves classification accuracy. It incorporates two new features and one Tamura texture feature. The new features, an enhanced image histogram and a model of light-reflection properties, represent the geological characteristics derived from CCD gray-level images. These features are used to characterize texture, and image clustering and segmentation are then performed with a weighted Euclidean distance to distinguish lunar mare from lunar highlands. The new algorithm has been tested on Chang'e-1 CCD data, and the testing result has been compared with geological data published by the U.S. Geological Survey. The result shows that the algorithm can effectively distinguish the lunar mare from highlands in CCD images. The overall accuracy of the proposed algorithm is satisfactory, and the Kappa coefficient is 0.802, which is higher than the result of combining the DEM with CCD images.

  4. Fully Automated Complementary DNA Microarray Segmentation using a Novel Fuzzy-based Algorithm

    PubMed Central

    Saberkari, Hamidreza; Bahrami, Sheyda; Shamsi, Mousa; Amoshahy, Mohammad Javad; Ghavifekr, Habib Badri; Sedaaghi, Mohammad Hossein

    2015-01-01

    DNA microarrays are a powerful approach to studying simultaneously the expression of thousands of genes in a single experiment. The average value of the fluorescent intensity can be calculated in a microarray experiment, and the calculated intensity values closely reflect the expression level of a particular gene. However, determining the appropriate position of every spot in microarray images is a main challenge, and solving it leads to the accurate classification of normal and abnormal (cancer) cells. In this paper, a preprocessing step is first performed to eliminate the noise and artifacts present in microarray cells using the nonlinear anisotropic diffusion filtering method. Then, the coordinate center of each spot is located utilizing mathematical morphology operations. Finally, the position of each spot is exactly determined by applying a novel hybrid model based on principal component analysis and the spatial fuzzy c-means (SFCM) clustering algorithm. Using a Gaussian kernel in the SFCM algorithm improves the quality of complementary DNA microarray segmentation. The performance of the proposed algorithm has been evaluated on real microarray images available in the Stanford Microarray Database. Results illustrate that the accuracy of microarray cell segmentation with the proposed algorithm reaches 100% and 98% for noiseless and noisy cells, respectively. PMID:26284175

  5. A contiguity-enhanced k-means clustering algorithm for unsupervised multispectral image segmentation

    SciTech Connect

    Theiler, J.; Gisler, G.

    1997-07-01

    The recent and continuing construction of multi- and hyperspectral imagers will provide detailed data cubes with information in both the spatial and spectral domains. These data show great promise for remote sensing applications ranging from environmental and agricultural to national security interests. The reduction of this voluminous data to useful intermediate forms is necessary both for downlinking all those bits and for interpreting them. Smart onboard hardware is required, as well as sophisticated earth-bound processing. A segmented image (in which the multispectral data in each pixel are classified into one of a small number of categories) is one kind of intermediate form which provides some measure of data compression. Traditional image segmentation algorithms treat pixels independently and cluster the pixels according only to their spectral information. This neglects the implicit spatial information that is available in the image. We suggest a simple approach: a variant of the standard k-means algorithm which uses both spatial and spectral properties of the image. The segmented image has the property that pixels which are spatially contiguous are more likely to be in the same class than are random pairs of pixels. This property naturally comes at some cost in terms of the compactness of the clusters in the spectral domain, but we have found that the spatial contiguity and spectral compactness properties are nearly orthogonal, which means that we can make considerable improvements in the one with minimal loss in the other.
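
    In its simplest form, this spatial-spectral trade-off can be obtained by appending weighted pixel coordinates to each spectral vector before clustering, as sketched below; the weighting factor lam and the per-band normalization are assumptions, not the authors' exact variant.

        # Contiguity-enhanced clustering sketch: k-means on joint spectral + weighted
        # spatial features, so contiguity trades off against spectral compactness.
        import numpy as np
        from sklearn.cluster import KMeans

        def spatial_kmeans(cube, k=6, lam=0.05):
            """cube: (H, W, B) multispectral image; lam controls spatial influence."""
            h, w, b = cube.shape
            yy, xx = np.mgrid[0:h, 0:w]
            spectral = cube.reshape(-1, b).astype(float)
            spectral /= spectral.std(axis=0) + 1e-9        # normalise each band
            feats = np.hstack([spectral,
                               lam * yy.reshape(-1, 1),
                               lam * xx.reshape(-1, 1)])
            labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
            return labels.reshape(h, w)

        # lam = 0 reproduces purely spectral k-means; larger lam yields more spatially
        # contiguous segments at some cost in spectral compactness.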

  6. Automatic Segmentation of Phalanx and Epiphyseal/Metaphyseal Region by Gamma Parameter Enhancement Algorithm

    NASA Astrophysics Data System (ADS)

    Hsieh, C. W.; Chen, C. Y.; Jong, T. L.; Liu, T. C.; Chiu, C. H.

    2012-01-01

    The performance of bone age assessment is highly correlated with the extraction of bony tissue from soft tissue, and the key problem is how to successfully separate the epiphyseal/metaphyseal regions of interest (EMROIs) from the background and soft tissue. In our experiment, a series of image preprocessing procedures is used to exclude the background and locate the EMROIs in left-hand radiographs. Subsequently, automatic gamma parameter enhancement is applied to test two segmentation methods (an adaptive two-means clustering algorithm and a gradient vector flow snake) in children of different ages (2 to 16 years; 80 girls and boys). Four error measurements, misclassification error, relative foreground area error, modified Hausdorff distance, and edge mismatch, are included to evaluate the segmentation performance. The results show that the two segmentation algorithms correspond to different ranges of optimal gamma parameters. Furthermore, the margins of the EMROIs can be obtained more precisely by developing an automatic bone age assessment method with gamma parameter enhancement.

  7. Thoracic cavity segmentation algorithm using multiorgan extraction and surface fitting in volumetric CT

    SciTech Connect

    Bae, JangPyo; Kim, Namkug Lee, Sang Min; Seo, Joon Beom; Kim, Hee Chan

    2014-04-15

    Purpose: To develop and validate a semiautomatic segmentation method for thoracic cavity volumetry and mediastinum fat quantification of patients with chronic obstructive pulmonary disease. Methods: The thoracic cavity region was separated by segmenting multiorgans, namely, the rib, lung, heart, and diaphragm. To encompass various lung disease-induced variations, the inner thoracic wall and diaphragm were modeled by using a three-dimensional surface-fitting method. To improve the accuracy of the diaphragm surface model, the heart and its surrounding tissue were segmented by a two-stage level set method using a shape prior. To assess the accuracy of the proposed algorithm, the algorithm results of 50 patients were compared to the manual segmentation results of two experts with more than 5 years of experience (these manual results were confirmed by an expert thoracic radiologist). The proposed method was also compared to three state-of-the-art segmentation methods. The metrics used to evaluate segmentation accuracy were volumetric overlap ratio (VOR), false positive ratio on VOR (FPRV), false negative ratio on VOR (FNRV), average symmetric absolute surface distance (ASASD), average symmetric squared surface distance (ASSSD), and maximum symmetric surface distance (MSSD). Results: In terms of thoracic cavity volumetry, the mean ± SD VOR, FPRV, and FNRV of the proposed method were (98.17 ± 0.84)%, (0.49 ± 0.23)%, and (1.34 ± 0.83)%, respectively. The ASASD, ASSSD, and MSSD for the thoracic wall were 0.28 ± 0.12, 1.28 ± 0.53, and 23.91 ± 7.64 mm, respectively. The ASASD, ASSSD, and MSSD for the diaphragm surface were 1.73 ± 0.91, 3.92 ± 1.68, and 27.80 ± 10.63 mm, respectively. The proposed method performed significantly better than the other three methods in terms of VOR, ASASD, and ASSSD. Conclusions: The proposed semiautomatic thoracic cavity segmentation method, which extracts multiple organs (namely, the rib, thoracic wall, diaphragm, and heart
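
    The surface-fitting idea can be illustrated with a least-squares fit of a smooth polynomial surface z = f(x, y) to candidate boundary voxels, as in the hedged sketch below; the quadratic basis is an assumption standing in for the paper's three-dimensional surface model of the inner thoracic wall and diaphragm.

        # Hedged sketch: least-squares quadratic surface fit to candidate surface voxels.
        import numpy as np

        def fit_quadratic_surface(points):
            """points: (N, 3) array of x, y, z coordinates of candidate surface voxels."""
            x, y, z = points[:, 0], points[:, 1], points[:, 2]
            A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
            coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
            return coeffs

        def evaluate_surface(coeffs, x, y):
            """Evaluate the fitted surface z = f(x, y) at arbitrary positions."""
            return (coeffs[0] + coeffs[1] * x + coeffs[2] * y +
                    coeffs[3] * x * y + coeffs[4] * x ** 2 + coeffs[5] * y ** 2)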

  8. Numerical arc segmentation algorithm for a radio conference: A software tool for communication satellite systems planning

    NASA Technical Reports Server (NTRS)

    Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.

    1988-01-01

    The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.

  9. Digital Terrain from a Two-Step Segmentation and Outlier-Based Algorithm

    NASA Astrophysics Data System (ADS)

    Hingee, Kassel; Caccetta, Peter; Caccetta, Louis; Wu, Xiaoliang; Devereaux, Drew

    2016-06-01

    We present a novel ground filter for remotely sensed height data. Our filter has two phases: the first phase segments the DSM with a slope threshold and uses gradient direction to identify candidate ground segments; the second phase fits surfaces to the candidate ground points and removes outliers. Digital terrain is obtained by a surface fit to the final set of ground points. We tested the new algorithm on digital surface models (DSMs) for a 9600 km2 region around Perth, Australia. This region contains a large mix of land uses (urban, grassland, native forest and plantation forest) and includes both a sandy coastal plain and a hillier region (elevations up to 0.5 km). The DSMs are captured annually at 0.2 m resolution using aerial stereo photography, resulting in 1.2 TB of input data per annum. Overall accuracy of the filter was estimated to be 89.6%, and on a small semi-rural subset our algorithm was found to have 40% fewer errors compared to Inpho's Match-T algorithm.

  10. SVM algorithm based on wavelet kernel function for medical image segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Tian, Jinwen; Liu, Jian; Wei, Fang

    2009-10-01

    With growing demand for 3D reconstruction, quantitative analysis and visualization, more precise segmentation of medical images is required, especially of MR head images. Segmentation of MRI is complex and difficult because of the indistinct boundaries between brain tissues, which overlap and penetrate each other, and because of the intrinsic uncertainty of MR images induced by magnetic field heterogeneity, partial volume effects and noise. After studying the kernel function conditions for support vectors, we constructed a wavelet SVM algorithm based on a wavelet kernel function. Its convergence, generality and generalization ability are analyzed. Comparative experiments were made using different numbers of training samples and different scans. The wavelet SVM can be extended easily, and the experimental results show that the SVM classifier offers lower computational time and better classification precision, with good function approximation ability.
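
    scikit-learn's SVC accepts a callable kernel, so a wavelet kernel can be plugged in directly, as in the sketch below; the Morlet-style mother wavelet h(t) = cos(1.75 t) exp(-t^2/2) and the dilation parameter a follow the common wavelet-SVM formulation and are assumptions rather than the authors' exact construction.

        # Sketch of an SVM with a wavelet kernel via scikit-learn's callable-kernel API.
        import numpy as np
        from sklearn.svm import SVC

        def wavelet_kernel(a=1.0):
            def kernel(X, Y):
                # K(x, y) = prod_i h((x_i - y_i) / a), h(t) = cos(1.75 t) exp(-t^2 / 2)
                diff = (X[:, None, :] - Y[None, :, :]) / a
                h = np.cos(1.75 * diff) * np.exp(-0.5 * diff ** 2)
                return h.prod(axis=2)                 # Gram matrix (n_X, n_Y)
            return kernel

        # X_train: per-voxel feature vectors, y_train: tissue labels (e.g. WM/GM/CSF)
        # clf = SVC(kernel=wavelet_kernel(a=2.0), C=1.0)
        # clf.fit(X_train, y_train)
        # labels = clf.predict(X_test)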

  11. A segmentation algorithm for automated tracking of fast swimming unlabelled cells in three dimensions.

    PubMed

    Pimentel, J A; Carneiro, J; Darszon, A; Corkidi, G

    2012-01-01

    Recent advances in microscopy and cytolabelling methods enable the real-time imaging of cells as they move and interact in their real physiological environment. Scenarios in which multiple cells move autonomously in all directions are not uncommon in biology. A remarkable example is the swimming of marine spermatozoa in search of the conspecific oocyte. Imaging cells in these scenarios, particularly when they move fast and are poorly labelled or even unlabelled, requires very fast three-dimensional time-lapse (3D+t) imaging. This 3D+t imaging poses challenges not only to the acquisition systems but also to the image analysis algorithms. It is in this context that this work describes an original automated multiparticle segmentation method to analyse motile translucent cells in 3D microscopical volumes. The proposed segmentation technique takes advantage of the way the cell appearance changes with the distance to the focal plane position. The cells' translucent properties and their interaction with light produce a specific pattern: when the cell is within or close to the focal plane, its two-dimensional (2D) appearance matches a bright spot surrounded by a dark ring, whereas when it is farther from the focal plane the cell contrast is inverted, looking like a dark spot surrounded by a bright ring. The proposed method analyses the acquired video sequence frame by frame, taking advantage of 2D image segmentation algorithms to identify and select candidate cellular sections. The crux of the method is in the sequential filtering of the candidate sections, first by template matching against the in-focus and out-of-focus templates and second by considering adjacent candidate sections in 3D. These sequential filters effectively narrow down the number of segmented candidate sections, making the automatic tracking of cells in three dimensions a straightforward operation. PMID:21999166

  12. iCut: an Integrative Cut Algorithm Enables Accurate Segmentation of Touching Cells

    PubMed Central

    He, Yong; Gong, Hui; Xiong, Benyi; Xu, Xiaofeng; Li, Anan; Jiang, Tao; Sun, Qingtao; Wang, Simin; Luo, Qingming; Chen, Shangbin

    2015-01-01

    Individual cells play essential roles in the biological processes of the brain. The number of neurons changes during both normal development and disease progression. High-resolution imaging has made it possible to directly count cells. However, the automatic and precise segmentation of touching cells continues to be a major challenge for massive and highly complex datasets. Thus, an integrative cut (iCut) algorithm, which combines information regarding spatial location and intervening and concave contours with the established normalized cut, has been developed. iCut involves two key steps: (1) a weighting matrix is first constructed with the abovementioned information regarding the touching cells and (2) a normalized cut algorithm that uses the weighting matrix is implemented to separate the touching cells into isolated cells. This novel algorithm was evaluated using two types of data: the open SIMCEP benchmark dataset and our micro-optical imaging dataset from a Nissl-stained mouse brain. It has achieved a promising recall/precision of 91.2 ± 2.1%/94.1 ± 1.8% and 86.8 ± 4.1%/87.5 ± 5.7%, respectively, for the two datasets. As quantified using the harmonic mean of recall and precision, the accuracy of iCut is higher than that of some state-of-the-art algorithms. The better performance of this fully automated algorithm can benefit studies of brain cytoarchitecture. PMID:26168908
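
    A hedged sketch of the normalized-cut step on a single blob of touching cells is given below: a pixel affinity matrix built from spatial proximity and intensity similarity is cut with spectral clustering. The intervening- and concave-contour terms of iCut are omitted, and the affinity scales are assumptions chosen for illustration.

        # Hedged sketch: split one blob of touching cells with a normalized-cut-style
        # spectral clustering on a precomputed pixel affinity matrix.
        import numpy as np
        from sklearn.cluster import SpectralClustering

        def split_touching_blob(intensity, mask, n_cells=2, sigma_xy=5.0, sigma_i=0.2):
            ys, xs = np.nonzero(mask)
            vals = intensity[ys, xs].astype(float)
            pos = np.column_stack([ys, xs]).astype(float)
            d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)   # spatial distances
            di2 = (vals[:, None] - vals[None, :]) ** 2                # intensity distances
            affinity = np.exp(-d2 / (2 * sigma_xy ** 2) - di2 / (2 * sigma_i ** 2))
            labels = SpectralClustering(n_clusters=n_cells,
                                        affinity='precomputed').fit_predict(affinity)
            out = np.zeros(mask.shape, dtype=int)
            out[ys, xs] = labels + 1                                   # 0 stays background
            return out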

  13. Segments.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Presents a market taxonomy for higher education, including what it reveals about the structure of the market, the model's technical attributes, and its capacity to explain pricing behavior. Details the identification of the principal seams separating one market segment from another and how student aspirations help to organize the market, making…

  14. Optimized adaptation algorithm for HEVC/H.265 dynamic adaptive streaming over HTTP using variable segment duration

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2016-04-01

    Adaptive video streaming using HTTP has become popular in recent years for commercial video delivery. The recent MPEG-DASH standard allows interoperability and adaptability between servers and clients from different vendors. The delivery of the MPD (Media Presentation Description) files in DASH and the DASH client behaviours are beyond the scope of the DASH standard. However, the different adaptation algorithms employed by the clients do affect the overall performance of the system and users' QoE (Quality of Experience), hence the need for research in this field. Moreover, standard DASH delivery is based on fixed segments of the video. However, there is no standard segment duration for DASH where various fixed segment durations have been employed by different commercial solutions and researchers with their own individual merits. Most recently, the use of variable segment duration in DASH has emerged but only a few preliminary studies without practical implementation exist. In addition, such a technique requires a DASH client to be aware of segment duration variations, and this requirement and the corresponding implications on the DASH system design have not been investigated. This paper proposes a segment-duration-aware bandwidth estimation and next-segment selection adaptation strategy for DASH. Firstly, an MPD file extension scheme to support variable segment duration is proposed and implemented in a realistic hardware testbed. The scheme is tested on a DASH client, and the tests and analysis have led to an insight on the time to download next segment and the buffer behaviour when fetching and switching between segments of different playback durations. Issues like sustained buffering when switching between segments of different durations and slow response to changing network conditions are highlighted and investigated. An enhanced adaptation algorithm is then proposed to accurately estimate the bandwidth and precisely determine the time to download the next

  15. Segmentation algorithm via Cellular Neural/Nonlinear Network: implementation on Bio-inspired hardware platform

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Vecchio, Pietro; Grassi, Giuseppe

    2011-12-01

    The Bio-inspired (Bi-i) Cellular Vision System is a computing platform consisting of sensing, array sensing-processing, and digital signal processing. The platform is based on the Cellular Neural/Nonlinear Network (CNN) paradigm. This article presents the implementation of a novel CNN-based segmentation algorithm on the Bi-i system. Each part of the algorithm, along with the corresponding implementation on the hardware platform, is carefully described throughout the article. The experimental results, obtained for the Foreman and Car-phone video sequences, highlight the feasibility of the approach, which provides a frame rate of about 26 frames/s. Comparisons with existing CNN-based methods show that the conceived approach is more accurate, thus representing a good trade-off between real-time requirements and accuracy.

  16. A quantum mechanics-based algorithm for vessel segmentation in retinal images

    NASA Astrophysics Data System (ADS)

    Youssry, Akram; El-Rafei, Ahmed; Elramly, Salwa

    2016-03-01

    Blood vessel segmentation is an important step in retinal image analysis. It is one of the steps required for computer-aided detection of ophthalmic diseases. In this paper, a novel quantum mechanics-based algorithm for retinal vessel segmentation is presented. The algorithm consists of three major steps. The first step is the preprocessing of the images to prepare the images for further processing. The second step is feature extraction where a set of four features is generated at each image pixel. These features are then combined using a nonlinear transformation for dimensionality reduction. The final step is applying a recently proposed quantum mechanics-based framework for image processing. In this step, pixels are mapped to quantum systems that are allowed to evolve from an initial state to a final state governed by Schrödinger's equation. The evolution is controlled by the Hamiltonian operator which is a function of the extracted features at each pixel. A measurement step is consequently performed to determine whether the pixel belongs to vessel or non-vessel classes. Many functional forms of the Hamiltonian are proposed, and the best performing form was selected. The algorithm is tested on the publicly available DRIVE database. The average results for sensitivity, specificity, and accuracy are 80.29, 97.34, and 95.83 %, respectively. These results are compared to some recently published techniques showing the superior performance of the proposed method. Finally, the implementation of the algorithm on a quantum computer and the challenges facing this implementation are introduced.
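
    A loosely related, purely illustrative sketch of the quantum-inspired classification step is given below: each pixel is treated as a two-level system whose Hamiltonian depends on the pixel's (already reduced) feature value, the state is evolved under Schrödinger's equation, and a measurement decides vessel versus non-vessel. The specific Hamiltonian form, evolution time and decision threshold are assumptions, not the functional forms evaluated in the paper.

        # Illustrative quantum-inspired pixel classification (assumed Hamiltonian form).
        import numpy as np
        from scipy.linalg import expm

        def quantum_classify(features, t=1.0, threshold=0.5):
            """features: (N,) reduced feature value per pixel, scaled to [0, 1]."""
            labels = np.zeros(features.shape, dtype=bool)
            for i, f in enumerate(features):
                H = np.array([[1.0 - f, f],               # feature-dependent Hamiltonian
                              [f, f - 1.0]])
                U = expm(-1j * H * t)                      # time-evolution operator
                psi = U @ np.array([1.0, 0.0])             # start in the ground state
                labels[i] = np.abs(psi[1]) ** 2 > threshold  # measurement step
            return labels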

  18. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study.

    PubMed

    Rudyanto, Rina D; Kerkstra, Sjoerd; van Rikxoort, Eva M; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, Ilkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C; Washko, George R; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C; Fabijanska, Anna; Smistad, Erik; Elster, Anne C; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G H; Campo, Arantza; Prokop, Mathias; de Jong, Pim A; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram

    2014-10-01

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases. PMID:25113321

  19. SAR Image Segmentation with Unknown Number of Classes Combined Voronoi Tessellation and Rjmcmc Algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Q. H.; Li, Y.; Wang, Y.

    2016-06-01

    This paper presents a novel segmentation method for automatically determining the number of classes in Synthetic Aperture Radar (SAR) images by combining Voronoi tessellation and a Reversible Jump Markov Chain Monte Carlo (RJMCMC) strategy. Instead of being given a priori, the number of classes is considered a random variable subject to a Poisson distribution. Based on Voronoi tessellation, the image is divided into homogeneous polygons. Within the Bayesian paradigm, a posterior distribution which characterizes the segmentation and model parameters conditional on a given SAR image can be obtained up to a normalizing constant. Then, an RJMCMC algorithm involving six move types is designed to simulate the posterior distribution, the move types being: splitting or merging real classes, updating the parameter vector, updating the label field, moving positions of generating points, birth or death of generating points, and birth or death of an empty class. Experimental results with real and simulated SAR images demonstrate that the proposed method can determine the number of classes automatically and segment homogeneous regions well.

  20. An Algorithm for Obtaining the Distribution of 1-Meter Lightning Channel Segment Altitudes for Application in Lightning NOx Production Estimation

    NASA Technical Reports Server (NTRS)

    Peterson, Harold; Koshak, William J.

    2009-01-01

    An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to the plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousands of lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons, and were analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.
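
    The final binning step can be sketched as below: each mapped channel, represented as an ordered list of source locations, is resampled into one-metre segments whose altitudes are then histogrammed. The channel representation and the bin width are assumptions; the VHF thresholding and smoothing steps are omitted.

        # Hedged sketch: one-metre channel segment altitude distribution.
        import numpy as np

        def segment_altitude_distribution(channels, bin_width=500.0, max_alt=20000.0):
            """channels: list of (M, 3) arrays of ordered x, y, z locations in metres."""
            seg_altitudes = []
            for pts in channels:
                for p, q in zip(pts[:-1], pts[1:]):
                    length = np.linalg.norm(q - p)
                    n_seg = max(int(round(length)), 1)     # ~one-metre segments
                    frac = (np.arange(n_seg) + 0.5) / n_seg
                    seg_altitudes.append(p[2] + frac * (q[2] - p[2]))
            seg_altitudes = np.concatenate(seg_altitudes)
            bins = np.arange(0.0, max_alt + bin_width, bin_width)
            counts, edges = np.histogram(seg_altitudes, bins=bins)
            return counts, edges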

  1. Graph-based unsupervised segmentation algorithm for cultured neuronal networks' structure characterization and modeling.

    PubMed

    de Santos-Sierra, Daniel; Sendiña-Nadal, Irene; Leyva, Inmaculada; Almendral, Juan A; Ayali, Amir; Anava, Sarit; Sánchez-Ávila, Carmen; Boccaletti, Stefano

    2015-06-01

    Large-scale phase-contrast images taken at high resolution throughout the life of a cultured neuronal network are analyzed by a graph-based unsupervised segmentation algorithm with a very low computational cost, scaling linearly with the image size. The processing automatically retrieves the whole network structure, an object whose mathematical representation is a matrix in which nodes are identified neurons or clusters of neurons, and links are the reconstructed connections between them. The algorithm is also able to extract any other relevant morphological information characterizing neurons and neurites. More importantly, and in contrast to other segmentation methods that require fluorescence imaging from immunocytochemistry techniques, our non-invasive measures allow us to perform a longitudinal analysis during the maturation of a single culture. Such an analysis provides a way of identifying the main physical processes underlying the self-organization of the neuron ensemble into a complex network, and drives the formulation of a phenomenological model able to describe qualitatively the overall scenario observed during the culture growth. PMID:25393432

  2. Larynx Anatomy

    MedlinePlus

    Title: Larynx Anatomy. Description: Anatomy of the larynx; drawing shows the ...

  3. Pharynx Anatomy

    MedlinePlus

    Title: Pharynx Anatomy. Description: Anatomy of the pharynx; drawing shows the ...

  4. Vulva Anatomy

    MedlinePlus

    Title: Vulva Anatomy. Description: Anatomy of the vulva; drawing shows the ...

  5. CT liver volumetry using geodesic active contour segmentation with a level-set algorithm

    NASA Astrophysics Data System (ADS)

    Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard

    2010-03-01

    Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step schema. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. Using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to fit the liver boundary more precisely. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. The automated liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetry based on the automated scheme agreed excellently with the gold-standard manual volumetry (intra-class correlation coefficient of 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
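
    A hedged stand-in for the contour-evolution step is sketched below using scikit-image's morphological geodesic active contour; it is not the authors' fast-marching plus level-set implementation, and the smoothing, balloon and filter parameters are assumptions.

        # Hedged sketch: geodesic active contour refinement of a rough liver mask.
        import numpy as np
        from skimage.filters import gaussian
        from skimage.segmentation import (inverse_gaussian_gradient,
                                          morphological_geodesic_active_contour)

        def refine_liver_surface(image, init_mask, iterations=200):
            """image: 2-D slice or 3-D CT volume; init_mask: rough binary liver estimate."""
            smoothed = gaussian(image.astype(float), sigma=2.0)     # noise suppression
            speed = inverse_gaussian_gradient(smoothed)             # edge-stopping map
            refined = morphological_geodesic_active_contour(
                speed, iterations, init_level_set=init_mask,
                smoothing=2, balloon=1)                             # mild outward force
            return refined.astype(bool)

        # Volume estimate from the refined mask (voxel_volume_mm3 is an assumption):
        # liver_volume_cc = refined.sum() * voxel_volume_mm3 / 1000.0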

  6. Fast and robust segmentation of solar EUV images: algorithm and results for solar cycle 23

    NASA Astrophysics Data System (ADS)

    Barra, V.; Delouille, V.; Kretzschmar, M.; Hochedez, J.-F.

    2009-10-01

    Context: The study of the variability of the solar corona and the monitoring of coronal holes, quiet sun and active regions are of great importance in astrophysics as well as for space weather and space climate applications. Aims: In a previous work, we presented the spatial possibilistic clustering algorithm (SPoCA). This is a multi-channel unsupervised spatially-constrained fuzzy clustering method that automatically segments solar extreme ultraviolet (EUV) images into regions of interest. The results we reported on SoHO-EIT images taken from February 1997 to May 2005 were consistent with previous knowledge in terms of both areas and intensity estimations. However, they presented some artifacts due to the method itself. Methods: Herein, we propose a new algorithm, based on SPoCA, that removes these artifacts. We focus on two points: the definition of an optimal clustering with respect to the regions of interest, and the accurate definition of the cluster edges. We moreover propose methodological extensions to this method, and we illustrate these extensions with the automatic tracking of active regions. Results: The much improved algorithm can decompose the whole set of EIT solar images over the 23rd solar cycle into regions that can clearly be identified as quiet sun, coronal hole and active region. The variations of the parameters resulting from the segmentation, i.e. the area, mean intensity, and relative contribution to the solar irradiance, are consistent with previous results and thus validate the decomposition. Furthermore, we find indications for a small variation of the mean intensity of each region in correlation with the solar cycle. Conclusions: The method is generic enough to allow the introduction of other channels or data. New applications are now expected, e.g. related to SDO-AIA data.

  7. Segmenting clouds from space : a hybrid multispectral classification algorithm for satellite imagery.

    SciTech Connect

    Post, Brian Nelson; Wilson, Mark P.; Smith, Jody Lynn; Wehlburg, Joseph Cornelius; Nandy, Prabal

    2005-07-01

    This paper reports on a novel approach to atmospheric cloud segmentation from a space-based multispectral pushbroom satellite system. The satellite collects 15 spectral bands ranging from the visible (0.45 µm) to the long-wave infrared (IR, 10.7 µm). The images are radiometrically calibrated and have ground sample distances (GSD) of 5 meters for the visible to very-near-IR bands and a GSD of 20 meters for the near-IR to long-wave-IR bands. The algorithm is a hybrid classification system in the sense that supervised and unsupervised networks are used in conjunction. For performance evaluation, a series of numerical comparisons to human-derived cloud borders was performed. A set of 33 scenes was selected to represent various climate zones with different land cover from around the world. The algorithm proceeds as follows. Band separation is performed to find the band combinations that provide significant separation between cloud and background classes. The candidate bands are fed into a K-means clustering algorithm in order to identify areas in the image with similar centroids. Each cluster is then compared to the cloud and background prototypes using the Jeffries-Matusita distance, and each unknown cluster is assigned to the prototype at minimum distance. A classification rate of 88% was found when using one short-wave-IR band and one mid-wave-IR band. Past investigators have reported segmentation accuracies ranging from 67% to 80%, many of which require human intervention. A sensitivity of 75% and a specificity of 90% were reported as well.
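
    A sketch of the hybrid idea described above: unsupervised K-means clusters the multispectral pixels, then each cluster is assigned to the cloud or background prototype with the smaller Jeffries-Matusita (JM) distance. The number of clusters, the prototype statistics and the array layout are illustrative assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def jeffries_matusita(mean1, cov1, mean2, cov2):
            """JM distance between two Gaussian class models: JM = 2(1 - exp(-B))."""
            dm = (mean1 - mean2).reshape(-1, 1)
            cov = 0.5 * (cov1 + cov2)
            b = 0.125 * float(dm.T @ np.linalg.inv(cov) @ dm) \
                + 0.5 * np.log(np.linalg.det(cov) /
                               np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
            return 2.0 * (1.0 - np.exp(-b))

        def classify_clusters(pixels, cloud_stats, background_stats, n_clusters=8):
            """pixels: (n_pixels, n_bands); *_stats: (mean vector, covariance matrix)."""
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixels)
            cloud_mask = np.zeros(len(pixels), dtype=bool)
            for k in range(n_clusters):
                members = pixels[labels == k]
                mean_k = members.mean(axis=0)
                cov_k = np.cov(members, rowvar=False)
                d_cloud = jeffries_matusita(mean_k, cov_k, *cloud_stats)
                d_back = jeffries_matusita(mean_k, cov_k, *background_stats)
                cloud_mask[labels == k] = d_cloud < d_back   # closer prototype wins
            return cloud_mask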

  8. Development and validation of a segmentation-free polyenergetic algorithm for dynamic perfusion computed tomography.

    PubMed

    Lin, Yuan; Samei, Ehsan

    2016-07-01

    Dynamic perfusion imaging can provide the morphologic details of the scanned organs as well as dynamic information on blood perfusion. However, due to the polyenergetic property of the x-ray spectra, the beam-hardening effect results in undesirable artifacts and inaccurate CT values. To address this problem, this study proposes a segmentation-free polyenergetic dynamic perfusion imaging algorithm (pDP) to provide superior perfusion imaging. A dynamic perfusion exam usually comprises two phases, i.e., a precontrast phase and a postcontrast phase. In the precontrast phase, the attenuation properties of diverse base materials (e.g., in a thorax perfusion exam, base materials can include lung, fat, breast, soft tissue, bone, and metal implants) can be incorporated to reconstruct artifact-free precontrast images. If patient motion is negligible or can be corrected by registration, the precontrast images can then be employed as a priori information to derive linearized iodine projections from the postcontrast images. With the linearized iodine projections, iodine perfusion maps can be reconstructed directly without the influence of various confounding factors, such as iodine location, patient size, x-ray spectrum, and background tissue type. A series of simulations was conducted on a dynamic iodine calibration phantom and a dynamic anthropomorphic thorax phantom to validate the proposed algorithm. The simulations with the dynamic iodine calibration phantom showed that the proposed algorithm could effectively eliminate the beam-hardening effect and enable quantitative iodine map reconstruction across various influential factors. The error range of the iodine concentration factors ([Formula: see text]) was reduced from [Formula: see text] for filtered back-projection (FBP) to [Formula: see text] for pDP. The quantitative results of the simulations with the dynamic anthropomorphic thorax phantom indicated that the maximum error of iodine concentrations can be reduced from

  9. A two-dimensional Segmented Boundary Algorithm for complex moving solid boundaries in Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Khorasanizade, Sh.; Sousa, J. M. M.

    2016-03-01

    A Segmented Boundary Algorithm (SBA) is proposed to deal with complex boundaries and moving bodies in Smoothed Particle Hydrodynamics (SPH). Boundaries are formed in this algorithm with chains of lines obtained from the decomposition of two-dimensional objects, based on simple line geometry. Various two-dimensional, viscous fluid flow cases have been studied here using a truly incompressible SPH method with the aim of assessing the capabilities of the SBA. Firstly, the flow over a stationary circular cylinder in a plane channel was analyzed at steady and unsteady regimes, for a single value of blockage ratio. Subsequently, the flow produced by a moving circular cylinder with a prescribed acceleration inside a plane channel was investigated as well. Next, the simulation of the flow generated by the impulsive start of a flat plate, again inside a plane channel, has been carried out. This was followed by the study of confined sedimentation of an elliptic body subjected to gravity, for various density ratios. The set of test cases was completed with the simulation of periodic flow around a sunflower-shaped object. Extensive comparisons of the results obtained here with published data have demonstrated the accuracy and effectiveness of the proposed algorithms, namely in cases involving complex geometries and moving bodies.

  10. Brain MR image segmentation with spatial constrained K-mean algorithm and dual-tree complex wavelet transform.

    PubMed

    Zhang, Jingdan; Jiang, Wuhan; Wang, Ruichun; Wang, Le

    2014-09-01

    In brain MR images, noise and low contrast significantly degrade segmentation results. In this paper, we propose an automatic unsupervised segmentation method for brain MR images that integrates the dual-tree complex wavelet transform (DT-CWT) with a K-means algorithm. Firstly, a multi-dimensional feature vector is constructed from the intensity, the low-frequency subband of the DT-CWT, and spatial position information. Then, a spatially constrained K-means algorithm is presented as the segmentation system. The proposed method is validated by extensive experiments using both simulated and real T1-weighted MR images and compared with state-of-the-art algorithms. PMID:24994513
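
    A sketch of the feature construction described above: per-pixel intensity, a low-frequency component, and scaled spatial coordinates, clustered with K-means. A Gaussian low-pass filter stands in here for the DT-CWT lowpass subband, and the spatial weight is an illustrative assumption.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from sklearn.cluster import KMeans

        def segment_slice(image, n_classes=3, spatial_weight=0.1):
            lowfreq = gaussian_filter(image, sigma=2.0)      # stand-in for the DT-CWT lowpass
            rows, cols = np.indices(image.shape)
            features = np.column_stack([
                image.ravel(),
                lowfreq.ravel(),
                spatial_weight * rows.ravel(),               # spatial constraint
                spatial_weight * cols.ravel(),
            ]).astype(float)
            # normalize each feature so intensity does not dominate the distances
            features = (features - features.mean(0)) / (features.std(0) + 1e-8)
            labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(features)
            return labels.reshape(image.shape)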

  11. Infrared active polarimetric imaging system controlled by image segmentation algorithms: application to decamouflage

    NASA Astrophysics Data System (ADS)

    Vannier, Nicolas; Goudail, François; Plassart, Corentin; Boffety, Matthieu; Feneyrou, Patrick; Leviandier, Luc; Galland, Frédéric; Bertaux, Nicolas

    2016-05-01

    We describe an active polarimetric imager with laser illumination at 1.5 µm that can generate any illumination and analysis polarization state on the Poincaré sphere. Thanks to its full polarization agility and to image analysis of the scene with an ultrafast active-contour-based segmentation algorithm, it can perform adaptive polarimetric contrast optimization. We demonstrate the capacity of this imager to detect manufactured objects in different types of environments for applications such as decamouflage and hazardous object detection. We compare two imaging modes having different numbers of polarimetric degrees of freedom and underline the characteristics that a polarimetric imager aimed at this type of application should possess.

  12. A Fast Semiautomatic Algorithm for Centerline-Based Vocal Tract Segmentation.

    PubMed

    Poznyakovskiy, Anton A; Mainka, Alexander; Platzek, Ivan; Mürbe, Dirk

    2015-01-01

    Vocal tract morphology is an important factor in voice production. Its analysis has potential implications for educational matters as well as medical issues like voice therapy. The knowledge of the complex adjustments in the spatial geometry of the vocal tract during phonation is still limited. For a major part, this is due to difficulties in acquiring geometry data of the vocal tract in the process of voice production. In this study, a centerline-based segmentation method using active contours was introduced to extract the geometry data of the vocal tract obtained with MRI during sustained vowel phonation. The applied semiautomatic algorithm was found to be time- and interaction-efficient and allowed performing various three-dimensional measurements on the resulting model. The method is suitable for an improved detailed analysis of the vocal tract morphology during speech or singing which might give some insights into the underlying mechanical processes. PMID:26557710

  13. A Fast Semiautomatic Algorithm for Centerline-Based Vocal Tract Segmentation

    PubMed Central

    Poznyakovskiy, Anton A.; Mainka, Alexander; Platzek, Ivan; Mürbe, Dirk

    2015-01-01

    Vocal tract morphology is an important factor in voice production. Its analysis has potential implications for educational matters as well as medical issues like voice therapy. The knowledge of the complex adjustments in the spatial geometry of the vocal tract during phonation is still limited. For a major part, this is due to difficulties in acquiring geometry data of the vocal tract in the process of voice production. In this study, a centerline-based segmentation method using active contours was introduced to extract the geometry data of the vocal tract obtained with MRI during sustained vowel phonation. The applied semiautomatic algorithm was found to be time- and interaction-efficient and allowed performing various three-dimensional measurements on the resulting model. The method is suitable for an improved detailed analysis of the vocal tract morphology during speech or singing which might give some insights into the underlying mechanical processes. PMID:26557710

  14. Modal characterization of the ASCIE segmented optics testbed: New algorithms and experimental results

    NASA Technical Reports Server (NTRS)

    Carrier, Alain C.; Aubrun, Jean-Noel

    1993-01-01

    New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response, so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses are in turn analyzed as the spectral data become available and, which is new, the results are used to maintain high-quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response, to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response, with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.

  15. Segmentation of SoHO/EIT Images using fuzzy clustering algorithms

    NASA Astrophysics Data System (ADS)

    Delouille, V.; Barra, V.; Hochedez, J.

    2007-12-01

    The study of the variability of the solar corona and the monitoring of its traditional regions (coronal holes, quiet Sun and active regions) are of great importance in astrophysics as well as for space weather and space climate applications. In this presentation, I will propose a multi-channel unsupervised spatially-constrained fuzzy clustering algorithm that automatically segments EUV solar images into coronal holes, quiet Sun and active regions. The use of fuzzy logic makes it possible to manage the various noise sources present in the images and the imprecision in the definition of the above-mentioned regions. The process is fast and automatic. It is applied to SoHO-EIT images taken from January 1997 to May 2005, thus spanning almost a full solar cycle. Results in terms of area and intensity estimations are consistent with previous knowledge. The method reveals the rotational and other mid-term periodicities in the extracted time series across solar cycle 23. Further, such an approach paves the way to bridging observations between spatially resolved data from imaging telescopes and time series from radiometers. Time series resulting from the segmentation of EUV coronal images can indeed provide an essential component in the process of reconstructing the solar spectrum.
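
    A bare-bones fuzzy c-means loop, shown only to make the core clustering update concrete; the algorithm described above adds multi-channel input and a spatial constraint on top of this. The fuzzification exponent and iteration limits are illustrative.

        import numpy as np

        def fuzzy_cmeans(x, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
            """x: (n_samples, n_features). Returns (centers, memberships)."""
            rng = np.random.default_rng(seed)
            u = rng.random((n_clusters, len(x)))
            u /= u.sum(axis=0)                        # memberships sum to 1 per sample
            for _ in range(n_iter):
                um = u ** m
                centers = (um @ x) / um.sum(axis=1, keepdims=True)
                d = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=2) + 1e-12
                u_new = 1.0 / (d ** (2.0 / (m - 1.0)))
                u_new /= u_new.sum(axis=0)
                if np.abs(u_new - u).max() < tol:
                    u = u_new
                    break
                u = u_new
            return centers, u

        # usage: flatten an EUV image (or a stack of channels) into (n_pixels, n_channels)
        # and label each pixel with np.argmax(memberships, axis=0).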

  16. A novel Iterative algorithm to text segmentation for web born-digital images

    NASA Astrophysics Data System (ADS)

    Xu, Zhigang; Zhu, Yuesheng; Sun, Ziqiang; Liu, Zhen

    2015-07-01

    Since web born-digital images have low resolution and dense text atoms, text region over-merging and missed detection are still two open issues to be addressed. In this paper, a novel iterative algorithm is proposed to locate and segment text regions. In each iteration, candidate text regions are generated by detecting Maximally Stable Extremal Regions (MSER) with diminishing thresholds and categorized into different groups based on a new similarity graph, and the text region groups are identified by applying several features and rules. With our proposed overlap-checking method, the final well-segmented text regions are selected from these groups over all iterations. Experiments have been carried out on the web born-digital image datasets used for the robust reading competitions in ICDAR 2011 and 2013, and the results demonstrate that our proposed scheme can significantly reduce both the number of over-merged regions and the loss rate of target atoms; the overall performance exceeds that of the best methods reported in the two competitions in terms of recall rate and f-score, at the cost of slightly higher computational complexity.
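
    A sketch of the candidate-generation step only, assuming OpenCV: MSER detection is repeated with progressively smaller stability thresholds and the candidate regions are pooled. The grouping, rule-based filtering and overlap checking described above are omitted, and the delta values are illustrative.

        import cv2

        def candidate_text_regions(gray, deltas=(9, 7, 5, 3)):
            """gray: 8-bit grayscale image. Returns a list of (regions, bboxes) per pass."""
            candidates = []
            mser = cv2.MSER_create()
            for d in deltas:                        # diminishing thresholds
                mser.setDelta(d)
                regions, bboxes = mser.detectRegions(gray)
                candidates.append((regions, bboxes))
            return candidates

        # usage:
        # gray = cv2.cvtColor(cv2.imread("web_image.png"), cv2.COLOR_BGR2GRAY)
        # passes = candidate_text_regions(gray)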

  17. A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.

    PubMed

    Liu, Zhi; Xu, Shuqiong; Zhang, Yun; Chen, Chun Lung Philip

    2014-11-01

    This technical correspondence presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve a more reliable and robust segmentation performance for humanoid robots. The pixel-wise intensity, gradient, and C1 SMF features are extracted via the local homogeneity model and Gabor filter and used as inputs to the MFMK-SVM model. This provides multiple features of the samples for easier implementation and efficient computation of the MFMK-SVM model. A new clustering method, called the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) clustering algorithm, is proposed by integrating a type-2 fuzzy criterion into the clustering optimization process to improve the robustness and reliability of clustering results through iterative optimization. Furthermore, the clustering validity is employed to select the training samples for the learning of the MFMK-SVM model. The MFMK-SVM scene segmentation method is able to take full advantage of the multiple features of the scene image and the ability of multiple kernels. Experiments on the BSDS dataset and real natural scene images demonstrate the superior performance of our proposed method. PMID:25248211

  18. An Iris Segmentation Algorithm based on Edge Orientation for Off-angle Iris Recognition

    SciTech Connect

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J; Boehnen, Chris Bensing

    2013-01-01

    Iris recognition is known as one of the most accurate and reliable biometrics. However, the accuracy of iris recognition systems depends on the quality of data capture and is negatively affected by several factors such as angle, occlusion, and dilation. In this paper, we present a segmentation algorithm for off-angle iris images that uses edge detection, edge elimination, edge classification, and ellipse fitting techniques. In our approach, we first detect all candidate edges in the iris image by using the Canny edge detector; this collection contains edges from the iris and pupil boundaries as well as eyelashes, eyelids, iris texture, etc. Edge orientation is used to eliminate the edges that cannot be part of the iris or pupil. Then, we classify the remaining edge points into two sets, pupil edges and iris edges. Finally, we randomly generate subsets of iris and pupil edge points, fit ellipses to each subset, select ellipses with similar parameters, and average them to form the resultant ellipses. Based on the results from real experiments, the proposed method shows effectiveness in segmentation for off-angle iris images.
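
    A sketch of the final ellipse-fitting stage described above: ellipses are fit to random subsets of edge points and the parameters of mutually consistent fits are averaged. The earlier Canny detection, orientation-based elimination and pupil/iris classification are assumed to have produced the point set already; subset size, trial count and the consensus tolerance are illustrative.

        import numpy as np
        import cv2

        def consensus_ellipse(edge_points, n_trials=50, subset_size=20, center_tol=10.0):
            """edge_points: (N, 2) float32 array of (x, y) boundary candidates, N >= subset_size."""
            rng = np.random.default_rng(0)
            fits = []
            for _ in range(n_trials):
                idx = rng.choice(len(edge_points), size=subset_size, replace=False)
                pts = edge_points[idx].astype(np.float32)
                (cx, cy), (major, minor), angle = cv2.fitEllipse(pts)
                fits.append((cx, cy, major, minor, angle))
            fits = np.array(fits)
            # keep fits whose centres agree with the median fit, then average them
            med = np.median(fits[:, :2], axis=0)
            keep = np.linalg.norm(fits[:, :2] - med, axis=1) < center_tol
            return fits[keep].mean(axis=0)

        # usage: edges = cv2.Canny(iris_gray, 50, 150); extract the (x, y) edge coordinates,
        # filter and classify them, then call consensus_ellipse on each point set.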

  19. An iris segmentation algorithm based on edge orientation for off-angle iris recognition

    NASA Astrophysics Data System (ADS)

    Karakaya, Mahmut; Barstow, Del; Santos-Villalobos, Hector; Boehnen, Christopher

    2013-03-01

    Iris recognition is known as one of the most accurate and reliable biometrics. However, the accuracy of iris recognition systems depends on the quality of data capture and is negatively affected by several factors such as angle, occlusion, and dilation. In this paper, we present a segmentation algorithm for off-angle iris images that uses edge detection, edge elimination, edge classification, and ellipse fitting techniques. In our approach, we first detect all candidate edges in the iris image by using the Canny edge detector; this collection contains edges from the iris and pupil boundaries as well as eyelashes, eyelids, iris texture, etc. Edge orientation is used to eliminate the edges that cannot be part of the iris or pupil. Then, we classify the remaining edge points into two sets, pupil edges and iris edges. Finally, we randomly generate subsets of iris and pupil edge points, fit ellipses to each subset, select ellipses with similar parameters, and average them to form the resultant ellipses. Based on the results from real experiments, the proposed method shows effectiveness in segmentation for off-angle iris images.

  20. Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  1. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbits. Array dimensions within the software were structured to fit within the currently available 12 megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  2. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.

  3. Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.

  4. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    PubMed Central

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors than to the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953
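
    An illustrative version of the paper's main point, not the authors' pipeline: simple voxel classifiers work well when the feature vector includes neighbourhood information. Here local-mean images of each modality are appended to the raw voxel intensities before a logistic-regression classifier is trained; the variable names and neighbourhood radius are placeholders.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.linear_model import LogisticRegression

        def voxel_features(t1, t2, flair, radius=1):
            """Stack raw intensities and local means into an (n_voxels, 6) feature matrix."""
            size = 2 * radius + 1
            channels = [t1, t2, flair,
                        uniform_filter(t1, size),      # neighbourhood features
                        uniform_filter(t2, size),
                        uniform_filter(flair, size)]
            return np.column_stack([c.ravel() for c in channels])

        def train_lesion_classifier(t1, t2, flair, manual_mask):
            X = voxel_features(t1, t2, flair)
            y = manual_mask.ravel().astype(int)        # manual segmentation as labels
            return LogisticRegression(max_iter=1000).fit(X, y)

        # usage: probabilities = model.predict_proba(voxel_features(t1, t2, flair))[:, 1]
        #        lesion_mask = probabilities.reshape(t1.shape) > 0.5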

  5. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak-to-background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
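
    A sketch of the selection logic described above, under the assumption that one regression tree per segmentation method learns to predict that method's Dice score from tumour descriptors (volume, peak-to-background SUV ratio, texture), and the method with the highest predicted score is applied at test time. Method names and feature values are placeholders, not the nine methods used in the paper.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        class MethodSelector:
            def __init__(self, method_names):
                self.trees = {m: DecisionTreeRegressor(max_depth=4) for m in method_names}

            def fit(self, features, dsc_per_method):
                """features: (n_cases, n_descriptors); dsc_per_method: {name: (n_cases,) array}."""
                for name, tree in self.trees.items():
                    tree.fit(features, dsc_per_method[name])
                return self

            def best_method(self, case_features):
                case_features = np.asarray(case_features).reshape(1, -1)
                scores = {m: float(t.predict(case_features)[0]) for m, t in self.trees.items()}
                return max(scores, key=scores.get), scores

        # usage:
        # selector = MethodSelector(["thresholding", "region_growing", "clustering"])
        # selector.fit(train_features, train_dsc)
        # chosen, predicted = selector.best_method([volume, sbr, texture_metric])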

  6. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography.

    PubMed

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak-to-background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology. PMID:27273293

  7. SU-E-J-142: Performance Study of Automatic Image-Segmentation Algorithms in Motion Tracking Via MR-IGRT

    SciTech Connect

    Feng, Y; Olsen, J.; Parikh, P.; Noel, C; Wooten, H; Du, D; Mutic, S; Hu, Y; Kawrakow, I; Dempsey, J

    2014-06-01

    Purpose: To evaluate commonly used segmentation algorithms on a commercially available real-time MR image-guided radiotherapy (MR-IGRT) system (ViewRay) and compare the strengths and weaknesses of each method, with the goal of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of bladder, kidney, duodenum, and liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames was selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, expert manual contours of the organs or tumor from a physician were used as the ground truth. Metric values of sensitivity, specificity, Jaccard similarity, and Dice coefficient were computed for comparison. Results: In the segmentation of single image frames, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment a liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT. Future studies include a selection of conformal segmentation methods based on image/organ-specific information
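
    The four overlap metrics used in this comparison, written out for binary masks (standard definitions, shown here only to make the evaluation step concrete).

        import numpy as np

        def segmentation_metrics(pred, truth):
            """pred, truth: boolean arrays of the same shape (automatic vs. manual mask)."""
            pred, truth = pred.astype(bool), truth.astype(bool)
            tp = np.logical_and(pred, truth).sum()
            tn = np.logical_and(~pred, ~truth).sum()
            fp = np.logical_and(pred, ~truth).sum()
            fn = np.logical_and(~pred, truth).sum()
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "jaccard": tp / (tp + fp + fn),
                "dice": 2 * tp / (2 * tp + fp + fn),
            }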

  8. Automated condition-invariable neurite segmentation and synapse classification using textural analysis-based machine-learning algorithms.

    PubMed

    Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly A

    2013-02-15

    High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized for varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare the performance of our algorithm to manual segmentation and show that it achieves 90% accuracy, with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance levels under a wide range of image acquisition conditions, indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. The textural analysis-based machine-learning approach thus offers a high-performance, condition-invariable tool for automated neurite segmentation. PMID:23261652

  9. Automated condition-invariable neurite segmentation and synapse classification using textural analysis-based machine-learning algorithms

    PubMed Central

    Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly

    2013-01-01

    High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized for varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare the performance of our algorithm to manual segmentation and show that it achieves 90% accuracy, with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance levels under a wide range of image acquisition conditions, indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. The textural analysis-based machine-learning approach thus offers a high-performance, condition-invariable tool for automated neurite segmentation. PMID:23261652

  10. A fast underwater optical image segmentation algorithm based on a histogram weighted fuzzy c-means improved by PSO

    NASA Astrophysics Data System (ADS)

    Wang, Shilong; Xu, Yuru; Pang, Yongjie

    2011-03-01

    Underwater images have a low S/N and fuzzy edges; if they are processed directly with traditional methods, the results are not satisfactory. Although the traditional fuzzy C-means algorithm can sometimes divide an image into object and background, its time-consuming computation is often an obstacle. The mission of the vision system of an autonomous underwater vehicle (AUV) is to rapidly and accurately process information about objects in a complex environment so that the AUV can use the obtained result to execute its next task. Therefore, using the statistical characteristics of the gray-image histogram, a fast and effective fuzzy C-means underwater image segmentation algorithm is presented. With the weighted histogram modifying the fuzzy membership, the algorithm not only cuts down on a large amount of data processing and storage during computation compared with the traditional algorithm, thereby speeding up segmentation, but also improves the quality of the underwater image segmentation. Finally, particle swarm optimization (PSO) described by a sine function was introduced into the above algorithm, making up for the shortcoming that the FCM algorithm cannot reach the global optimal solution. Thus, on the one hand, it considers the global impact while achieving the local optimal solution, and on the other hand, it further greatly increases the computing speed. Experimental results indicate that the novel algorithm achieves better segmentation quality and reduces the processing time of each image, enhancing efficiency and satisfying the requirements of a highly effective, real-time AUV.
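
    A sketch of the speed-up idea described above: fuzzy c-means is run over the 256 gray levels of the histogram instead of over every pixel, with the histogram counts acting as weights. The PSO initialisation step is omitted, and the image is assumed to be 8-bit.

        import numpy as np

        def histogram_weighted_fcm(image, n_clusters=2, m=2.0, n_iter=100, tol=1e-6):
            hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
            levels = np.arange(256, dtype=float)
            weights = hist.astype(float)
            centers = np.linspace(levels.min(), levels.max(), n_clusters)
            for _ in range(n_iter):
                d = np.abs(levels[None, :] - centers[:, None]) + 1e-12    # (clusters, 256)
                u = 1.0 / (d ** (2.0 / (m - 1.0)))
                u /= u.sum(axis=0)
                um_w = (u ** m) * weights[None, :]           # weight by histogram counts
                new_centers = (um_w * levels[None, :]).sum(axis=1) / um_w.sum(axis=1)
                if np.abs(new_centers - centers).max() < tol:
                    centers = new_centers
                    break
                centers = new_centers
            labels = np.argmin(np.abs(image[..., None] - centers), axis=-1)
            return centers, labels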

  11. Evaluation of an automatic segmentation algorithm for definition of head and neck organs at risk

    PubMed Central

    2014-01-01

    Background The accurate definition of organs at risk (OARs) is required to fully exploit the benefits of intensity-modulated radiotherapy (IMRT) for head and neck cancer. However, manual delineation is time-consuming and there is considerable inter-observer variability. This is pertinent as function-sparing and adaptive IMRT have increased the number and frequency of delineation of OARs. We evaluated the accuracy and potential time-saving of Smart Probabilistic Image Contouring Engine (SPICE) automatic segmentation to define OARs for salivary-, swallowing- and cochlea-sparing IMRT. Methods Five clinicians recorded the time to delineate five organs at risk (parotid glands, submandibular glands, larynx, pharyngeal constrictor muscles and cochleae) for each of 10 CT scans. SPICE was then used to define these structures. The acceptability of SPICE contours was initially determined by visual inspection, and the total time to modify them was recorded per scan. The Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm created a reference standard from all clinician contours. Clinician, SPICE and modified contours were compared against STAPLE by the Dice similarity coefficient (DSC) and mean/maximum distance to agreement (DTA). Results For all investigated structures, SPICE contours were less accurate than manual contours. However, for parotid/submandibular glands they were acceptable (median DSC: 0.79/0.80; mean, maximum DTA: 1.5 mm, 14.8 mm/0.6 mm, 5.7 mm). Modified SPICE contours were also less accurate than manual contours. The utilisation of SPICE did not result in time savings or improved efficiency. Conclusions Improvements in accuracy of automatic segmentation for head and neck OARs would be worthwhile and are required before its routine clinical implementation. PMID:25086641

  12. Metal Artifact Reduction and Segmentation of Dental Computerized Tomography Images Using Least Square Support Vector Machine and Mean Shift Algorithm

    PubMed Central

    Mortaheb, Parinaz; Rezaeian, Mehdi

    2016-01-01

    Segmentation and three-dimensional (3D) visualization of teeth in dental computed tomography (CT) images are required by dentists both for diagnosing abnormalities and for treatments such as dental implant and orthodontic planning. On the other hand, dental CT image segmentation is a difficult process because of the specific characteristics of tooth structure. This paper presents a method for automatic segmentation of dental CT images. We present a multi-step method, which starts with a preprocessing phase to reduce the metal artifact using the least square support vector machine. An integral intensity profile is then applied to detect each tooth's candidate region. Finally, the mean shift algorithm is used to partition the region of each tooth, and all these segmented slices are then applied for 3D visualization of teeth. To examine the performance of our proposed approach, a set of reliable assessment metrics is utilized. We applied the segmentation method on 14 cone-beam CT datasets. Functionality analysis of the proposed method demonstrated precise segmentation results on different sample slices. Accuracy analysis of the proposed method indicates that we can increase the sensitivity, specificity, precision, and accuracy of the segmentation results by 83.24%, 98.35%, 72.77%, and 97.62% and decrease the error rate by 2.34%. The experimental results show that the proposed approach performs well on different types of CT images and has better performance than all existing approaches. Moreover, segmentation results are made more accurate by using the proposed metal artifact reduction algorithm in the preprocessing phase. PMID:27014607

  13. Eye Anatomy

    MedlinePlus


  14. Paraganglioma Anatomy

    MedlinePlus

    Title: Paraganglioma Anatomy. Description: Paraganglioma of the head and neck; drawing ...

  15. Tooth anatomy

    MedlinePlus

    Tooth anatomy (MedlinePlus encyclopedia article: //medlineplus.gov/ency/article/002214.htm): ... the upper jawbone is called the maxilla. ...

  16. Heart Anatomy

    MedlinePlus

    Heart anatomy illustrations and animations for grades K-6.

  17. Simulation of 3D MRI brain images for quantitative evaluation of image segmentation algorithms

    NASA Astrophysics Data System (ADS)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Obladen, Thorsten; Sabri, Osama; Buell, Udalrich

    2000-06-01

    To model the true shape of MRI brain images, automatically classified T1-weighted 3D MRI images (gray matter, white matter, cerebrospinal fluid, scalp/bone and background) are utilized for simulation of grayscale data and imaging artifacts. For each class, Gaussian distribution of grayscale values is assumed, and mean and variance are computed from grayscale images. A random generator fills up the class images with Gauss-distributed grayscale values. Since grayscale values of neighboring voxels are not correlated, a Gaussian low-pass filtering is done, preserving class region borders. To simulate anatomical variability, a Gaussian distribution in space with user-defined mean and variance can be added at any user-defined position. Several imaging artifacts can be added: (1) to simulate partial volume effects, every voxel is averaged with neighboring voxels if they have a different class label; (2) a linear or quadratic bias field can be added with user-defined strength and orientation; (3) additional background noise can be added; and (4) artifacts left over after spoiling can be simulated by adding a band with increasing/decreasing grayscale values. With this method, realistic-looking simulated MRI images can be produced to test classification and segmentation algorithms regarding accuracy and robustness even in the presence of artifacts.
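
    A condensed 2D sketch of the simulation pipeline described above: Gaussian grayscale values per tissue class, within-class smoothing that preserves region borders, a linear bias field and additive background noise. The class statistics, bias strength and noise level are illustrative placeholders.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def simulate_mri(class_map, class_stats, bias_strength=0.2, noise_sigma=3.0, seed=0):
            """class_map: 2D integer label image; class_stats: {label: (mean, std)}."""
            rng = np.random.default_rng(seed)
            sim = np.zeros(class_map.shape, dtype=float)
            for label, (mu, sigma) in class_stats.items():
                mask = class_map == label
                region = np.zeros_like(sim)
                region[mask] = rng.normal(mu, sigma, size=mask.sum())
                # normalized low-pass inside the class region only, preserving borders
                smoothed = gaussian_filter(region, sigma=1.0)
                norm = gaussian_filter(mask.astype(float), sigma=1.0)
                sim[mask] = (smoothed / np.maximum(norm, 1e-6))[mask]
            # linear bias field along one axis and additive background noise
            ramp = np.linspace(1.0 - bias_strength, 1.0 + bias_strength, class_map.shape[0])
            sim *= ramp[:, None]
            sim += rng.normal(0.0, noise_sigma, size=sim.shape)
            return sim

        # usage (illustrative class statistics): stats = {1: (110, 8), 2: (160, 10), 3: (30, 6)}
        #        image = simulate_mri(labels, stats)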

  18. Local Area Signal-to-Noise Ratio (LASNR) algorithm for Image Segmentation

    SciTech Connect

    Kegelmeyer, L; Fong, P; Glenn, S; Liebman, J

    2007-07-03

    Many automated image-based applications need to find small spots in a variably noisy image. For humans, it is relatively easy to distinguish objects from local surroundings no matter what else may be in the image. We attempt to capture this distinguishing capability computationally by calculating a measurement that estimates the strength of signal within an object versus the noise in its local neighborhood. First, we hypothesize various sizes for the object and corresponding background areas. Then, we compute the Local Area Signal to Noise Ratio (LASNR) at every pixel in the image, resulting in a new image with LASNR values for each pixel. All pixels exceeding a pre-selected LASNR value become seed pixels, or initiation points, and are grown to include the full area extent of the object. Since growing the seed is a separate operation from finding the seed, each object can be any size and shape. Thus, the overall process is a 2-stage segmentation method that first finds object seeds and then grows them to find the full extent of the object. This algorithm was designed, optimized and is in daily use for the accurate and rapid inspection of optics from a large laser system (National Ignition Facility (NIF), Lawrence Livermore National Laboratory, Livermore, CA), which includes images with background noise, ghost reflections, different illumination and other sources of variation.
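
    A sketch of the LASNR idea in the terms used above: for every pixel, the signal estimated in a small object-sized window is compared with the noise estimated in a larger surrounding neighborhood, and pixels whose ratio exceeds a chosen threshold become seeds. Window sizes and the threshold are illustrative, and the seed-growing stage is left out.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lasnr_map(image, object_size=3, background_size=15):
            image = image.astype(float)
            local_mean = uniform_filter(image, background_size)
            local_sq = uniform_filter(image ** 2, background_size)
            local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 1e-12))
            signal = uniform_filter(image, object_size) - local_mean   # object vs. surroundings
            return signal / local_std

        def seed_pixels(image, threshold=4.0):
            return lasnr_map(image) > threshold

        # usage: seeds = seed_pixels(optic_image); each connected seed is then grown to the
        # full object extent with a region-growing step of your choice.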

  19. Crossword: A Fully Automated Algorithm for the Segmentation and Quality Control of Protein Microarray Images

    PubMed Central

    2015-01-01

    Biological assays formatted as microarrays have become a critical tool for the generation of the comprehensive data sets required for systems-level understanding of biological processes. Manual annotation of data extracted from images of microarrays, however, remains a significant bottleneck, particularly for protein microarrays due to the sensitivity of this technology to weak artifact signal. In order to automate the extraction and curation of data from protein microarrays, we describe an algorithm called Crossword that logically combines information from multiple approaches to fully automate microarray segmentation. Automated artifact removal is also accomplished by segregating structured pixels from the background noise using iterative clustering and pixel connectivity. Correlation of the location of structured pixels across image channels is used to identify and remove artifact pixels from the image prior to data extraction. This component improves the accuracy of data sets while reducing the requirement for time-consuming visual inspection of the data. Crossword enables a fully automated protocol that is robust to significant spatial and intensity aberrations. Overall, the average amount of user intervention is reduced by an order of magnitude and the data quality is increased through artifact removal and reduced user variability. The increase in throughput should aid the further implementation of microarray technologies in clinical studies. PMID:24417579

  20. A region segmentation based algorithm for building a crystal position lookup table in a scintillation detector

    NASA Astrophysics Data System (ADS)

    Wang, Hai-Peng; Yun, Ming-Kai; Liu, Shuang-Quan; Fan, Xin; Cao, Xue-Xiang; Chai, Pei; Shan, Bao-Ci

    2015-03-01

    In a scintillation detector, scintillation crystals are typically made into a 2-dimensional modular array. The location of an incident gamma ray needs to be calibrated due to spatial response nonlinearity. Generally, position histograms (the characteristic flood response of scintillation detectors) are used for position calibration. In this paper, a position calibration method is proposed based on a crystal position lookup table that maps the inaccurate location calculated by Anger logic to the exact position of the crystal that was hit. Firstly, the position histogram is preprocessed, with steps such as noise reduction and image enhancement. Then the processed position histogram is segmented into disconnected regions, and crystal marking points are labeled by finding the centroids of the regions. Finally, crystal boundaries are determined and the crystal position lookup table is generated. The scheme was evaluated on the whole-body positron emission tomography (PET) scanner and the dedicated breast single photon emission computed tomography scanner developed by the Institute of High Energy Physics, Chinese Academy of Sciences. The results demonstrate that the algorithm is accurate, efficient, robust and applicable to any configuration of scintillation detector. Supported by National Natural Science Foundation of China (81101175) and XIE Jia-Lin Foundation of Institute of High Energy Physics (Y3546360U2)
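
    A sketch of the lookup-table construction described above: after the position histogram has been segmented, each disconnected region is labelled, its centroid becomes a crystal marking point, and every histogram bin is mapped to the nearest centroid, i.e. to a crystal index. The preprocessing and boundary-refinement steps are omitted.

        import numpy as np
        from scipy import ndimage

        def build_crystal_lut(segmented_histogram):
            """segmented_histogram: binary image of the segmented flood histogram."""
            labels, n_regions = ndimage.label(segmented_histogram)
            centroids = np.array(ndimage.center_of_mass(segmented_histogram, labels,
                                                        index=range(1, n_regions + 1)))
            rows, cols = np.indices(segmented_histogram.shape)
            grid = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
            # distance from every histogram bin to every crystal centroid
            d = np.linalg.norm(grid[:, None, :] - centroids[None, :, :], axis=2)
            lut = np.argmin(d, axis=1).reshape(segmented_histogram.shape)
            return lut, centroids

        # usage: lut[y, x] gives the crystal index for an event whose Anger-logic
        # position falls into histogram bin (y, x).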

  1. An improved MLC segmentation algorithm and software for step-and-shoot IMRT delivery without tongue-and-groove error

    SciTech Connect

    Luan Shuang; Wang Chao; Chen, Danny Z.; Hu, Xiaobo S.; Naqvi, Shahid A.; Wu Xingen; Yu, Cedric X.

    2006-05-15

    We present an improved multileaf collimator (MLC) segmentation algorithm, denoted by SLS_NOTG (static leaf sequencing with no tongue-and-groove error), for step-and-shoot intensity-modulated radiation therapy (IMRT) delivery. SLS_NOTG is an improvement over the MLC segmentation algorithm called SLS that was developed by Luan et al. [Med. Phys. 31(4), 695-707 (2004)], which did not consider tongue-and-groove error corrections. The aims of SLS_NOTG are (1) shortening the treatment times of IMRT plans by minimizing their numbers of segments and (2) minimizing the tongue-and-groove errors of the computed IMRT plans. The input to SLS_NOTG is intensity maps (IMs) produced by current planning systems, and its output is (modified) optimized leaf sequences without tongue-and-groove error. Like the previous SLS algorithm [Luan et al., Med. Phys. 31(4), 695-707 (2004)], SLS_NOTG is also based on graph algorithmic techniques in computer science. It models the MLC segmentation problem as a weighted minimum-cost path problem, where the weight of the path is the number of segments and the cost of the path is the amount of tongue-and-groove error. Our comparisons of SLS_NOTG with CORVUS indicated that for the same intensity maps, the numbers of segments computed by SLS_NOTG are up to 50% less than those by CORVUS 5.0 on the Elekta LINAC system. Our clinical verifications have shown that the dose distributions of the SLS_NOTG plans do not have tongue-and-groove error and match those of the corresponding CORVUS plans, thus confirming the correctness of SLS_NOTG. Compared with existing segmentation methods, SLS_NOTG also has two additional advantages: (1) SLS_NOTG can compute leaf sequences whose tongue-and-groove error is minimized subject to a constraint on the maximum allowed number of segments, which may be desirable in clinical situations where a treatment with the complete correction of tongue-and-groove error takes too

  2. Improving performance of computer-aided detection of pulmonary embolisms by incorporating a new pulmonary vascular-tree segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xingwei; Song, XiaoFei; Chapman, Brian E.; Zheng, Bin

    2012-03-01

    We developed a new pulmonary vascular tree segmentation/extraction algorithm. The purpose of this study was to assess whether adding this new algorithm to our previously developed computer-aided detection (CAD) scheme for pulmonary embolism (PE) could improve the CAD performance (in particular, reducing false-positive detection rates). A dataset containing 12 CT examinations with 384 verified pulmonary embolism regions associated with 24 three-dimensional (3-D) PE lesions was selected in this study. Our new CAD scheme includes the following image processing and feature classification steps. (1) A 3-D region growing process followed by a rolling-ball algorithm was utilized to segment lung areas. (2) The complete pulmonary vascular trees were extracted by combining two approaches: intensity-based region growing to extract the larger vessels and vessel enhancement filtering to extract the smaller vessel structures. (3) A toboggan algorithm was implemented to identify suspicious PE candidates in the segmented lung or vessel areas. (4) A three-layer artificial neural network (ANN) with the topology 27-10-1 was developed to reduce false-positive detections. (5) A k-nearest neighbor (KNN) classifier optimized by a genetic algorithm was used to compute detection scores for the PE candidates. (6) A grouping scoring method was designed to detect the final PE lesions in three dimensions. The study showed that integrating the pulmonary vascular tree extraction algorithm into the CAD scheme reduced false-positive rates by 16.2%. For case-based 3-D PE lesion detection, the integrated CAD scheme achieved 62.5% detection sensitivity with 17.1 false-positive lesions per examination.
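
    A sketch of the two-pronged vessel extraction described in step (2): a simple intensity threshold stands in here for the intensity-based region growing that captures the large contrast-filled vessels, while a Frangi vesselness filter picks up the smaller branches, and the two masks are combined. The thresholds and the restriction to a pre-segmented lung region are illustrative assumptions.

        import numpy as np
        from skimage.filters import frangi

        def pulmonary_vessel_mask(ct_lung, intensity_thresh=150.0, vesselness_thresh=0.2):
            """ct_lung: CT intensities (HU) restricted to the segmented lung region."""
            large_vessels = ct_lung > intensity_thresh
            vesselness = frangi(ct_lung, black_ridges=False)   # bright tubular structures
            vesselness = vesselness / (vesselness.max() + 1e-12)
            small_vessels = vesselness > vesselness_thresh
            return np.logical_or(large_vessels, small_vessels)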

  3. Evaluation of state-of-the-art segmentation algorithms for left ventricle infarct from late Gadolinium enhancement MR images.

    PubMed

    Karim, Rashed; Bhagirath, Pranav; Claus, Piet; Housden, R James; Chen, Zhong; Karimaghaloo, Zahra; Sohn, Hyon-Mok; Lara Rodríguez, Laura; Vera, Sergio; Albà, Xènia; Hennemuth, Anja; Peitgen, Heinz-Otto; Arbel, Tal; Gonzàlez Ballester, Miguel A; Frangi, Alejandro F; Götte, Marco; Razavi, Reza; Schaeffter, Tobias; Rhode, Kawal

    2016-05-01

    Studies have demonstrated the feasibility of late Gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging for guiding the management of patients with sequelae to myocardial infarction, such as ventricular tachycardia and heart failure. Clinical implementation of these developments necessitates a reproducible and reliable segmentation of the infarcted regions. It is challenging to compare new algorithms for infarct segmentation in the left ventricle (LV) with existing algorithms. Benchmarking datasets with evaluation strategies are much needed to facilitate comparison. This manuscript presents a benchmarking evaluation framework for future algorithms that segment infarct from LGE CMR of the LV. The image database consists of 30 LGE CMR images of both humans and pigs that were acquired from two separate imaging centres. A consensus ground truth was obtained for all data using maximum likelihood estimation. Six widely-used fixed-thresholding methods and five recently developed algorithms are tested on the benchmarking framework. Results demonstrate that the algorithms have better overlap with the consensus ground truth than most of the n-SD fixed-thresholding methods, with the exception of the Full-Width-at-Half-Maximum (FWHM) fixed-thresholding method. Some of the pitfalls of fixed thresholding methods are demonstrated in this work. The benchmarking evaluation framework, which is a contribution of this work, can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV. The datasets, ground truth and evaluation code have been made publicly available through the website: https://www.cardiacatlas.org/web/guest/challenges. PMID:26891066
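
    The n-SD and FWHM fixed-thresholding baselines referred to above follow standard definitions; a minimal sketch under those common definitions (not the benchmark's evaluation code) is:

        import numpy as np

        def n_sd_threshold(lge, myo_mask, remote_mask, n=5):
            # Voxels inside the myocardium brighter than mean(remote) + n * SD(remote).
            remote = lge[remote_mask]
            return myo_mask & (lge > remote.mean() + n * remote.std())

        def fwhm_threshold(lge, myo_mask):
            # Voxels inside the myocardium brighter than half of the maximum
            # myocardial intensity (Full-Width-at-Half-Maximum rule).
            return myo_mask & (lge > 0.5 * lge[myo_mask].max())

        # Illustrative synthetic data; the masks would normally come from LV contours.
        lge = np.random.rand(128, 128)
        myo = np.zeros_like(lge, dtype=bool); myo[40:90, 40:90] = True
        remote = np.zeros_like(myo); remote[45:55, 45:55] = True
        infarct_nsd = n_sd_threshold(lge, myo, remote, n=5)
        infarct_fwhm = fwhm_threshold(lge, myo)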

  4. A Hybrid Method for Image Segmentation Based on Artificial Fish Swarm Algorithm and Fuzzy c-Means Clustering.

    PubMed

    Ma, Li; Li, Yang; Fan, Suohai; Fan, Runzhu

    2015-01-01

    Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) clustering is one of the popular clustering algorithms for medical image segmentation. However, FCM has the problems of depending on initial clustering centers, easily falling into local optima, and sensitivity to noise. To solve these problems, this paper proposes a hybrid artificial fish swarm algorithm (HAFSA). The proposed algorithm combines the artificial fish swarm algorithm (AFSA) with FCM, exploiting the global optimization and parallel computing abilities of AFSA to find a superior result. Meanwhile, a Metropolis criterion and a noise reduction mechanism are introduced into AFSA to enhance the convergence rate and anti-noise ability. An artificial grid graph and Magnetic Resonance Imaging (MRI) data are used in the experiments, and the experimental results show that the proposed algorithm has stronger anti-noise ability and higher precision. A number of evaluation indicators also demonstrate that HAFSA outperforms FCM and suppressed FCM (SFCM). PMID:26649068
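
    For reference, the plain FCM baseline that HAFSA extends can be sketched as follows; this is the standard membership/center update, not the HAFSA implementation, and the image data are synthetic.

        import numpy as np

        def fcm(X, c=3, m=2.0, iters=50, eps=1e-5, rng=np.random.default_rng(0)):
            # Plain fuzzy c-means: X is (n_samples, n_features); returns the
            # membership matrix U and the cluster centers V.
            n = X.shape[0]
            U = rng.random((n, c))
            U /= U.sum(axis=1, keepdims=True)
            for _ in range(iters):
                Um = U ** m
                V = (Um.T @ X) / Um.sum(axis=0)[:, None]          # cluster centers
                d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
                U_new = d ** (-2 / (m - 1)) / np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True)
                if np.abs(U_new - U).max() < eps:
                    U = U_new
                    break
                U = U_new
            return U, V

        # Illustrative use on image intensities (e.g. a flattened MRI slice).
        img = np.random.rand(64, 64)
        U, centers = fcm(img.reshape(-1, 1), c=3)
        labels = U.argmax(axis=1).reshape(img.shape)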

  5. A Hybrid Method for Image Segmentation Based on Artificial Fish Swarm Algorithm and Fuzzy c-Means Clustering

    PubMed Central

    Ma, Li; Li, Yang; Fan, Suohai; Fan, Runzhu

    2015-01-01

    Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) clustering is one of the popular clustering algorithms for medical image segmentation. However, FCM has the problems of depending on initial clustering centers, easily falling into local optima, and sensitivity to noise. To solve these problems, this paper proposes a hybrid artificial fish swarm algorithm (HAFSA). The proposed algorithm combines the artificial fish swarm algorithm (AFSA) with FCM, exploiting the global optimization and parallel computing abilities of AFSA to find a superior result. Meanwhile, a Metropolis criterion and a noise reduction mechanism are introduced into AFSA to enhance the convergence rate and anti-noise ability. An artificial grid graph and Magnetic Resonance Imaging (MRI) data are used in the experiments, and the experimental results show that the proposed algorithm has stronger anti-noise ability and higher precision. A number of evaluation indicators also demonstrate that HAFSA outperforms FCM and suppressed FCM (SFCM). PMID:26649068

  6. Development, Implementation and Evaluation of Segmentation Algorithms for the Automatic Classification of Cervical Cells

    NASA Astrophysics Data System (ADS)

    Macaulay, Calum Eric

    Cancer of the uterine cervix is one of the most common cancers in women. An effective screening program for pre-cancerous and cancerous lesions can dramatically reduce the mortality rate for this disease. In British Columbia where such a screening program has been in place for some time, 2500 to 3000 slides of cervical smears need to be examined daily. More than 35 years ago, it was recognized that an automated pre-screening system could greatly assist people in this task. Such a system would need to find and recognize stained cells, segment the images of these cells into nucleus and cytoplasm, numerically describe the characteristics of the cells, and use these features to discriminate between normal and abnormal cells. The thrust of this work was (1) to research and develop new segmentation methods and compare their performance to those in the literature, (2) to determine dependence of the numerical cell descriptors on the segmentation method used, (3) to determine the dependence of cell classification accuracy on the segmentation used, and (4) to test the hypothesis that using numerical cell descriptors one can correctly classify the cells. The segmentation accuracies of 32 different segmentation procedures were examined. It was found that the best nuclear segmentation procedure was able to correctly segment 98% of the nuclei in a 1000-image and a 3680-image database. Similarly, the best cytoplasmic segmentation procedure was found to correctly segment 98.5% of the cytoplasm in the same 1000-image database. Sixty-seven different numerical cell descriptors (features) were calculated for every segmented cell. On a database of 800 classified cervical cells, these features, when used in a linear discriminant function analysis, could correctly classify 98.7% of the normal cells and 97.0% of the abnormal cells. While some features were found to vary a great deal between segmentation procedures, the classification accuracy of groups of features was found to be independent of the
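
    The classification step, a linear discriminant function analysis on numerical cell descriptors, can be illustrated generically; the feature matrix and labels below are placeholders rather than the thesis's 67 descriptors or its 800-cell database.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        # Placeholder feature matrix: rows are cells, columns are numerical descriptors
        # (area, optical density, texture measures, ...); labels 0 = normal, 1 = abnormal.
        # Real values would come from the segmented nuclei and cytoplasm.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0.0, 1.0, (400, 10)), rng.normal(1.5, 1.0, (400, 10))])
        y = np.repeat([0, 1], 400)

        lda = LinearDiscriminantAnalysis()
        scores = cross_val_score(lda, X, y, cv=5)
        print("cross-validated accuracy:", scores.mean())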

  7. Evaluation of an algorithm for semiautomated segmentation of thin tissue layers in high-frequency ultrasound images.

    PubMed

    Qiu, Qiang; Dunmore-Buyze, Joy; Boughner, Derek R; Lacefield, James C

    2006-02-01

    An algorithm consisting of speckle reduction by median filtering, contrast enhancement using top- and bottom-hat morphological filters, and segmentation with a discrete dynamic contour (DDC) model was implemented for nondestructive measurements of soft tissue layer thickness. Algorithm performance was evaluated by segmenting simulated images of three-layer phantoms and high-frequency (40 MHz) ultrasound images of porcine aortic valve cusps in vitro. The simulations demonstrated the necessity of the median and morphological filtering steps and enabled testing of user-specified parameters of the morphological filters and DDC model. In the experiments, six cusps were imaged in coronary perfusion solution (CPS) then in distilled water to test the algorithm's sensitivity to changes in the dimensions of thin tissue layers. Significant increases in the thickness of the fibrosa, spongiosa, and ventricularis layers, by 53.5% (p < 0.001), 88.5% (p < 0.001), and 35.1% (p = 0.033), respectively, were observed when the specimens were submerged in water. The intraobserver coefficient of variation of repeated thickness estimates ranged from 0.044 for the fibrosa in water to 0.164 for the spongiosa in CPS. Segmentation accuracy and variability depended on the thickness and contrast of the layers, but the modest variability provides confidence in the thickness measurements. PMID:16529107
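
    The preprocessing chain (median filtering for speckle reduction, then top- and bottom-hat morphological filtering for contrast enhancement) can be sketched with standard n-dimensional morphology; the filter sizes here are illustrative, and the DDC segmentation step is omitted.

        import numpy as np
        from scipy import ndimage as ndi

        def enhance_layers(image, median_size=5, hat_size=15):
            # Speckle reduction by median filtering, then contrast enhancement by
            # adding bright thin structures (top-hat) and subtracting dark ones (bottom-hat).
            smoothed = ndi.median_filter(image, size=median_size)
            top = ndi.white_tophat(smoothed, size=hat_size)
            bottom = ndi.black_tophat(smoothed, size=hat_size)
            return smoothed + top - bottom

        # Illustrative use on a synthetic B-mode-like image.
        img = np.random.rand(256, 256)
        enhanced = enhance_layers(img)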

  8. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and it is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop algorithms with a single dataset, which could bias the development with its particularities. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, going from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. Datasets provide various space configurations and present numerous different occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.

  9. Obtaining Thickness Maps of Corneal Layers Using the Optimal Algorithm for Intracorneal Layer Segmentation

    PubMed Central

    Rabbani, Hossein; Kazemian Jahromi, Mahdi; Jorjandi, Sahar; Mehri Dehnavi, Alireza; Hajizadeh, Fedra; Peyman, Alireza

    2016-01-01

    Optical Coherence Tomography (OCT) is one of the most informative methodologies in ophthalmology and provides cross sectional images from anterior and posterior segments of the eye. Corneal diseases can be diagnosed by these images and corneal thickness maps can also assist in the treatment and diagnosis. The need for automatic segmentation of cross sectional images is inevitable since manual segmentation is time consuming and imprecise. In this paper, segmentation methods such as Gaussian Mixture Model (GMM), Graph Cut, and Level Set are used for automatic segmentation of three clinically important corneal layer boundaries on OCT images. Using the segmentation of the boundaries in three-dimensional corneal data, we obtained thickness maps of the layers which are created by these borders. Mean and standard deviation of the thickness values for normal subjects in epithelial, stromal, and whole cornea are calculated in central, superior, inferior, nasal, and temporal zones (centered on the center of pupil). To evaluate our approach, the automatic boundary results are compared with the boundaries segmented manually by two corneal specialists. The quantitative results show that GMM method segments the desired boundaries with the best accuracy. PMID:27247559

  10. [Segmentation of Winter Wheat Canopy Image Based on Visual Spectral and Random Forest Algorithm].

    PubMed

    Liu, Ya-dong; Cui, Ri-xian

    2015-12-01

    Digital image analysis has been widely used in non-destructive monitoring of crop growth and nitrogen nutrition status due to its simplicity and efficiency. It is necessary to segment the winter wheat plant from the soil background to assess canopy cover, the intensity levels of the visible spectrum (R, G, and B) and other color indices derived from RGB. In the present study, based on the variation in the R, G, and B components of the sRGB color space and the L*, a*, and b* components of the CIEL*a*b* color space between wheat plant and soil background, segmentation of the wheat plant from the soil background was conducted with Otsu's method applied to the a* component of the CIEL*a*b* color space, an RGB-based random forest method, and a CIEL*a*b*-based random forest method, respectively. The ability to segment the wheat plant from the soil background was evaluated using segmentation accuracy. The results showed that all three methods segmented the wheat plant from the soil background well. Otsu's method had the lowest segmentation accuracy of the three, and there was only a small difference in segmentation error between the two random forest methods. In conclusion, the random forest method can segment the wheat plant from the soil background using only the visible spectral information of the canopy image, without any color component combinations or color space transformation. PMID:26964234
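
    Otsu's method applied to the a* channel, as used above, is a standard maximum-between-class-variance threshold. A minimal sketch (with synthetic a* values rather than real canopy data) is:

        import numpy as np

        def otsu_threshold(values, nbins=256):
            # Plain Otsu threshold: pick the bin centre that maximizes the
            # between-class variance of the two resulting classes.
            hist, edges = np.histogram(values, bins=nbins)
            centers = 0.5 * (edges[:-1] + edges[1:])
            p = hist.astype(float) / hist.sum()
            w0 = np.cumsum(p)                      # class-0 probability
            w1 = 1.0 - w0                          # class-1 probability
            mu = np.cumsum(p * centers)
            mu_t = mu[-1]
            with np.errstate(divide="ignore", invalid="ignore"):
                between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
            between[~np.isfinite(between)] = 0.0
            return centers[np.argmax(between)]

        # Illustrative use: 'a_star' stands in for the a* channel of a canopy image;
        # plant pixels are typically more negative (greener) than soil pixels.
        a_star = np.concatenate([np.random.normal(-25, 5, 5000),   # plant
                                 np.random.normal(10, 5, 5000)])   # soil
        thr = otsu_threshold(a_star)
        plant_mask = a_star < thr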

  11. Integer anatomy

    SciTech Connect

    Doolittle, R.

    1994-11-15

    The title integer anatomy is intended to convey the idea of a systematic method for displaying the prime decomposition of the integers. Just as the biological study of anatomy does not teach us everything about the behavior of species, neither should we expect to learn everything about number theory from a study of its anatomy. But some number-theoretic theorems are illustrated by inspection of integer anatomy, which tends to validate the underlying structure and the form as developed and displayed in this treatise. The first statement to be made in this development is: the way the structure of the natural numbers is displayed depends upon the allowed operations.

  12. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems

    PubMed Central

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
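
    The first step of the correction, a spatial 2-D cross-correlation that reduces the offset to a pixel or two, can be sketched with an FFT-based correlation; the convention below returns the shift s such that the misaligned image is approximately the reference rolled by s.

        import numpy as np

        def integer_shift(ref, moving):
            # Estimate the integer-pixel offset s such that moving ~ np.roll(ref, s),
            # using a 2-D cross-correlation computed with FFTs.
            corr = np.fft.ifft2(np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))).real
            peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
            dims = np.array(corr.shape)
            peak[peak > dims // 2] -= dims[peak > dims // 2]   # wrap to signed offsets
            return tuple(int(p) for p in peak)

        # Illustrative use: 'moving' is 'ref' circularly shifted by (3, -2) pixels.
        ref = np.random.rand(128, 128)
        moving = np.roll(ref, shift=(3, -2), axis=(0, 1))
        print(integer_shift(ref, moving))   # expected (3, -2)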

  13. Centerline-based colon segmentation for CT colonography

    SciTech Connect

    Frimmel, Hans; Naeppi, J.; Yoshida, H.

    2005-08-15

    We have developed a fully automated algorithm for colon segmentation, centerline-based segmentation (CBS), which is faster than any of the previously presented segmentation algorithms while also achieving high sensitivity and high specificity. The algorithm first thresholds a set of unprocessed CT slices. Outer air is removed, after which a bounding box is computed. A centerline is computed for all remaining regions in the thresholded volume, disregarding segments related to extracolonic structures. Centerline segments are connected, after which the anatomy-based removal of segments representing extracolonic structures occurs. Segments related to the remaining centerline are locally region grown, and the colonic wall is found by dilation. Shape-based interpolation provides an isotropic mask. For 38 CT datasets, CBS was compared with the knowledge-guided segmentation (KGS) algorithm for sensitivity and specificity. With use of a 1.5 GHz AMD Athlon-based PC, the average computation time for the segmentation was 14.8 s. The sensitivity was, on average, 96%, and the specificity was 99%. A total of 21% of the voxels segmented by KGS, of which 96% represented extracolonic structures and 4% represented the colon, were removed.

  14. The life-cycle of upper-tropospheric jet streams identified with a novel data segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Limbach, S.; Schömer, E.; Wernli, H.

    2010-09-01

    Jet streams are prominent features of the upper-tropospheric atmospheric flow. Through the thermal wind relationship these regions with intense horizontal wind speed (typically larger than 30 m/s) are associated with pronounced baroclinicity, i.e., with regions where extratropical cyclones develop due to baroclinic instability processes. Individual jet streams are non-stationary elongated features that can extend over more than 2000 km in the along-flow and 200-500 km in the across-flow direction, respectively. Their lifetime can vary between a few days and several weeks. In recent years, feature-based algorithms have been developed that allow compiling synoptic climatologies and typologies of upper-tropospheric jet streams based upon objective selection criteria and climatological reanalysis datasets. In this study a novel algorithm to efficiently identify jet streams using an extended region-growing segmentation approach is introduced. This algorithm iterates over a 4-dimensional field of horizontal wind speed from ECMWF analyses and decides at each grid point whether all prerequisites for a jet stream are met. In a single pass the algorithm keeps track of all adjacencies of these grid points and creates the 4-dimensional connected segments associated with each jet stream. In addition to the detection of these sets of connected grid points, the algorithm analyzes the development over time of the distinct 3-dimensional features each segment consists of. Important events in the development of these features, for example mergings and splittings, are detected and analyzed on a per-grid-point and per-feature basis. The output of the algorithm consists of the actual sets of grid-points augmented with information about the particular events, and of the so-called event graphs, which are an abstract representation of the distinct 3-dimensional features and events of each segment. This technique provides comprehensive information about the frequency of upper

  15. Hemodynamic Segmentation of Brain Perfusion Images with Delay and Dispersion Effects Using an Expectation-Maximization Algorithm

    PubMed Central

    Lu, Chia-Feng; Guo, Wan-Yuo; Chang, Feng-Chi; Huang, Shang-Ran; Chou, Yen-Chun; Wu, Yu-Te

    2013-01-01

    Automatic identification of various perfusion compartments from dynamic susceptibility contrast magnetic resonance brain images can assist in clinical diagnosis and treatment of cerebrovascular diseases. The principle of segmentation methods was based on the clustering of bolus transit-time profiles to discern areas of different tissues. However, the cerebrovascular diseases may result in a delayed and dispersed local perfusion and therefore alter the hemodynamic signal profiles. Assessing the accuracy of the segmentation technique under delayed/dispersed circumstance is critical to accurately evaluate the severity of the vascular disease. In this study, we improved the segmentation method of expectation-maximization algorithm by using the results of hierarchical clustering on whitened perfusion data as initial parameters for a mixture of multivariate Gaussians model. In addition, Monte Carlo simulations were conducted to evaluate the performance of proposed method under different levels of delay, dispersion, and noise of signal profiles in tissue segmentation. The proposed method was used to classify brain tissue types using perfusion data from five normal participants, a patient with unilateral stenosis of the internal carotid artery, and a patient with moyamoya disease. Our results showed that the normal, delayed or dispersed hemodynamics can be well differentiated for patients, and therefore the local arterial input function for impaired tissues can be recognized to minimize the error when estimating the cerebral blood flow. Furthermore, the tissue in the risk of infarct and the tissue with or without the complementary blood supply from the communicating arteries can be identified. PMID:23894386
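
    The initialization strategy described above (hierarchical clustering supplying initial parameters for a multivariate Gaussian mixture that is then refined by expectation-maximization) can be sketched with off-the-shelf components; the synthetic bolus-passage profiles and the class count are illustrative only, not the paper's setup.

        import numpy as np
        from sklearn.cluster import AgglomerativeClustering
        from sklearn.mixture import GaussianMixture

        def segment_profiles(profiles, n_tissues=3):
            # Hierarchical clustering provides initial class means; the Gaussian
            # mixture model then refines the tissue segmentation by EM.
            # profiles: (n_voxels, n_timepoints), e.g. whitened DSC-MRI signals.
            hier = AgglomerativeClustering(n_clusters=n_tissues).fit(profiles)
            means = np.vstack([profiles[hier.labels_ == k].mean(axis=0)
                               for k in range(n_tissues)])
            gmm = GaussianMixture(n_components=n_tissues, covariance_type="full",
                                  means_init=means, random_state=0)
            return gmm.fit_predict(profiles)

        # Illustrative synthetic bolus-passage profiles from three tissue classes.
        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 30)
        profiles = np.vstack([np.exp(-((t - c) / 0.1) ** 2)
                              + 0.05 * rng.standard_normal((200, t.size))
                              for c in (0.3, 0.4, 0.6)])
        labels = segment_profiles(profiles, n_tissues=3)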

  16. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection.

    PubMed

    Doshi, Jimit; Erus, Guray; Ou, Yangming; Resnick, Susan M; Gur, Ruben C; Gur, Raquel E; Satterthwaite, Theodore D; Furth, Susan; Davatzikos, Christos

    2016-02-15

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328
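
    One ingredient of such ensembles, locally weighted label fusion driven by intensity similarity to the target, can be sketched as below; this is a simplified per-voxel weighting, not the MUSE implementation or its boundary-modulation term, and the toy atlases are synthetic.

        import numpy as np

        def locally_weighted_fusion(target, warped_intensities, warped_labels, sigma=0.1):
            # warped_intensities, warped_labels: arrays of shape (n_atlases, ...) already
            # registered to the target; each voxel votes with a weight that decays with
            # its intensity difference from the target.
            diff = warped_intensities - target[None]
            weights = np.exp(-(diff ** 2) / (2 * sigma ** 2))
            labels = np.unique(warped_labels)
            votes = np.stack([(weights * (warped_labels == lab)).sum(axis=0)
                              for lab in labels])
            return labels[np.argmax(votes, axis=0)]

        # Illustrative 2-D example with three toy "atlases".
        rng = np.random.default_rng(0)
        target = rng.random((64, 64))
        atlas_imgs = target[None] + 0.05 * rng.standard_normal((3, 64, 64))
        atlas_labs = (atlas_imgs > 0.5).astype(int)
        fused = locally_weighted_fusion(target, atlas_imgs, atlas_labs)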

  17. A 2D driven 3D vessel segmentation algorithm for 3D digital subtraction angiography data.

    PubMed

    Spiegel, M; Redel, T; Struffert, T; Hornegger, J; Doerfler, A

    2011-10-01

    Cerebrovascular disease is among the leading causes of death in western industrial nations. 3D rotational angiography delivers indispensable information on vessel morphology and pathology. Physicians make use of this to analyze vessel geometry in detail, i.e. vessel diameters, location and size of aneurysms, to come up with a clinical decision. 3D segmentation is a crucial step in this pipeline. Although a lot of different methods are available nowadays, all of them lack a method to validate the results for the individual patient. Therefore, we propose a novel 2D digital subtraction angiography (DSA)-driven 3D vessel segmentation and validation framework. 2D DSA projections are clinically considered as gold standard when it comes to measurements of vessel diameter or the neck size of aneurysms. An ellipsoid vessel model is applied to deliver the initial 3D segmentation. To assess the accuracy of the 3D vessel segmentation, its forward projections are iteratively overlaid with the corresponding 2D DSA projections. Local vessel discrepancies are modeled by a global 2D/3D optimization function to adjust the 3D vessel segmentation toward the 2D vessel contours. Our framework has been evaluated on phantom data as well as on ten patient datasets. Three 2D DSA projections from varying viewing angles have been used for each dataset. The novel 2D driven 3D vessel segmentation approach shows superior results against state-of-the-art segmentations like region growing, i.e. an improvement of 7.2% points in precision and 5.8% points for the Dice coefficient. This method opens up future clinical applications requiring the greatest vessel accuracy, e.g. computational fluid dynamic modeling. PMID:21908904

  18. A 2D driven 3D vessel segmentation algorithm for 3D digital subtraction angiography data

    NASA Astrophysics Data System (ADS)

    Spiegel, M.; Redel, T.; Struffert, T.; Hornegger, J.; Doerfler, A.

    2011-10-01

    Cerebrovascular disease is among the leading causes of death in western industrial nations. 3D rotational angiography delivers indispensable information on vessel morphology and pathology. Physicians make use of this to analyze vessel geometry in detail, i.e. vessel diameters, location and size of aneurysms, to come up with a clinical decision. 3D segmentation is a crucial step in this pipeline. Although a lot of different methods are available nowadays, all of them lack a method to validate the results for the individual patient. Therefore, we propose a novel 2D digital subtraction angiography (DSA)-driven 3D vessel segmentation and validation framework. 2D DSA projections are clinically considered as gold standard when it comes to measurements of vessel diameter or the neck size of aneurysms. An ellipsoid vessel model is applied to deliver the initial 3D segmentation. To assess the accuracy of the 3D vessel segmentation, its forward projections are iteratively overlaid with the corresponding 2D DSA projections. Local vessel discrepancies are modeled by a global 2D/3D optimization function to adjust the 3D vessel segmentation toward the 2D vessel contours. Our framework has been evaluated on phantom data as well as on ten patient datasets. Three 2D DSA projections from varying viewing angles have been used for each dataset. The novel 2D driven 3D vessel segmentation approach shows superior results against state-of-the-art segmentations like region growing, i.e. an improvement of 7.2% points in precision and 5.8% points for the Dice coefficient. This method opens up future clinical applications requiring the greatest vessel accuracy, e.g. computational fluid dynamic modeling.

  19. Multi-color space threshold segmentation and self-learning k-NN algorithm for surge test EUT status identification

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Liu, Gui-xiong

    2016-04-01

    The targets to be identified vary between surge tests. A multi-color space threshold segmentation and self-learning k-nearest neighbor (k-NN) algorithm for equipment-under-test (EUT) status identification was proposed, because the previous feature-matching approach to status identification had to train new patterns before every test. First, the color space used for segmentation (L*a*b*, hue saturation lightness (HSL), or hue saturation value (HSV)) was selected according to the ratios of high-luminance points and white-luminance points in the image. Second, the unknown-class sample S r was classified by the k-NN algorithm with training set T z according to its feature vector, which was formed from the number of pixels, eccentricity ratio, compactness ratio, and Euler's number. Last, when the classification confidence coefficient equaled k, S r was added as a sample of the pre-training set T z ', and the training set T z was enlarged to T z+1 once the pre-training set was saturated. On nine series of illuminant, indicator light, screen, and disturbance samples (a total of 21600 frames), the algorithm achieved a 98.65% identification accuracy and enlarged the training set from T 0 to T 5 by itself using five groups of samples.
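
    The self-learning rule, classify with k-NN and add the sample to a pre-training pool only when the confidence coefficient equals k (a unanimous vote), can be sketched as follows; the feature vectors and class labels are illustrative, not the paper's data.

        import numpy as np
        from collections import Counter

        def knn_self_learning(train_X, train_y, sample, k=5):
            # Classify 'sample' with k-NN; report it as confident only when all
            # k nearest neighbours agree (confidence coefficient equals k).
            d = np.linalg.norm(train_X - sample, axis=1)
            nearest = train_y[np.argsort(d)[:k]]
            label, count = Counter(nearest).most_common(1)[0]
            return label, count == k

        # Illustrative feature vectors (pixel count, eccentricity, compactness, Euler number).
        rng = np.random.default_rng(0)
        train_X = rng.random((60, 4)); train_y = rng.integers(0, 3, 60)
        pretraining_pool = []
        sample = rng.random(4)
        label, confident = knn_self_learning(train_X, train_y, sample)
        if confident:
            pretraining_pool.append((sample, label))   # grows the pre-training set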

  20. Comparative evaluation of a novel 3D segmentation algorithm on in-treatment radiotherapy cone beam CT images

    NASA Astrophysics Data System (ADS)

    Price, Gareth; Moore, Chris

    2007-03-01

    Image segmentation and delineation is at the heart of modern radiotherapy, where the aim is to deliver as high a radiation dose as possible to a cancerous target whilst sparing the surrounding healthy tissues. This, of course, requires that a radiation oncologist dictates both where the tumour and any nearby critical organs are located. As well as in treatment planning, delineation is of vital importance in image guided radiotherapy (IGRT): organ motion studies demand that features across image databases are accurately segmented, whilst if on-line adaptive IGRT is to become a reality, speedy and correct target identification is a necessity. Recently, much work has been put into the development of automatic and semi-automatic segmentation tools, often using prior knowledge to constrain some grey level, or derivative thereof, interrogation algorithm. It is hoped that such techniques can be applied to organ at risk and tumour segmentation in radiotherapy. In this work, however, we make the assumption that grey levels do not necessarily determine a tumour's extent, especially in CT where the attenuation coefficient can often vary little between cancerous and normal tissue. In this context we present an algorithm that generates a discontinuity free delineation surface driven by user placed, evidence based support points. In regions of sparse user supplied information, prior knowledge, in the form of a statistical shape model, provides guidance. A small case study is used to illustrate the method. Multiple observers (between 3 and 7) used both the presented tool and a commercial manual contouring package to delineate the bladder on a serially imaged (10 cone beam CT volumes ) prostate patient. A previously presented shape analysis technique is used to quantitatively compare the observer variability.

  1. Does the Location of Bruch's Membrane Opening Change Over Time? Longitudinal Analysis Using San Diego Automated Layer Segmentation Algorithm (SALSA)

    PubMed Central

    Belghith, Akram; Bowd, Christopher; Medeiros, Felipe A.; Hammel, Naama; Yang, Zhiyong; Weinreb, Robert N.; Zangwill, Linda M.

    2016-01-01

    Purpose We determined if the Bruch's membrane opening (BMO) location changes over time in healthy eyes and eyes with progressing glaucoma, and validated an automated segmentation algorithm for identifying the BMO in Cirrus high-definition optical coherence tomography (HD-OCT) images. Methods We followed 95 eyes (35 progressing glaucoma and 60 healthy) for an average of 3.7 ± 1.1 years. A stable group of 50 eyes had repeated tests over a short period. In each B-scan of the stable group, the BMO points were delineated manually and automatically to assess the reproducibility of both segmentation methods. Moreover, the BMO location variation over time was assessed longitudinally on the aligned images in 3D space point by point in x, y, and z directions. Results Mean visual field mean deviation at baseline of the progressing glaucoma group was −7.7 dB. Mixed-effects models revealed small nonsignificant changes in BMO location over time for all directions in healthy eyes (the smallest P value was 0.39) and in the progressing glaucoma eyes (the smallest P value was 0.30). In the stable group, the overall intervisit–intraclass correlation coefficient (ICC) and coefficient of variation (CV) were 98.4% and 2.1%, respectively, for the manual segmentation and 98.1% and 1.9%, respectively, for the automated algorithm. Conclusions Bruch's membrane opening location was stable in normal and progressing glaucoma eyes with follow-up between 3 and 4 years, indicating that it can be used as a reference point in monitoring glaucoma progression. The BMO location estimation with Cirrus HD-OCT using manual and automated segmentation showed excellent reproducibility. PMID:26906156

  2. Spatial Patterns of Trees from Airborne LiDAR Using a Simple Tree Segmentation Algorithm

    NASA Astrophysics Data System (ADS)

    Jeronimo, S.; Kane, V. R.; McGaughey, R. J.; Franklin, J. F.

    2015-12-01

    Objectives for management of forest ecosystems on public land incorporate a focus on maintenance and restoration of ecological functions through silvicultural manipulation of forest structure. The spatial pattern of residual trees - the horizontal element of structure - is a key component of ecological restoration prescriptions. We tested the ability of a simple LiDAR individual tree segmentation method - the watershed transform - to generate spatial pattern metrics similar to those obtained by the traditional method - ground-based stem mapping - on forested plots representing the structural diversity of a large wilderness area (Yosemite NP) and a large managed area (Sierra NF) in the Sierra Nevada, Calif. Most understory and intermediate-canopy trees were not detected by the LiDAR segmentation; however, LiDAR- and field-based assessments of spatial pattern in terms of tree clump size distributions largely agreed. This suggests that (1) even when individual tree segmentation is not effective for tree density estimates, it can provide a good measurement of tree spatial pattern, and (2) a simple segmentation method is adequate to measure spatial pattern of large areas with a diversity of structural characteristics. These results lay the groundwork for a LiDAR tool to assess clumping patterns across forest landscapes in support of restoration silviculture. This tool could describe spatial patterns of functionally intact reference ecosystems, measure departure from reference targets in treatment areas, and, with successive acquisitions, monitor treatment efficacy.

  3. Image segmentation using joint spatial-intensity-shape features: application to CT lung nodule segmentation

    NASA Astrophysics Data System (ADS)

    Ye, Xujiong; Siddique, Musib; Douiri, Abdel; Beddoe, Gareth; Slabaugh, Greg

    2009-02-01

    Automatic segmentation of medical images is a challenging problem due to the complexity and variability of human anatomy, poor contrast of the object being segmented, and noise resulting from the image acquisition process. This paper presents a novel feature-guided method for the segmentation of 3D medical lesions. The proposed algorithm combines 1) a volumetric shape feature (shape index) based on high-order partial derivatives; 2) mean shift clustering in a joint spatial-intensity-shape (JSIS) feature space; and 3) a modified expectation-maximization (MEM) algorithm on the mean shift mode map to merge the neighboring regions (modes). In such a scenario, the volumetric shape feature is integrated into the process of the segmentation algorithm. The joint spatial-intensity-shape features provide rich information for the segmentation of the anatomic structures or lesions (tumors). The proposed method has been evaluated on a clinical dataset of thoracic CT scans that contains 68 nodules. A volume overlap ratio between each segmented nodule and the ground truth annotation is calculated. Using the proposed method, the mean overlap ratio over all the nodules is 0.80. On visual inspection and using a quantitative evaluation, the experimental results demonstrate the potential of the proposed method. It can properly segment a variety of nodules including juxta-vascular and juxta-pleural nodules, which are challenging for conventional methods due to the high similarity of intensities between the nodules and their adjacent tissues. This approach could also be applied to lesion segmentation in other anatomies, such as polyps in the colon.

  4. Localization and segmentation of optic disc in retinal images using circular Hough transform and grow-cut algorithm.

    PubMed

    Abdullah, Muhammad; Fraz, Muhammad Moazam; Barman, Sarah A

    2016-01-01

    Automated retinal image analysis has been emerging as an important diagnostic tool for early detection of eye-related diseases such as glaucoma and diabetic retinopathy. In this paper, we have presented a robust methodology for optic disc detection and boundary segmentation, which can be seen as the preliminary step in the development of a computer-assisted diagnostic system for glaucoma in retinal images. The proposed method is based on morphological operations, the circular Hough transform and the grow-cut algorithm. The morphological operators are used to enhance the optic disc and remove the retinal vasculature and other pathologies. The optic disc center is approximated using the circular Hough transform, and the grow-cut algorithm is employed to precisely segment the optic disc boundary. The method is quantitatively evaluated on five publicly available retinal image databases (DRIVE, DIARETDB1, CHASE_DB1, DRIONS-DB, and Messidor) and one local Shifa Hospital database. The method achieves an optic disc detection success rate of 100% for these databases with the exception of 99.09% and 99.25% for the DRIONS-DB, Messidor, and ONHSD databases, respectively. The optic disc boundary detection achieved an average spatial overlap of 78.6%, 85.12%, 83.23%, 85.1%, 87.93%, 80.1%, and 86.1%, respectively, for these databases. This unique method has shown significant improvement over existing methods in terms of detection and boundary extraction of the optic disc. PMID:27190713
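
    The circular Hough transform used to approximate the optic disc centre can be sketched with a simple voting accumulator for a single, known radius; a full implementation would search over a range of radii and operate on a detected edge map rather than the synthetic circle used here.

        import numpy as np

        def hough_circle_center(edge_mask, radius):
            # Every edge pixel votes for all centres lying 'radius' away from it;
            # the accumulator maximum approximates the circle centre.
            acc = np.zeros(edge_mask.shape, dtype=np.int32)
            ys, xs = np.nonzero(edge_mask)
            thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
            for y, x in zip(ys, xs):
                cy = np.round(y - radius * np.sin(thetas)).astype(int)
                cx = np.round(x - radius * np.cos(thetas)).astype(int)
                ok = (cy >= 0) & (cy < acc.shape[0]) & (cx >= 0) & (cx < acc.shape[1])
                np.add.at(acc, (cy[ok], cx[ok]), 1)
            return np.unravel_index(np.argmax(acc), acc.shape)

        # Illustrative use: a synthetic circular edge of radius 20 centred at (50, 60).
        mask = np.zeros((128, 128), dtype=bool)
        ang = np.linspace(0, 2 * np.pi, 200)
        mask[np.round(50 + 20 * np.sin(ang)).astype(int),
             np.round(60 + 20 * np.cos(ang)).astype(int)] = True
        print(hough_circle_center(mask, radius=20))   # approximately (50, 60)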

  5. Localization and segmentation of optic disc in retinal images using circular Hough transform and grow-cut algorithm

    PubMed Central

    Abdullah, Muhammad; Barman, Sarah A.

    2016-01-01

    Automated retinal image analysis has been emerging as an important diagnostic tool for early detection of eye-related diseases such as glaucoma and diabetic retinopathy. In this paper, we have presented a robust methodology for optic disc detection and boundary segmentation, which can be seen as the preliminary step in the development of a computer-assisted diagnostic system for glaucoma in retinal images. The proposed method is based on morphological operations, the circular Hough transform and the grow-cut algorithm. The morphological operators are used to enhance the optic disc and remove the retinal vasculature and other pathologies. The optic disc center is approximated using the circular Hough transform, and the grow-cut algorithm is employed to precisely segment the optic disc boundary. The method is quantitatively evaluated on five publicly available retinal image databases (DRIVE, DIARETDB1, CHASE_DB1, DRIONS-DB, and Messidor) and one local Shifa Hospital database. The method achieves an optic disc detection success rate of 100% for these databases with the exception of 99.09% and 99.25% for the DRIONS-DB, Messidor, and ONHSD databases, respectively. The optic disc boundary detection achieved an average spatial overlap of 78.6%, 85.12%, 83.23%, 85.1%, 87.93%, 80.1%, and 86.1%, respectively, for these databases. This unique method has shown significant improvement over existing methods in terms of detection and boundary extraction of the optic disc. PMID:27190713

  6. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets to give qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts in order to occlude and distort the required information to be extracted from an image. Robustness, the quality of an algorithm related to the amount of distortion, is often important. However, with available benchmark data sets an evaluation of illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify the illumination robustness, we provide measures for image quality, segmentation and classification success and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package, but they can also easily be replaced to emphasize other aspects. PMID:26191792

  7. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification

    PubMed Central

    Khan, Arif ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets to give qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts in order to occlude and distort the required information to be extracted from an image. Robustness, the quality of an algorithm related to the amount of distortion, is often important. However, with available benchmark data sets an evaluation of illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify the illumination robustness, we provide measures for image quality, segmentation and classification success and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package, but they can also easily be replaced to emphasize other aspects. PMID:26191792

  8. An automated image segmentation and classification algorithm for immunohistochemically stained tumor cell nuclei

    NASA Astrophysics Data System (ADS)

    Yeo, Hangu; Sheinin, Vadim; Sheinin, Yuri

    2009-02-01

    As medical image data sets are digitized and the number of data sets is increasing exponentially, there is a need for automated image processing and analysis technique. Most medical imaging methods require human visual inspection and manual measurement which are labor intensive and often produce inconsistent results. In this paper, we propose an automated image segmentation and classification method that identifies tumor cell nuclei in medical images and classifies these nuclei into two categories, stained and unstained tumor cell nuclei. The proposed method segments and labels individual tumor cell nuclei, separates nuclei clusters, and produces stained and unstained tumor cell nuclei counts. The representative fields of view have been chosen by a pathologist from a known diagnosis (clear cell renal cell carcinoma), and the automated results are compared with the hand-counted results by a pathologist.

  9. A sport scene images segmentation method based on edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Biqing

    2011-12-01

    This paper proposes a simple, fast sports scene image segmentation method. Much prior work has sought ways to reduce the effect of varying shades in smooth regions, and a novel pretreatment is proposed here to eliminate these shading differences. An internal filling mechanism is used to relabel the pixels enclosed by regions of interest as interest pixels. Tests on sports scene images have confirmed the effectiveness of the method.

  10. Applying the algorithm "assessing quality using image registration circuits" (AQUIRC) to multi-atlas segmentation

    NASA Astrophysics Data System (ADS)

    Datteri, Ryan; Asman, Andrew J.; Landman, Bennett A.; Dawant, Benoit M.

    2014-03-01

    Multi-atlas registration-based segmentation is a popular technique in the medical imaging community, used to transform anatomical and functional information from a set of atlases onto a new patient that lacks this information. The accuracy of the projected information on the target image is dependent on the quality of the registrations between the atlas images and the target image. Recently, we have developed a technique called AQUIRC that aims at estimating the error of a non-rigid registration at the local level and was shown to correlate to error in a simulated case. Herein, we extend upon this work by applying AQUIRC to atlas selection at the local level across multiple structures in cases in which non-rigid registration is difficult. AQUIRC is applied to 6 structures, the brainstem, optic chiasm, left and right optic nerves, and the left and right eyes. We compare the results of AQUIRC to that of popular techniques, including Majority Vote, STAPLE, Non-Local STAPLE, and Locally-Weighted Vote. We show that AQUIRC can be used as a method to combine multiple segmentations and increase the accuracy of the projected information on a target image, and is comparable to cutting edge methods in the multi-atlas segmentation field.

  11. A Clustering Algorithm for Ecological Stream Segment Identification from Spatially Extensive Digital Databases

    NASA Astrophysics Data System (ADS)

    Brenden, T. O.; Clark, R. D.; Wiley, M. J.; Seelbach, P. W.; Wang, L.

    2005-05-01

    Remote sensing and geographic information systems have made it possible to attribute variables for streams at increasingly detailed resolutions (e.g., individual river reaches). Nevertheless, management decisions still must be made at large scales because land and stream managers typically lack sufficient resources to manage on an individual reach basis. Managers thus require a method for identifying stream management units that are ecologically similar and that can be expected to respond similarly to management decisions. We have developed a spatially-constrained clustering algorithm that can merge neighboring river reaches with similar ecological characteristics into larger management units. The clustering algorithm is based on the Cluster Affinity Search Technique (CAST), which was developed for clustering gene expression data. Inputs to the clustering algorithm are the neighbor relationships of the reaches that comprise the digital river network, the ecological attributes of the reaches, and an affinity value, which identifies the minimum similarity for merging river reaches. In this presentation, we describe the clustering algorithm in greater detail and contrast its use with other methods (expert opinion, classification approach, regular clustering) for identifying management units using several Michigan watersheds as a backdrop.
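
    The core idea, merging neighbouring reaches whenever their ecological similarity meets the affinity threshold, can be sketched with a union-find structure; this is a simplification inspired by the description above, not the CAST-based implementation, and the reach network and similarity values are hypothetical.

        def merge_reaches(neighbors, similarity, affinity=0.8):
            # Spatially constrained merging: neighbouring reaches join one management
            # unit when their similarity meets the affinity threshold.
            # neighbors: iterable of (reach_a, reach_b) pairs from the river network;
            # similarity: dict mapping such pairs to a value in [0, 1].
            parent = {}

            def find(x):
                parent.setdefault(x, x)
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            for a, b in neighbors:                      # register every reach
                find(a), find(b)
            for a, b in neighbors:                      # merge similar neighbours
                if similarity.get((a, b), similarity.get((b, a), 0.0)) >= affinity:
                    parent[find(a)] = find(b)

            units = {}
            for reach in list(parent):
                units.setdefault(find(reach), []).append(reach)
            return list(units.values())

        # Hypothetical 5-reach network; reach ids and similarity values are illustrative.
        nbrs = [(1, 2), (2, 3), (3, 4), (4, 5)]
        sim = {(1, 2): 0.90, (2, 3): 0.85, (3, 4): 0.40, (4, 5): 0.95}
        print(merge_reaches(nbrs, sim))   # e.g. [[1, 2, 3], [4, 5]]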

  12. A new algorithm for segmentation of cardiac quiescent phases and cardiac time intervals using seismocardiography

    NASA Astrophysics Data System (ADS)

    Jafari Tadi, Mojtaba; Koivisto, Tero; Pänkäälä, Mikko; Paasio, Ari; Knuutila, Timo; Teräs, Mika; Hänninen, Pekka

    2015-03-01

    Systolic time intervals (STI) have significant diagnostic values for a clinical assessment of the left ventricle in adults. This study was conducted to explore the feasibility of using seismocardiography (SCG) to measure the systolic timings of the cardiac cycle accurately. An algorithm was developed for the automatic localization of the cardiac events (e.g. the opening and closing moments of the aortic and mitral valves). Synchronously acquired SCG and electrocardiography (ECG) enabled an accurate beat to beat estimation of the electromechanical systole (QS2), pre-ejection period (PEP) index and left ventricular ejection time (LVET) index. The performance of the algorithm was evaluated on a healthy test group with no evidence of cardiovascular disease (CVD). STI values were corrected based on Weissler's regression method in order to assess the correlation between the heart rate and STIs. One can see from the results that STIs correlate poorly with the heart rate (HR) on this test group. An algorithm was developed to visualize the quiescent phases of the cardiac cycle. A color map displaying the magnitude of SCG accelerations for multiple heartbeats visualizes the average cardiac motions and thereby helps to identify quiescent phases. High correlation between the heart rate and the duration of the cardiac quiescent phases was observed.

  13. Aerosol Plume Detection Algorithm Based on Image Segmentation of Scanning Atmospheric Lidar Data

    DOE PAGES Beta

    Weekley, R. Andrew; Goodrich, R. Kent; Cornman, Larry B.

    2016-04-06

    An image-processing algorithm has been developed to identify aerosol plumes in scanning lidar backscatter data. The images in this case consist of lidar data in a polar coordinate system. Each full lidar scan is taken as a fixed image in time, and sequences of such scans are considered functions of time. The data are analyzed in both the original backscatter polar coordinate system and a lagged coordinate system. The lagged coordinate system is a scatterplot of two datasets, such as subregions taken from the same lidar scan (spatial delay), or two sequential scans in time (time delay). The lagged coordinate system processing allows for finding and classifying clusters of data. The classification step is important in determining which clusters are valid aerosol plumes and which are from artifacts such as noise, hard targets, or background fields. These cluster classification techniques have skill since both local and global properties are used. Furthermore, more information is available since both the original data and the lag data are used. Performance statistics are presented for a limited set of data processed by the algorithm, where results from the algorithm were compared to subjective truth data identified by a human.

  14. Implementation of a cellular neural network-based segmentation algorithm on the bio-inspired vision system

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Grassi, Giuseppe; Vecchio, Pietro; Arik, Sabri; Yalcin, M. Erhan

    2011-01-01

    Based on the cellular neural network (CNN) paradigm, the bio-inspired (bi-i) cellular vision system is a computing platform consisting of state-of-the-art sensing, cellular sensing-processing and digital signal processing. This paper presents the implementation of a novel CNN-based segmentation algorithm onto the bi-i system. The experimental results, carried out for different benchmark video sequences, highlight the feasibility of the approach, which provides a frame rate of about 26 frame/sec. Comparisons with existing CNN-based methods show that, even though these methods are from two to six times faster than the proposed one, the conceived approach is more accurate and, consequently, represents a satisfying trade-off between real-time requirements and accuracy.

  15. Robust Non-Local Multi-Atlas Segmentation of the Optic Nerve.

    PubMed

    Asman, Andrew J; Delisi, Michael P; Mawn; Galloway, Robert L; Landman, Bennett A

    2013-03-13

    Labeling or segmentation of structures of interest on medical images plays an essential role in both clinical and scientific understanding of the biological etiology, progression, and recurrence of pathological disorders. Here, we focus on the optic nerve, a structure that plays a critical role in many devastating pathological conditions - including glaucoma, ischemic neuropathy, optic neuritis, and multiple sclerosis. Ideally, existing fully automated procedures would result in accurate and robust segmentation of the optic nerve anatomy. However, current segmentation procedures often require manual intervention due to anatomical and imaging variability. Herein, we propose a framework for robust and fully-automated segmentation of the optic nerve anatomy. First, we provide a robust registration procedure that results in consistent registrations, despite highly varying data in terms of voxel resolution and image field-of-view. Additionally, we demonstrate the efficacy of a recently proposed non-local label fusion algorithm that accounts for small scale errors in registration correspondence. On a dataset consisting of 31 highly varying computed tomography (CT) images of the human brain, we demonstrate that the proposed framework consistently results in accurate segmentations. In particular, we show (1) that the proposed registration procedure results in robust registrations of the optic nerve anatomy, and (2) that the non-local statistical fusion algorithm significantly outperforms several of the state-of-the-art label fusion algorithms. PMID:24478826

  16. A novel method for retinal exudate segmentation using signal separation algorithm.

    PubMed

    Imani, Elaheh; Pourreza, Hamid-Reza

    2016-09-01

    Diabetic retinopathy is one of the major causes of blindness in the world. Early diagnosis of this disease is vital to the prevention of visual loss. The analysis of retinal lesions such as exudates, microaneurysms and hemorrhages is a prerequisite to detect diabetic disorders such as diabetic retinopathy and macular edema in fundus images. This paper presents an automatic method for the detection of retinal exudates. The novelty of this method lies in the use of the Morphological Component Analysis (MCA) algorithm to separate lesions from normal retinal structures to facilitate the detection process. In the first stage, vessels are separated from lesions using the MCA algorithm with appropriate dictionaries. Then, the lesion part of the retinal image is prepared for the detection of exudate regions. The final exudate map is created using dynamic thresholding and mathematical morphologies. Performance of the proposed method is measured on the three publicly available DiaretDB, HEI-MED and e-ophtha datasets. Accordingly, AUCs of 0.961, 0.948, and 0.937 are achieved, respectively, which are greater than those of most state-of-the-art methods. PMID:27393810
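
    The final step, dynamic thresholding followed by mathematical morphology to produce the exudate map, can be sketched with a local-mean threshold and a morphological clean-up; the window size, offset, and minimum region size are illustrative assumptions, not the paper's parameters.

        import numpy as np
        from scipy import ndimage as ndi

        def dynamic_threshold_map(lesion_image, window=51, offset=0.05, min_size=20):
            # 'lesion_image' stands in for the lesion component produced by MCA.
            local_mean = ndi.uniform_filter(lesion_image.astype(float), size=window)
            candidates = lesion_image > (local_mean + offset)     # locally bright pixels
            cleaned = ndi.binary_opening(candidates, structure=np.ones((3, 3)))
            labeled, n = ndi.label(cleaned)                       # drop tiny components
            sizes = ndi.sum(cleaned, labeled, index=np.arange(1, n + 1))
            return np.isin(labeled, np.nonzero(sizes >= min_size)[0] + 1)

        # Illustrative use on a synthetic "lesion component".
        img = np.random.rand(256, 256) * 0.1
        img[100:110, 120:135] += 0.5          # a bright exudate-like blob
        exudate_map = dynamic_threshold_map(img)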

  17. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network

    PubMed Central

    2016-01-01

    Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the “ground truth.” Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively. PMID:27597960

  18. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network.

    PubMed

    Le, Trong-Ngoc; Bao, Pham The; Huynh, Hieu Trung

    2016-01-01

    Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the "ground truth." Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively. PMID:27597960

  19. Segmentation and image navigation in digitized spine x rays

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Thoma, George R.

    2000-06-01

    The National Library of Medicine has archived a collection of 17,000 digitized x-rays of the cervical and lumbar spines. Extensive health information has been collected on the subjects of these x-rays, but no information has been derived from the image contents themselves. We are researching algorithms to segment anatomy in these images and to derive from the segmented data measurements useful for indexing this image set for characteristics important to researchers in rheumatology, bone morphometry, and related areas. Active Shape Modeling is currently being investigated for use in location and boundary definition for the vertebrae in these images.
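    The point-distribution component of Active Shape Modeling mentioned above can be sketched as follows: aligned vertebra landmarks are summarized by a mean shape and a few principal modes of variation. The alignment (Procrustes) and image-search steps are omitted, and the array layout is an assumption for illustration.

```python
# Hedged sketch of a PCA point-distribution shape model (the statistical core
# of an Active Shape Model); not the specific model used in the cited work.
import numpy as np

def build_shape_model(shapes, n_modes=5):
    """shapes: (n_samples, n_landmarks*2) array of aligned landmark coordinates."""
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    # Principal modes of shape variation via SVD of the centered training shapes.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigvals = (s ** 2) / (len(shapes) - 1)
    return mean_shape, vt[:n_modes], eigvals[:n_modes]

def reconstruct(mean_shape, modes, coefficients):
    """Generate a plausible shape from mode coefficients b: x = x_mean + P b."""
    return mean_shape + coefficients @ modes
```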

  20. The Anatomy of Learning Anatomy

    ERIC Educational Resources Information Center

    Wilhelmsson, Niklas; Dahlgren, Lars Owe; Hult, Hakan; Scheja, Max; Lonka, Kirsti; Josephson, Anna

    2010-01-01

    The experience of clinical teachers, as well as research results about senior medical students' understanding of basic science concepts, has been much debated. To gain a better understanding of how this knowledge transformation is managed by medical students, this work aims at investigating their ways of setting about learning anatomy.…

  1. Validation and Development of a New Automatic Algorithm for Time-Resolved Segmentation of the Left Ventricle in Magnetic Resonance Imaging

    PubMed Central

    Tufvesson, Jane; Hedström, Erik; Steding-Ehrenborg, Katarina; Carlsson, Marcus; Arheden, Håkan; Heiberg, Einar

    2015-01-01

    Introduction. Manual delineation of the left ventricle is the clinical standard for quantification of cardiovascular magnetic resonance images, despite being time consuming and observer dependent. Previous automatic methods generally do not account for one major contributor to stroke volume, the long-axis motion. Therefore, the aim of this study was to develop and validate an automatic algorithm for time-resolved segmentation covering the whole left ventricle, including basal slices affected by long-axis motion. Methods. Ninety subjects imaged with a cine balanced steady state free precession sequence were included in the study (training set n = 40, test set n = 50). Manual delineation was the reference standard, and second observer analysis was performed in a subset (n = 25). The automatic algorithm uses a deformable model with expectation-maximization, followed by automatic removal of papillary muscles and detection of the outflow tract. Results. The mean differences between automatic segmentation and manual delineation were EDV −11 mL, ESV 1 mL, EF −3%, and LVM 4 g in the test set. Conclusions. The automatic LV segmentation algorithm reached accuracy comparable to interobserver variability for manual delineation, thereby bringing automatic segmentation one step closer to clinical routine. The algorithm and all images with manual delineations are available for benchmarking. PMID:26180818

  2. Segmentation algorithm for non-stationary compound Poisson processes. With an application to inventory time series of market members in a financial market

    NASA Astrophysics Data System (ADS)

    Tóth, B.; Lillo, F.; Farmer, J. D.

    2010-11-01

    We introduce an algorithm for the segmentation of a class of regime switching processes. The segmentation algorithm is a non-parametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length. In each patch the process is described by a stationary compound Poisson process, i.e., a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime switching models of compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange, and we observe that our method finds almost three times more patches than the original one.

  3. Active Segmentation

    PubMed Central

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour (a connected set of boundary edge fragments in the edge map of the scene) around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches. While existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach. PMID:20686671

  4. A modified Seeded Region Growing algorithm for vessel segmentation in breast MRI images for investigating the nature of potential lesions

    NASA Astrophysics Data System (ADS)

    Glotsos, D.; Vassiou, K.; Kostopoulos, S.; Lavdas, El; Kalatzis, I.; Asvestas, P.; Arvanitis, D. L.; Fezoulidis, I. V.; Cavouras, D.

    2014-03-01

    The role of Magnetic Resonance Imaging (MRI) as an alternative protocol for screening of breast cancer has been intensively investigated during the past decade. Preliminary research results have indicated that MRI scans acquired after gadolinium-agent administration may reveal the nature of breast lesions by analyzing the contrast agent's uptake time. In this study, we attempt to deduce the same conclusion, however, from a different perspective by investigating, using image processing, the vascular network of the breast at two different time intervals following the administration of gadolinium. Twenty cases obtained from a 3.0-T MRI system (SIGNA HDx; GE Healthcare) were included in the study. A new modification of the Seeded Region Growing (SRG) algorithm was used to segment vessels from the surrounding background. Delineated vessels were investigated by means of their topology, morphology and texture. Results have shown that it is possible to estimate the nature of the lesions with approximately 94.4% accuracy; thus, it may be claimed that the breast vascular network does encode useful, patterned information, which can be used for characterizing breast lesions.
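    A minimal 2D seeded region growing loop, of the kind the modified SRG algorithm above builds on, might look like the following; the intensity tolerance and 4-connectivity are illustrative assumptions rather than the authors' modification.

```python
# Hedged sketch of classic seeded region growing: grow from seed pixels while
# neighbour intensities stay within a tolerance of the evolving region mean.
import numpy as np
from collections import deque

def seeded_region_growing(image, seeds, tol=10.0):
    grown = np.zeros(image.shape, dtype=bool)
    queue = deque(seeds)
    for s in seeds:
        grown[s] = True
    region_sum, region_n = float(sum(image[s] for s in seeds)), len(seeds)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected neighbours
            nr, nc = r + dr, c + dc
            if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1] and not grown[nr, nc]:
                # Accept the neighbour if it is close to the current region mean.
                if abs(image[nr, nc] - region_sum / region_n) <= tol:
                    grown[nr, nc] = True
                    region_sum += float(image[nr, nc])
                    region_n += 1
                    queue.append((nr, nc))
    return grown
```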

  5. The effects of changing water content, relaxation times, and tissue contrast on tissue segmentation and measures of cortical anatomy in MR images.

    PubMed

    Bansal, Ravi; Hao, Xuejun; Liu, Feng; Xu, Dongrong; Liu, Jun; Peterson, Bradley S

    2013-12-01

    Water content is the dominant chemical compound in the brain and it is the primary determinant of tissue contrast in magnetic resonance (MR) images. Water content varies greatly between individuals, and it changes dramatically over time from birth through senescence of the human life span. We hypothesize that the effects that individual- and age-related variations in water content have on contrast of the brain in MR images also have important, systematic effects on in vivo, MRI-based measures of regional brain volumes. We also hypothesize that changes in water content and tissue contrast across time may account for age-related changes in regional volumes, and that differences in water content or tissue contrast across differing neuropsychiatric diagnoses may account for differences in regional volumes across diagnostic groups. We demonstrate in several complementary ways that subtle variations in water content across age and tissue compartments alter tissue contrast, and that changing tissue contrast in turn alters measures of the thickness and volume of the cortical mantle: (1) We derive analytic relations describing how age-related changes in tissue relaxation times produce age-related changes in tissue gray-scale intensity values and tissue contrast; (2) We vary tissue contrast in computer-generated images to assess its effects on tissue segmentation and volumes of gray matter and white matter; and (3) We use real-world imaging data from adults with either Schizophrenia or Bipolar Disorder and age- and sex-matched healthy adults to assess the ways in which variations in tissue contrast across diagnoses affects group differences in tissue segmentation and associated volumes. We conclude that in vivo MRI-based morphological measures of the brain, including regional volumes and measures of cortical thickness, are a product of, or at least are confounded by, differences in tissue contrast across individuals, ages, and diagnostic groups, and that differences in

  6. Normal Pancreas Anatomy

    MedlinePlus

    Title: Pancreas Anatomy. Description: Anatomy of the pancreas; drawing shows ...

  7. Normal Female Reproductive Anatomy

    MedlinePlus

    Title: Reproductive System, Female, Anatomy. Description: Anatomy of the female reproductive system; drawing shows the uterus, myometrium (muscular outer layer ...

  8. Thymus Gland Anatomy

    MedlinePlus

    Title: Thymus Gland, Adult, Anatomy. Description: Anatomy of the thymus gland; drawing shows ...

  9. Applying an Open-Source Segmentation Algorithm to Different OCT Devices in Multiple Sclerosis Patients and Healthy Controls: Implications for Clinical Trials.

    PubMed

    Bhargava, Pavan; Lang, Andrew; Al-Louzi, Omar; Carass, Aaron; Prince, Jerry; Calabresi, Peter A; Saidha, Shiv

    2015-01-01

    Background. The lack of segmentation algorithms operative across optical coherence tomography (OCT) platforms hinders utility of retinal layer measures in MS trials. Objective. To determine cross-sectional and longitudinal agreement of retinal layer thicknesses derived from an open-source, fully-automated, segmentation algorithm, applied to two spectral-domain OCT devices. Methods. Cirrus HD-OCT and Spectralis OCT macular scans from 68 MS patients and 22 healthy controls were segmented. A longitudinal cohort comprising 51 subjects (mean follow-up: 1.4 ± 0.9 years) was also examined. Bland-Altman analyses and interscanner agreement indices were utilized to assess agreement between scanners. Results. Low mean differences (-2.16 to 0.26 μm) and narrow limits of agreement (LOA) were noted for ganglion cell and inner and outer nuclear layer thicknesses cross-sectionally. Longitudinally we found low mean differences (-0.195 to 0.21 μm) for changes in all layers, with wider LOA. Comparisons of rate of change in layer thicknesses over time revealed consistent results between the platforms. Conclusions. Retinal thickness measures for the majority of the retinal layers agree well cross-sectionally and longitudinally between the two scanners at the cohort level, with greater variability at the individual level. This open-source segmentation algorithm enables combining data from different OCT platforms, broadening utilization of OCT as an outcome measure in MS trials. PMID:26090228

  10. Applying an Open-Source Segmentation Algorithm to Different OCT Devices in Multiple Sclerosis Patients and Healthy Controls: Implications for Clinical Trials

    PubMed Central

    Lang, Andrew; Al-Louzi, Omar; Carass, Aaron; Prince, Jerry; Calabresi, Peter A.; Saidha, Shiv

    2015-01-01

    Background. The lack of segmentation algorithms operative across optical coherence tomography (OCT) platforms hinders utility of retinal layer measures in MS trials. Objective. To determine cross-sectional and longitudinal agreement of retinal layer thicknesses derived from an open-source, fully-automated, segmentation algorithm, applied to two spectral-domain OCT devices. Methods. Cirrus HD-OCT and Spectralis OCT macular scans from 68 MS patients and 22 healthy controls were segmented. A longitudinal cohort comprising 51 subjects (mean follow-up: 1.4 ± 0.9 years) was also examined. Bland-Altman analyses and interscanner agreement indices were utilized to assess agreement between scanners. Results. Low mean differences (−2.16 to 0.26 μm) and narrow limits of agreement (LOA) were noted for ganglion cell and inner and outer nuclear layer thicknesses cross-sectionally. Longitudinally we found low mean differences (−0.195 to 0.21 μm) for changes in all layers, with wider LOA. Comparisons of rate of change in layer thicknesses over time revealed consistent results between the platforms. Conclusions. Retinal thickness measures for the majority of the retinal layers agree well cross-sectionally and longitudinally between the two scanners at the cohort level, with greater variability at the individual level. This open-source segmentation algorithm enables combining data from different OCT platforms, broadening utilization of OCT as an outcome measure in MS trials. PMID:26090228
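    The Bland-Altman analysis used above to compare the two scanners can be reproduced with a few lines of numpy; the thickness values in the usage example are made up for illustration.

```python
# Sketch of a Bland-Altman agreement analysis: mean difference (bias) and
# 95% limits of agreement for paired measurements from two devices.
import numpy as np

def bland_altman(device_a, device_b):
    a, b = np.asarray(device_a, float), np.asarray(device_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    # 95% limits of agreement: bias +/- 1.96 * SD of the differences.
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Example with made-up ganglion cell layer thicknesses (micrometres).
bias, (lo, hi) = bland_altman([71.2, 68.5, 74.0], [72.0, 69.1, 73.5])
```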

  11. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation

    NASA Astrophysics Data System (ADS)

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-01

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume %Vol = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME_exp = 0 ± 3 mm; ΔME_clin = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume
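    The overlap measures reported above (DSC, PPV, sensitivity) can be computed from binary masks as in the following sketch, assuming numpy boolean arrays for the automatic segmentation and the reference contour.

```python
# Sketch of standard overlap metrics between a segmentation and a reference mask.
import numpy as np

def overlap_metrics(segmentation, reference):
    seg, ref = np.asarray(segmentation, bool), np.asarray(reference, bool)
    tp = np.logical_and(seg, ref).sum()
    dsc = 2.0 * tp / (seg.sum() + ref.sum())   # Dice similarity coefficient
    ppv = tp / seg.sum()                       # positive predictive value
    sensitivity = tp / ref.sum()               # fraction of the reference recovered
    return dsc, ppv, sensitivity
```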

  12. White matter lesion segmentation using machine learning and weakly labeled MR images

    NASA Astrophysics Data System (ADS)

    Xie, Yuchen; Tao, Xiaodong

    2011-03-01

    We propose a fast, learning-based algorithm for segmenting white matter (WM) lesions in magnetic resonance (MR) brain images. The inputs to the algorithm are T1, T2, and FLAIR images. Unlike most of the previously reported learning-based algorithms, which treat the expert-labeled lesion map as ground truth in the training step, the proposed algorithm only requires the user to provide a few regions of interest (ROIs) containing lesions. An unsupervised clustering algorithm is applied to segment these ROIs into areas. Based on the assumption that lesion voxels have higher intensity on the FLAIR image, areas corresponding to lesions are identified and their probability distributions in the T1, T2, and FLAIR images are computed. The lesion segmentation in 3D is done by using the probability distributions to generate a confidence map of lesion and applying a graph-based segmentation algorithm to label lesion voxels. The initial lesion label is used to further refine the probability distribution estimation for the final lesion segmentation. The advantages of the proposed algorithm are: 1. By using weak labels, we reduce the dependency of the segmentation performance on the expert discrimination of lesion voxels in the training samples; 2. The training can be done using labels generated by users with only general knowledge of brain anatomy and the image characteristics of WM lesions, instead of labels carefully produced by experienced radiologists; 3. The algorithm is fast enough to make interactive segmentation possible. We test the algorithm on nine ACCORD-MIND MRI datasets. Experimental results show that our algorithm agrees well with expert labels and outperforms a support vector machine based WM lesion segmentation algorithm.

  13. Regulatory Anatomy

    PubMed Central

    2015-01-01

    This article proposes the term “safety logics” to understand attempts within the European Union (EU) to harmonize member state legislation to ensure a safe and stable supply of human biological material for transplants and transfusions. With safety logics, I refer to assemblages of discourses, legal documents, technological devices, organizational structures, and work practices aimed at minimizing risk. I use this term to reorient the analytical attention with respect to safety regulation. Instead of evaluating whether safety is achieved, the point is to explore the types of “safety” produced through these logics as well as to consider the sometimes unintended consequences of such safety work. In fact, the EU rules have been giving rise to complaints from practitioners finding the directives problematic and inadequate. In this article, I explore the problems practitioners face and why they arise. In short, I expose the regulatory anatomy of the policy landscape. PMID:26139952

  14. Discuss on the two algorithms of line-segments and dot-array for region judgement of the sub-satellite purview

    NASA Astrophysics Data System (ADS)

    Nie, Hao; Yang, Mingming; Zhu, Yajie; Zhang, Peng

    2015-04-01

    When a satellite is flying on orbit for a special task such as solar flare observation, it must be known whether the sub-satellite purview lies in an ocean area. The relative position between the sub-satellite point and the coastline varies, so the observation condition must be judged in real time according to the current orbital elements. The problem is to determine the relative position between the rectangular purview and the multiply connected regions formed by the base coastline data. Usually the Cohen-Sutherland algorithm is adopted for this task. It divides the earth map into 9 sections using the four lines that extend the rectangle's sides, and the section containing each boundary point of the connected regions must then be determined. That method traverses all the boundary points for each judgement. In this paper, two algorithms are presented: one based on line segments and another based on a dot array. The data preprocessing and judging procedures of the two methods are described, and their characteristics are analyzed. The line-segment method treats the connected regions as a set of line segments; to solve the problem, the endpoint coordinates of the rectangular purview are compared with the line segments at the same latitude. The dot-array method translates the whole map into a binary image, which is equivalent to a dot array; the values of the pixels inside the rectangular purview are then inspected to solve the problem. Both algorithms consume fewer software resources and require far fewer comparisons because neither needs to traverse all the boundary points. The analysis indicates that the real-time performance and resource consumption of the two algorithms are similar for a simple coastline, but the dot-array method is preferable when the coastline is quite complicated.
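    A hedged sketch of the dot-array approach is shown below: the coastline regions are rasterized once into a binary land/ocean mask, and a purview is then judged by inspecting only the pixels it covers. The use of matplotlib's Path for the point-in-polygon test and the grid representation are assumptions for illustration.

```python
# Hedged sketch of the dot-array idea: rasterize coastline polygons once, then
# answer purview queries by reading pixel values instead of traversing boundaries.
import numpy as np
from matplotlib.path import Path

def rasterize_land(polygons, lon_grid, lat_grid):
    """polygons: list of (N, 2) lon/lat vertex arrays; returns a boolean land mask."""
    pts = np.column_stack([lon_grid.ravel(), lat_grid.ravel()])
    mask = np.zeros(pts.shape[0], dtype=bool)
    for poly in polygons:
        mask |= Path(poly).contains_points(pts)
    return mask.reshape(lon_grid.shape)

def purview_entirely_ocean(land_mask, lon_grid, lat_grid,
                           lon_min, lon_max, lat_min, lat_max):
    # Only the pixels inside the rectangular purview are checked at judgement time.
    inside = ((lon_grid >= lon_min) & (lon_grid <= lon_max) &
              (lat_grid >= lat_min) & (lat_grid <= lat_max))
    return not land_mask[inside].any()
```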

  15. Combining split-and-merge and multi-seed region growing algorithms for uterine fibroid segmentation in MRgFUS treatments.

    PubMed

    Rundo, Leonardo; Militello, Carmelo; Vitabile, Salvatore; Casarino, Carlo; Russo, Giorgio; Midiri, Massimo; Gilardi, Maria Carla

    2016-07-01

    Uterine fibroids are benign tumors that can affect female patients during reproductive years. Magnetic resonance-guided focused ultrasound (MRgFUS) represents a noninvasive approach that uses thermal ablation principles to treat symptomatic fibroids. During traditional treatment planning, uterus, fibroids, and surrounding organs at risk must be manually marked on MR images by an operator. After treatment, an operator must segment, again manually, treated areas to evaluate the non-perfused volume (NPV) inside the fibroids. Both pre- and post-treatment procedures are time-consuming and operator-dependent. This paper presents a novel method, based on an advanced direct region detection model, for fibroid segmentation in MR images to address MRgFUS post-treatment segmentation issues. An incremental procedure is proposed: split-and-merge algorithm results are employed as multiple seed-region selections by an adaptive region growing procedure. The proposed approach segments multiple fibroids with different pixel intensity, even in the same MR image. The method was evaluated using area-based and distance-based metrics and was compared with other similar works in the literature. Segmentation results, performed on 14 patients, demonstrated the effectiveness of the proposed approach showing a sensitivity of 84.05 %, a specificity of 92.84 %, and a speedup factor of 1.56× with respect to classic region growing implementations (average values). PMID:26530047

  16. Improving cerebellar segmentation with statistical fusion

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination and with increasingly recognized roles in cognition and planning. Recent work in multiatlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.

  17. Improving Cerebellar Segmentation with Statistical Fusion

    PubMed Central

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-01-01

    The cerebellum is a somatotopically organized central component of the central nervous system well known to be involved with motor coordination and increasingly recognized roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution. PMID:27127334

  18. Vertical profile optimization algorithm for a cruise flight segment with a required time of arrival constraint

    NASA Astrophysics Data System (ADS)

    Dancila, Radu Ioan

    This thesis presents the development of an algorithm that determines the optimal vertical navigation (VNAV) profile for an aircraft flying a cruise segment, along a given lateral navigation (LNAV) profile, with a required time of arrival (RTA) constraint. The algorithm is intended for implementation into a Flight Management System (FMS) as a new feature that gives advisory information regarding the optimal VNAV profile. The optimization objective is to minimize the total cost associated with flying the cruise segment while arriving at the end of the segment within an imposed time window. For the vertical navigation profiles yielding a time of arrival within the imposed limits, the degree of fulfillment of the RTA constraint is quantified by a cost proportional with the absolute value of the difference between the actual time of arrival and the RTA. The VNAV profiles evaluated in this thesis are characterized by identical altitudes at the beginning and at the end of the profile, they have no more than one step altitude and are flown at constant speed. The acceleration and deceleration segments are not taken into account. The altitude and speed ranges to be used for the VNAV profiles are specified as input parameters for the algorithm. The algorithm described in this thesis is developed in MATLAB. At each altitude, in the range of altitudes considered for the VNAV profiles, a binary search is performed in order to identify the speed interval that yields a time of arrival compatible with the RTA constraint and the profile that produces a minimum total cost is retained. The performance parameters that determine the total cost for flying a particular VNAV profile, the fuel burn and the flight time, are calculated based on the aircraft's specific performance data and configuration, climb/descent profile, the altitude at the beginning of the VNAV profile, the VNAV and LNAV profiles and the atmospheric conditions. These calculations were validated using data generated by a
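    A simplified version of the binary search over speed described above might look like the sketch below; flight_time stands in for the performance-model computation and is a hypothetical callable, and the monotonicity assumption is stated in the docstring. This is a sketch of the idea, not the thesis implementation.

```python
# Hedged sketch: binary search over cruise speed, at a fixed altitude, to meet a
# required time of arrival (RTA). `flight_time(speed, altitude)` is hypothetical.
def speed_for_rta(flight_time, altitude, rta_seconds, v_min, v_max, tol=0.1):
    """Assumes flight time decreases monotonically as speed increases."""
    if flight_time(v_max, altitude) > rta_seconds or flight_time(v_min, altitude) < rta_seconds:
        return None  # the RTA cannot be met anywhere in the allowed speed range
    lo, hi = v_min, v_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flight_time(mid, altitude) > rta_seconds:
            lo = mid   # arriving later than the RTA: fly faster
        else:
            hi = mid   # arriving earlier than the RTA: fly slower
    return 0.5 * (lo + hi)
```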

  19. Automatic segmentation of the facial nerve and chorda tympani in pediatric CT scans

    PubMed Central

    Reda, Fitsum A.; Noble, Jack H.; Rivas, Alejandro; McRackan, Theodore R.; Labadie, Robert F.; Dawant, Benoit M.

    2011-01-01

    Purpose: Cochlear implant surgery is used to implant an electrode array in the cochlea to treat hearing loss. The authors recently introduced a minimally invasive image-guided technique termed percutaneous cochlear implantation. This approach achieves access to the cochlea by drilling a single linear channel from the outer skull into the cochlea via the facial recess, a region bounded by the facial nerve and chorda tympani. To exploit existing methods for computing automatically safe drilling trajectories, the facial nerve and chorda tympani need to be segmented. The goal of this work is to automatically segment the facial nerve and chorda tympani in pediatric CT scans. Methods: The authors have proposed an automatic technique to achieve the segmentation task in adult patients that relies on statistical models of the structures. These models contain intensity and shape information along the central axes of both structures. In this work, the authors attempted to use the same method to segment the structures in pediatric scans. However, the authors learned that substantial differences exist between the anatomy of children and that of adults, which led to poor segmentation results when an adult model is used to segment a pediatric volume. Therefore, the authors built a new model for pediatric cases and used it to segment pediatric scans. Once this new model was built, the authors employed the same segmentation method used for adults with algorithm parameters that were optimized for pediatric anatomy. Results: A validation experiment was conducted on 10 CT scans in which manually segmented structures were compared to automatically segmented structures. The mean, standard deviation, median, and maximum segmentation errors were 0.23, 0.17, 0.18, and 1.27 mm, respectively. Conclusions: The results indicate that accurate segmentation of the facial nerve and chorda tympani in pediatric scans is achievable, thus suggesting that safe drilling trajectories can also be computed

  20. Quick Dissection of the Segmental Bronchi

    ERIC Educational Resources Information Center

    Nakajima, Yuji

    2010-01-01

    Knowledge of the three-dimensional anatomy of the bronchopulmonary segments is essential for respiratory medicine. This report describes a quick guide for dissecting the segmental bronchi in formaldehyde-fixed human material. All segmental bronchi are easy to dissect, and thus, this exercise will help medical students to better understand the…

  1. Anatomy of the Eye

    MedlinePlus

    Title: Anatomy of the Eye. External (Extraocular) Anatomy. Extraocular Muscles: There are six muscles that are ...

  2. Automated lung segmentation of low resolution CT scans of rats

    NASA Astrophysics Data System (ADS)

    Rizzo, Benjamin M.; Haworth, Steven T.; Clough, Anne V.

    2014-03-01

    Dual modality micro-CT and SPECT imaging can play an important role in preclinical studies designed to investigate mechanisms, progression, and therapies for acute lung injury in rats. SPECT imaging involves examining the uptake of radiopharmaceuticals within the lung, with the hypothesis that uptake is sensitive to the health or disease status of the lung tissue. Methods of quantifying lung uptake and comparison of right and left lung uptake generally begin with identifying and segmenting the lung region within the 3D reconstructed SPECT volume. However, identification of the lung boundaries and the fissure between the left and right lung is not always possible from the SPECT images directly since the radiopharmaceutical may be taken up by other surrounding tissues. Thus, our SPECT protocol begins with a fast CT scan, the lung boundaries are identified from the CT volume, and the CT region is coregistered with the SPECT volume to obtain the SPECT lung region. Segmenting rat lungs within the CT volume is particularly challenging due to the relatively low resolution of the images and the rat's unique anatomy. Thus, we have developed an automated segmentation algorithm for low resolution micro-CT scans that utilizes depth maps to detect fissures on the surface of the lung volume. The fissure's surface location is in turn used to interpolate the fissure throughout the lung volume. Results indicate that the segmentation method results in left and right lung regions consistent with rat lung anatomy.

  3. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula.

    PubMed

    Mera, David; Cotos, José M; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-10-01

    Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time. PMID:22874883
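    The following sketch illustrates the general idea of a wind-adaptive dark-spot threshold on a SAR backscatter image; the moving-average background estimate, the linear wind term, and all coefficient values are assumptions, not the published algorithm.

```python
# Hedged sketch: threshold dark spots relative to a local background, tightening
# the criterion as wind speed (and hence sea clutter) increases.
import numpy as np
from scipy.ndimage import uniform_filter

def oil_spill_mask(sigma0_db, wind_speed_ms, window=51, base_offset=2.5, wind_gain=0.15):
    # Local background estimated with a moving-average filter on the dB image.
    background = uniform_filter(sigma0_db, size=window)
    # Higher wind -> require a larger drop below the background before a pixel
    # is labelled as a potential slick (coefficients are illustrative).
    offset_db = base_offset + wind_gain * wind_speed_ms
    return sigma0_db < (background - offset_db)
```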

  4. Algorithm for localized adaptive diffuse optical tomography and its application in bioluminescence tomography

    NASA Astrophysics Data System (ADS)

    Naser, Mohamed A.; Patterson, Michael S.; Wong, John W.

    2014-04-01

    A reconstruction algorithm for diffuse optical tomography based on diffusion theory and finite element method is described. The algorithm reconstructs the optical properties in a permissible domain or region-of-interest to reduce the number of unknowns. The algorithm can be used to reconstruct optical properties for a segmented object (where a CT-scan or MRI is available) or a non-segmented object. For the latter, an adaptive segmentation algorithm merges contiguous regions with similar optical properties thereby reducing the number of unknowns. In calculating the Jacobian matrix the algorithm uses an efficient direct method so the required time is comparable to that needed for a single forward calculation. The reconstructed optical properties using segmented, non-segmented, and adaptively segmented 3D mouse anatomy (MOBY) are used to perform bioluminescence tomography (BLT) for two simulated internal sources. The BLT results suggest that the accuracy of reconstruction of total source power obtained without the segmentation provided by an auxiliary imaging method such as x-ray CT is comparable to that obtained when using perfect segmentation.

  5. GPU-based relative fuzzy connectedness image segmentation

    SciTech Connect

    Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-15

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  6. GPU-based relative fuzzy connectedness image segmentation

    PubMed Central

    Zhuge, Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-01

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology. PMID:23298094

  7. Segmentation of diesel spray images with log-likelihood ratio test algorithm for non-Gaussian distributions.

    PubMed

    Pastor, José V; Arrègle, Jean; García, José M; Zapata, L Daniel

    2007-02-20

    A methodology for processing images of diesel sprays under different experimental situations is presented. The new approach has been developed for cases where the background does not follow a Gaussian distribution but a positive bias appears. In such cases, the lognormal and the gamma probability density functions have been considered for the background digital level distributions. Two different algorithms have been compared with the standard log-likelihood ratio test (LRT): a threshold defined from the cumulative probability density function of the background shows a sensitive improvement, but the best results are obtained with modified versions of the LRT algorithm adapted to non-Gaussian cases. PMID:17279134
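    A per-pixel log-likelihood ratio test with a non-Gaussian background, in the spirit of the approach above, can be sketched with scipy.stats as follows; the gamma background, Gaussian spray model, and zero decision threshold are illustrative assumptions rather than the paper's exact formulation.

```python
# Hedged sketch of a log-likelihood ratio (LRT) pixel classifier with a
# positively biased (gamma) background model; distribution choices are assumed.
import numpy as np
from scipy import stats

def llr_segmentation(image, background_samples, spray_samples, llr_threshold=0.0):
    # Fit the background digital levels with a gamma distribution (positive bias).
    a, loc, scale = stats.gamma.fit(background_samples, floc=0)
    mu, sigma = np.mean(spray_samples), np.std(spray_samples)
    log_l_bg = stats.gamma.logpdf(image, a, loc=loc, scale=scale)
    log_l_spray = stats.norm.logpdf(image, mu, sigma)
    # A pixel is labelled "spray" when the spray model explains it better.
    return (log_l_spray - log_l_bg) > llr_threshold
```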

  8. Segmentation of the ovine lung in 3D CT Images

    NASA Astrophysics Data System (ADS)

    Shi, Lijun; Hoffman, Eric A.; Reinhardt, Joseph M.

    2004-04-01

    Pulmonary CT images can provide detailed information about the regional structure and function of the respiratory system. Prior to any of these analyses, however, the lungs must be identified in the CT data sets. A popular animal model for understanding lung physiology and pathophysiology is the sheep. In this paper we describe a lung segmentation algorithm for CT images of sheep. The algorithm has two main steps. The first step is lung extraction, which identifies the lung region using a technique based on optimal thresholding and connected components analysis. The second step is lung separation, which separates the left lung from the right lung by identifying the central fissure using an anatomy-based method incorporating dynamic programming and a line filter algorithm. The lung segmentation algorithm has been validated by comparing our automatic method to manual analysis for five pulmonary CT datasets. The RMS error between the computer-defined and manually-traced boundary is 0.96 mm. The segmentation requires approximately 10 minutes for a 512x512x400 dataset on a PC workstation (2.40 GHz CPU, 2.0 GB RAM), while it takes a human observer approximately two hours to accomplish the same task.
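    The lung-extraction step (optimal thresholding plus connected components analysis) can be sketched as follows with scikit-image; Otsu's method stands in for the optimal threshold, and the minimum component size and border test are assumptions.

```python
# Hedged sketch of lung extraction: threshold the CT volume, then keep large
# low-density connected components that do not touch the volume border.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def extract_lungs(ct_volume, min_voxels=50000):
    # Air/lung voxels fall below the threshold separating lung from body tissue.
    t = threshold_otsu(ct_volume)
    candidate = ct_volume < t
    labels = label(candidate, connectivity=1)
    lung_mask = np.zeros_like(candidate)
    for region in regionprops(labels):
        # Components touching the border are outside air; small ones are noise.
        touches_border = (np.array(region.bbox[:3]) == 0).any() or \
                         (np.array(region.bbox[3:]) == np.array(ct_volume.shape)).any()
        if region.area >= min_voxels and not touches_border:
            lung_mask |= labels == region.label
    return lung_mask
```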

  9. A Segmentation Algorithm for Quantitative Analysis of Heterogeneous Tumors of the Cervix With ¹⁸F-FDG PET/CT.

    PubMed

    Mu, Wei; Chen, Zhe; Shen, Wei; Yang, Feng; Liang, Ying; Dai, Ruwei; Wu, Ning; Tian, Jie

    2015-10-01

    As positron-emission tomography (PET) images have low spatial resolution and much noise, accurate image segmentation is one of the most challenging issues in tumor quantification. Tumors of the uterine cervix present a particular challenge because of urine activity in the adjacent bladder. Here, we propose and validate an automatic segmentation method adapted to cervical tumors. Our proposed methodology combined the gradient field information of both the filtered PET image and the level set function into a level set framework by constructing a new evolution equation. Furthermore, we also constructed a new hyperimage to recognize a rough tumor region using the fuzzy c-means algorithm according to the tissue specificity as defined by both PET (uptake) and computed tomography (attenuation) to provide the initial zero level set, which could make the segmentation process fully automatic. The proposed method was verified based on simulation and clinical studies. For simulation studies, seven different phantoms, representing tumors with homogenous/heterogeneous-low/high uptake patterns and different volumes, were simulated with five different noise levels. Twenty-seven cervical cancer patients at different stages were enrolled for clinical evaluation of the method. Dice similarity coefficients (DSC) and Hausdorff distance (HD) were used to evaluate the accuracy of the segmentation method, while a Bland-Altman analysis of the mean standardized uptake value (SUVmean) and metabolic tumor volume (MTV) was used to evaluate the accuracy of the quantification. Using this method, the DSCs and HDs of the homogenous and heterogeneous phantoms under clinical noise level were 93.39 ±1.09% and 6.02 ±1.09 mm, 93.59 ±1.63% and 8.92 ±2.57 mm, respectively. The DSCs and HDs in patients measured 91.80 ±2.46% and 7.79 ±2.18 mm. Through Bland-Altman analysis, the SUVmean and the MTV using our method showed high correlation with the clinical gold standard. The results of both simulation
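    The fuzzy c-means step used above to obtain a rough tumor region from the PET/CT "hyperimage" follows the standard FCM updates, sketched below; the number of classes and the fuzziness exponent are illustrative choices, and the per-voxel feature construction is assumed.

```python
# Hedged sketch of fuzzy c-means on per-voxel (PET uptake, CT attenuation) features.
import numpy as np

def fuzzy_c_means(features, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """features: (n_voxels, n_channels) array; returns memberships U and centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((features.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ features) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / np.sum((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1)), axis=2)
    return u, centers
```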

  10. Intensity-Based Skeletonization of CryoEM Gray-Scale Images Using a True Segmentation-Free Algorithm

    PubMed Central

    Nasr, Kamal Al; Liu, Chunmei; Rwebangira, Mugizi; Burge, Legand; He, Jing

    2014-01-01

    Cryo-electron microscopy is an experimental technique that is able to produce 3D gray-scale images of protein molecules. In contrast to other experimental techniques, cryo-electron microscopy is capable of visualizing large molecular complexes such as viruses and ribosomes. At medium resolution, the positions of the atoms are not visible and the process cannot proceed. The medium-resolution images produced by cryo-electron microscopy are used to derive the atomic structure of the proteins in de novo modeling. The skeletons of the 3D gray-scale images are used to interpret important information that is helpful in de novo modeling. Unfortunately, not all features of the image can be captured using a single segmentation. In this paper, we present a segmentation-free approach to extract the gray-scale curve-like skeletons. The approach relies on a novel representation of the 3D image, where the image is modeled as a graph and a set of volume trees. A test containing 36 synthesized maps and one authentic map shows that our approach can improve the performance of the two tested tools used in de novo modeling. The improvements were 62 and 13 percent for Gorgon and DP-TOSS, respectively. PMID:24384713

  11. Quantification of Right and Left Ventricular Function in Cardiac MR Imaging: Comparison of Semiautomatic and Manual Segmentation Algorithms

    PubMed Central

    Souto, Miguel; Masip, Lambert Raul; Couto, Miguel; Suárez-Cuenca, Jorge Juan; Martínez, Amparo; Tahoces, Pablo G.; Carreira, Jose Martin; Croisille, Pierre

    2013-01-01

    The purpose of this study was to evaluate the performance of a semiautomatic segmentation method for the anatomical and functional assessment of both ventricles from cardiac cine magnetic resonance (MR) examinations, reducing user interaction to a “mouse-click”. Fifty-two patients with cardiovascular diseases were examined using a 1.5-T MR imaging unit. Several parameters of both ventricles, such as end-diastolic volume (EDV), end-systolic volume (ESV) and ejection fraction (EF), were quantified by an experienced operator using the conventional method based on manually-defined contours, as the standard of reference; and a novel semiautomatic segmentation method based on edge detection, iterative thresholding and region growing techniques, for evaluation purposes. No statistically significant differences were found between the two measurement values obtained for each parameter (p > 0.05). Correlation to estimate right ventricular function was good (r > 0.8) and turned out to be excellent (r > 0.9) for the left ventricle (LV). Bland-Altman plots revealed acceptable limits of agreement between the two methods (95%). Our study findings indicate that the proposed technique allows a fast and accurate assessment of both ventricles. However, further improvements are needed to equal results achieved for the right ventricle (RV) using the conventional methodology. PMID:26835680

  12. Segmentation of the whole breast from low-dose chest CT images

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Salvatore, Mary; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.

    2015-03-01

    The segmentation of whole breast serves as the first step towards automated breast lesion detection. It is also necessary for automatically assessing the breast density, which is considered to be an important risk factor for breast cancer. In this paper we present a fully automated algorithm to segment the whole breast in low-dose chest CT images (LDCT), which has been recommended as an annual lung cancer screening test. The automated whole breast segmentation and potential breast density readings as well as lesion detection in LDCT will provide useful information for women who have received LDCT screening, especially the ones who have not undergone mammographic screening, by providing them additional risk indicators for breast cancer with no additional radiation exposure. The two main challenges to be addressed are significant range of variations in terms of the shape and location of the breast in LDCT and the separation of pectoral muscles from the glandular tissues. The presented algorithm achieves robust whole breast segmentation using an anatomy directed rule-based method. The evaluation is performed on 20 LDCT scans by comparing the segmentation with ground truth manually annotated by a radiologist on one axial slice and two sagittal slices for each scan. The resulting average Dice coefficient is 0.880 with a standard deviation of 0.058, demonstrating that the automated segmentation algorithm achieves results consistent with manual annotations of a radiologist.

  13. Computerized segmentation algorithm with personalized atlases of murine MRIs in a SV40 large T-antigen mouse mammary cancer model

    NASA Astrophysics Data System (ADS)

    Sibley, Adam R.; Markiewicz, Erica; Mustafi, Devkumar; Fan, Xiaobing; Conzen, Suzanne; Karczmar, Greg; Giger, Maryellen L.

    2016-03-01

    Quantities of MRI data, much larger than can be objectively and efficiently analyzed manually, are routinely generated in preclinical research. We aim to develop an automated image segmentation and registration pipeline to aid in analysis of image data from our high-throughput 9.4 Tesla small animal MRI imaging center. T2-weighted, fat-suppressed MRIs were acquired over 4 life-cycle time-points [up to 12 to 18 weeks] of twelve C3(1) SV40 Large T-antigen mice for a total of 46 T2-weighted MRI volumes; each with a matrix size of 192 x 256, 62 slices, in-plane resolution 0.1 mm, and slice thickness 0.5 mm. These image sets were acquired with the goal of tracking and quantifying progression of mammary intraepithelial neoplasia (MIN) to invasive cancer in mice, believed to be similar to ductal carcinoma in situ (DCIS) in humans. Our segmentation algorithm takes 2D seed-points drawn by the user at the center of the 4 co-registered volumes associated with each mouse. The level set then evolves in 3D from these 2D seeds. The contour evolution incorporates texture information, edge information, and a statistical shape model in a two-step process. Volumetric DICE coefficients comparing the automatic with manual segmentations were computed and ranged between 0.58 and 0.75 for averages over the 4 life-cycle time points of the mice. Incorporation of these personalized atlases with intra- and inter-mouse registration is expected to enable local and global tracking of the morphological and textural changes in the mammary tissue and associated lesions of these mice.

  14. Optimization of automated segmentation of monkeypox virus-induced lung lesions from normal lung CT images using hard C-means algorithm

    NASA Astrophysics Data System (ADS)

    Castro, Marcelo A.; Thomasson, David; Avila, Nilo A.; Hufton, Jennifer; Senseney, Justin; Johnson, Reed F.; Dyall, Julie

    2013-03-01

    Monkeypox virus is an emerging zoonotic pathogen that results in up to 10% mortality in humans. Knowledge of clinical manifestations and temporal progression of monkeypox disease is limited to data collected from rare outbreaks in remote regions of Central and West Africa. Clinical observations show that monkeypox infection resembles variola infection. Given the limited capability to study monkeypox disease in humans, characterization of the disease in animal models is required. A previous work focused on the identification of inflammatory patterns using the PET/CT image modality in two non-human primates previously inoculated with the virus. In this work we extended techniques used in computer-aided detection of lung tumors to identify inflammatory lesions from monkeypox virus infection and their progression using CT images. Accurate estimation of partial volumes of lung lesions via segmentation is difficult because of poor discrimination between blood vessels, diseased regions, and outer structures. We used the hard C-means algorithm in conjunction with landmark based registration to estimate the extent of monkeypox virus induced disease before inoculation and after disease progression. Automated estimation is in close agreement with manual segmentation.
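    Hard C-means is the classic k-means algorithm, applied here to voxel intensities; a minimal one-dimensional sketch is given below, with the number of clusters and the iteration limit as assumptions.

```python
# Hedged sketch of hard C-means (k-means) clustering of 1D voxel intensities.
import numpy as np

def hard_c_means(values, n_clusters=3, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=n_clusters, replace=False).astype(float)
    for _ in range(n_iter):
        # Hard assignment: each voxel belongs to its nearest cluster centre.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        new_centers = np.array([values[labels == k].mean() if np.any(labels == k)
                                else centers[k] for k in range(n_clusters)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```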

  15. Clinical anatomy of the subserous layer: An amalgamation of gross and clinical anatomy.

    PubMed

    Yabuki, Yoshihiko

    2016-05-01

    The 1998 edition of Terminologia Anatomica introduced some currently used clinical anatomical terms for the pelvic connective tissue or subserous layer. These innovations persuaded the present author to consider a format in which the clinical anatomical terms could be reconciled with those of gross anatomy and incorporated into a single anatomical glossary without contradiction or ambiguity. Specific studies on the subserous layer were undertaken on 79 Japanese women who had undergone surgery for uterine cervical cancer, and on 26 female cadavers that were dissected, 17 being formalin-fixed and 9 fresh. The results were as follows: (a) the subserous layer could be segmentalized by surgical dissection in the perpendicular, horizontal and sagittal planes; (b) the segmentalized subserous layer corresponded to 12 cubes, or ligaments, of minimal dimension that enabled the pelvic organs to be extirpated; (c) each ligament had a three-dimensional (3D) structure comprising craniocaudal, mediolateral, and dorsoventral directions vis-à-vis the pelvic axis; (d) these 3D-structured ligaments were encoded morphologically in order of decreasing length; and (e) using these codes, all the surgical procedures for 19th century to present-day radical hysterectomy could be expressed symbolically. The establishment of clinical anatomical terms, represented symbolically through coding as demonstrated in this article, could provide common ground for amalgamating clinical anatomy with gross anatomy. Consequently, terms in clinical anatomy and gross anatomy could be reconciled and compiled into a single anatomical glossary. Clin. Anat. 29:508-515, 2016. © 2015 Wiley Periodicals, Inc. PMID:26621479

  16. Multi-Atlas Segmentation for Abdominal Organs with Gaussian Mixture Models

    PubMed Central

    Burke, Ryan P.; Xu, Zhoubing; Lee, Christopher P.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Abramson, Richard G.; Landman, Bennett A.

    2015-01-01

    Abdominal organ segmentation with clinically acquired computed tomography (CT) is drawing increasing interest in the medical imaging community. Gaussian mixture models (GMM) have been extensively used throughout medical segmentation, most notably in the brain for cerebrospinal fluid/gray matter/white matter differentiation. Because abdominal CT images exhibit strong localized intensity characteristics, GMM have recently been incorporated in multi-stage abdominal segmentation algorithms. In the context of variable abdominal anatomy and rich algorithms, it is difficult to assess the marginal contribution of GMM. Herein, we characterize the efficacy of an a posteriori framework that integrates GMM of organ-wise intensity likelihood with spatial priors from multiple target-specific registered labels. In our study, we first manually labeled 100 CT images. Then, we assigned 40 images to use as training data for constructing target-specific spatial priors and intensity likelihoods. The remaining 60 images were evaluated as test targets for segmenting 12 abdominal organs. The overlap between the true and the automatic segmentations was measured by Dice similarity coefficient (DSC). A median improvement of 145% was achieved by integrating the GMM intensity likelihood against the specific spatial prior. The proposed framework opens the opportunities for abdominal organ segmentation by efficiently using both the spatial and appearance information from the atlases, and creates a benchmark for large-scale automatic abdominal segmentation. PMID:25914508
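    The a posteriori combination of a GMM intensity likelihood with a registered spatial prior can be sketched as below using scikit-learn; the three-component mixture, the flat background alternative, and the normalization are illustrative assumptions rather than the authors' exact framework.

```python
# Hedged sketch: per-voxel posterior from a GMM intensity likelihood multiplied
# by a registered spatial prior. Shapes and the background model are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def organ_posterior(ct_intensities, organ_training_intensities, spatial_prior):
    """ct_intensities, spatial_prior: flattened per-voxel arrays of equal length."""
    gmm = GaussianMixture(n_components=3).fit(organ_training_intensities.reshape(-1, 1))
    # Likelihood of each voxel intensity under the organ's intensity model.
    likelihood = np.exp(gmm.score_samples(ct_intensities.reshape(-1, 1)))
    unnormalised = likelihood * spatial_prior
    # Illustrative flat "background" alternative with the complementary prior.
    background = likelihood.mean() * (1.0 - spatial_prior)
    return unnormalised / (unnormalised + background + 1e-12)
```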

  17. Multi-atlas segmentation for abdominal organs with Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Burke, Ryan P.; Xu, Zhoubing; Lee, Christopher P.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Abramson, Richard G.; Landman, Bennett A.

    2015-03-01

    Abdominal organ segmentation with clinically acquired computed tomography (CT) is drawing increasing interest in the medical imaging community. Gaussian mixture models (GMM) have been used extensively in medical image segmentation, most notably in the brain for cerebrospinal fluid / gray matter / white matter differentiation. Because abdominal CT exhibits strong localized intensity characteristics, GMM have recently been incorporated in multi-stage abdominal segmentation algorithms. In the context of variable abdominal anatomy and rich algorithms, it is difficult to assess the marginal contribution of GMM. Herein, we characterize the efficacy of an a posteriori framework that integrates GMM of organ-wise intensity likelihood with spatial priors from multiple target-specific registered labels. In our study, we first manually labeled 100 CT images. Then, we used 40 images as training data for constructing target-specific spatial priors and intensity likelihoods. The remaining 60 images were evaluated as test targets for segmenting 12 abdominal organs. The overlap between the true and the automatic segmentations was measured by the Dice similarity coefficient (DSC). A median improvement of 145% was achieved by integrating the GMM intensity likelihood with the target-specific spatial prior. The proposed framework opens opportunities for abdominal organ segmentation by efficiently using both the spatial and appearance information from the atlases, and creates a benchmark for large-scale automatic abdominal segmentation.

  18. Improving the robustness of interventional 4D ultrasound segmentation through the use of personalized prior shape models

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Queirós, Sandro; Morais, Pedro; Baptista, Maria J.; Monaghan, Mark; Rodrigues, Nuno F.; D'hooge, Jan; Vilaça, João. L.

    2015-03-01

    While fluoroscopy is still the most widely used imaging modality to guide cardiac interventions, the fusion of pre-operative Magnetic Resonance Imaging (MRI) with real-time intra-operative ultrasound (US) is rapidly gaining clinical acceptance as a viable, radiation-free alternative. In order to improve the detection of the left ventricular (LV) surface in 4D ultrasound, we propose to take advantage of the pre-operative MRI scans to extract a realistic geometrical model representing the patient's cardiac anatomy. This serves as prior information in the interventional setting, allowing the accuracy of the anatomy extraction step in US data to be increased. We build on a real-time 3D segmentation framework previously used to solve the LV segmentation problem in MR and US data independently, and take advantage of this common link to introduce the prior information as a soft penalty term in the ultrasound segmentation algorithm. We tested the proposed algorithm in a clinical dataset of 38 patients undergoing both MR and US scans. The introduction of the personalized shape prior improves the accuracy and robustness of the LV segmentation, as supported by the error reduction when compared to core lab manual segmentation of the same US sequences.
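
    The soft shape-prior penalty can be pictured as a quadratic term pulling the data-driven surface toward the MRI-derived prior. The toy sketch below assumes a radial surface parameterization and hypothetical names; it is not the authors' segmentation framework, only the closed-form blend that such a penalty produces.

    ```python
    import numpy as np

    def update_surface(r_data, r_prior, lam=0.3):
        """Blend a data-driven radial surface estimate with a personalized prior surface.

        Minimizing  E(r) = ||r - r_data||^2 + lam * ||r - r_prior||^2
        per vertex gives the closed-form update below; lam controls prior trust.
        """
        return (r_data + lam * r_prior) / (1.0 + lam)

    # toy usage: LV surface radii sampled along a fixed set of directions (mm)
    r_data = np.array([30.0, 31.5, 29.0, 35.0])   # noisy ultrasound-driven estimate
    r_prior = np.array([30.5, 31.0, 30.0, 30.8])  # MRI-derived personalized prior
    print(update_surface(r_data, r_prior, lam=0.5))
    ```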

  19. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    SciTech Connect

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F = 0.77; p(F ≤ f) = 0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time
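
    The processing chain above (smoothing, edge enhancement, speed image, initial contour, geodesic refinement) can be approximated with off-the-shelf tools. The sketch below is a simplified 2D stand-in using scikit-image's morphological geodesic active contour rather than the level-set formulation in the paper; the Gaussian smoothing stands in for anisotropic diffusion, and all parameter values are placeholders.

    ```python
    # Simplified stand-in for the liver pipeline: smooth, build an edge-stopping
    # speed image, then evolve a geodesic active contour from a rough seed region.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.segmentation import (morphological_geodesic_active_contour,
                                      inverse_gaussian_gradient, disk_level_set)

    def segment_liver_slice(ct_slice, seed_center, seed_radius=30):
        """ct_slice: 2D float array (one portal-venous-phase slice, HU values)."""
        smoothed = gaussian_filter(ct_slice, sigma=2.0)        # stand-in for anisotropic diffusion
        speed = inverse_gaussian_gradient(smoothed, alpha=100.0, sigma=3.0)  # low near edges
        init = disk_level_set(ct_slice.shape, center=seed_center, radius=seed_radius)
        # second positional argument is the number of contour-evolution iterations
        mask = morphological_geodesic_active_contour(speed, 300,
                                                     init_level_set=init,
                                                     smoothing=2, balloon=1)
        return mask.astype(bool)

    def volume_cc(mask3d, voxel_volume_mm3):
        """Liver volume in cubic centimetres from a 3D binary mask."""
        return mask3d.sum() * voxel_volume_mm3 / 1000.0
    ```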

  20. Anatomy Comic Strips

    ERIC Educational Resources Information Center

    Park, Jin Seo; Kim, Dae Hyun; Chung, Min Suk

    2011-01-01

    Comics are powerful visual messages that convey immediate visceral meaning in ways that conventional texts often cannot. This article's authors created comic strips to teach anatomy more interestingly and effectively. Four-frame comic strips were conceptualized from a set of anatomy-related humorous stories gathered from the authors' collective…

  1. Anatomy: Spotlight on Africa

    ERIC Educational Resources Information Center

    Kramer, Beverley; Pather, Nalini; Ihunwo, Amadi O.

    2008-01-01

    Anatomy departments across Africa were surveyed regarding the type of curriculum and method of delivery of their medical courses. While the response rate was low, African anatomy departments appear to be in line with the rest of the world in that many have introduced problem based learning, have hours that are within the range of western medical…

  2. Robust optic nerve segmentation on clinically acquired CT

    NASA Astrophysics Data System (ADS)

    Panda, Swetasudha; Asman, Andrew J.; DeLisi, Michael P.; Mawn, Louise A.; Galloway, Robert L.; Landman, Bennett A.

    2014-03-01

    The optic nerve is a sensitive central nervous system structure, which plays a critical role in many devastating pathological conditions. Several methods have been proposed in recent years to segment the optic nerve automatically, but progress toward full automation has been limited. Multi-atlas methods have been successful for brain segmentation, but their application to smaller anatomies remains relatively unexplored. Herein we evaluate a framework for robust and fully automated segmentation of the optic nerves, eye globes and muscles. We employ a robust registration procedure to obtain accurate registrations despite variable voxel resolution and image field-of-view. We demonstrate the efficacy of an optimal combination of SyN registration and a recently proposed label fusion algorithm (Non-local Spatial STAPLE) that accounts for small-scale errors in registration correspondence. On a dataset containing 30 highly varying computed tomography (CT) images of the human brain, the optimal registration and label fusion pipeline resulted in a median Dice similarity coefficient of 0.77, a symmetric mean surface distance error of 0.55 mm, and a symmetric Hausdorff distance error of 3.33 mm for the optic nerves. Simultaneously, we demonstrate the robustness of the optimal algorithm by segmenting the optic nerve structure in 316 CT scans obtained from 182 subjects from a thyroid eye disease (TED) patient population.
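
    Non-local Spatial STAPLE itself is beyond a short sketch, but the underlying multi-atlas idea, fusing registered atlas labels per voxel with weights driven by local intensity agreement, can be illustrated with a simplified locally weighted voting scheme. Array names and the Gaussian weighting below are assumptions, not the paper's method.

    ```python
    import numpy as np

    def locally_weighted_vote(target, atlas_images, atlas_labels, n_labels, sigma=30.0):
        """Fuse registered atlas segmentations by weighting each atlas vote
        with a Gaussian of its intensity difference to the target voxel.

        target        : (V,) target image intensities (flattened)
        atlas_images  : (A, V) atlas intensities already registered to the target
        atlas_labels  : (A, V) integer label maps deformed with the same transforms
        """
        votes = np.zeros((n_labels, target.shape[0]))
        for img, lab in zip(atlas_images, atlas_labels):
            w = np.exp(-((img - target) ** 2) / (2.0 * sigma ** 2))  # local similarity weight
            for k in range(n_labels):
                votes[k] += w * (lab == k)
        return votes.argmax(axis=0)                                  # fused label per voxel
    ```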

  3. The road surveying system of the federal highway research institute - a performance evaluation of road segmentation algorithms

    NASA Astrophysics Data System (ADS)

    Streiter, R.; Wanielik, G.

    2013-07-01

    The construction of highways and federal roadways is subject to many restrictions and design rules, with the focus on safety, comfort, and smooth driving. Unfortunately, planning information on roadways and their actual constitution, course, number of lanes, and lane widths is often uncertain or unavailable. Because digital map databases of roads have attracted much interest in recent years and have become a major cornerstone of innovative Advanced Driving Assistance Systems (ADASs), the demand for accurate and detailed road information has increased considerably. Within this project, a measurement system for collecting highly accurate road data was developed. This paper gives an overview of the sensor configuration within the measurement vehicle, introduces the implemented algorithms, and shows some applications implemented in the post-processing platform. The aim is to recover the original parametric description of the roadway, and the performance of the measurement system is evaluated against original road construction information.

  4. Anatomy comic strips.

    PubMed

    Park, Jin Seo; Kim, Dae Hyun; Chung, Min Suk

    2011-01-01

    Comics are powerful visual messages that convey immediate visceral meaning in ways that conventional texts often cannot. This article's authors created comic strips to teach anatomy more interestingly and effectively. Four-frame comic strips were conceptualized from a set of anatomy-related humorous stories gathered from the authors' collective imagination. The comics were drawn on paper and then recreated with digital graphics software. More than 500 comic strips have been drawn and labeled in Korean language, and some of them have been translated into English. All comic strips can be viewed on the Department of Anatomy homepage at the Ajou University School of Medicine, Suwon, Republic of Korea. The comic strips were written and drawn by experienced anatomists, and responses from viewers have generally been favorable. These anatomy comic strips, designed to help students learn the complexities of anatomy in a straightforward and humorous way, are expected to be improved further by the authors and other interested anatomists. PMID:21634024

  5. Image Information Mining Utilizing Hierarchical Segmentation

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai

    2002-01-01

    The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms for reducing visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance the VisiMine system through incorporating hierarchical segmentations from HSEG into the VisiMine system.

  6. Head segmentation in vertebrates

    PubMed Central

    Kuratani, Shigeru; Schilling, Thomas

    2008-01-01

    Classic theories of vertebrate head segmentation clearly exemplify the idealistic nature of comparative embryology prior to the 20th century. Comparative embryology aimed at recognizing the basic, primary structure that is shared by all vertebrates, either as an archetype or an ancestral developmental pattern. Modern evolutionary developmental (Evo-Devo) studies are also based on comparison, and therefore have a tendency to reduce complex embryonic anatomy into overly simplified patterns. Here again, a basic segmental plan for the head has been sought among chordates. We convened a symposium that brought together leading researchers dealing with this problem, in a number of different evolutionary and developmental contexts. Here we give an overview of the outcome and the status of the field in this modern era of Evo-Devo. We emphasize the fact that the head segmentation problem is not fully resolved, and we discuss new directions in the search for hints for a way out of this maze. PMID:20607135

  7. Skull Base Anatomy.

    PubMed

    Patel, Chirag R; Fernandez-Miranda, Juan C; Wang, Wei-Hsin; Wang, Eric W

    2016-02-01

    The anatomy of the skull base is complex with multiple neurovascular structures in a small space. Understanding all of the intricate relationships begins with understanding the anatomy of the sphenoid bone. The cavernous sinus contains the carotid artery and some of its branches; cranial nerves III, IV, VI, and V1; and transmits venous blood from multiple sources. The anterior skull base extends to the frontal sinus and is important to understand for sinus surgery and sinonasal malignancies. The clivus protects the brainstem and posterior cranial fossa. A thorough appreciation of the anatomy of these various areas allows for endoscopic endonasal approaches to the skull base. PMID:26614826

  8. Spatially adapted augmentation of age-specific atlas-based segmentation using patch-based priors

    NASA Astrophysics Data System (ADS)

    Liu, Mengyuan; Seshamani, Sharmishtaa; Harrylock, Lisa; Kitsch, Averi; Miller, Steven; Chau, Van; Poskitt, Kenneth; Rousseau, Francois; Studholme, Colin

    2014-03-01

    One of the most common approaches to MRI brain tissue segmentation is to employ an atlas prior to initialize an Expectation-Maximization (EM) image labeling scheme using a statistical model of MRI intensities. This prior is commonly derived from a set of manually segmented training data from the population of interest. However, in cases where subject anatomy varies significantly from the prior anatomical average model (for example in the case where extreme developmental abnormalities or brain injuries occur), the prior tissue map does not provide adequate information about the observed MRI intensities to ensure the EM algorithm converges to an anatomically accurate labeling of the MRI. In this paper, we present a novel approach for automatic segmentation of such cases. This approach augments the atlas-based EM segmentation by exploring methods to build a hybrid tissue segmentation scheme that seeks to learn where an atlas prior fails (due to inadequate representation of anatomical variation in the statistical atlas) and utilize an alternative prior derived from a patch driven search of the atlas data. We describe a framework for incorporating this patch-based augmentation of EM (PBAEM) into a 4D age-specific atlas-based segmentation of developing brain anatomy. The proposed approach was evaluated on a set of MRI brain scans of premature neonates with ages ranging from 27.29 to 46.43 gestational weeks (GWs). Results indicated superior performance compared to the conventional atlas-based segmentation method, providing improved segmentation accuracy for gray matter, white matter, ventricles and sulcal CSF regions.
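
    The conventional atlas-prior EM labeling that PBAEM builds on alternates between computing tissue posteriors from the atlas prior and a Gaussian intensity model, and re-estimating the Gaussian parameters from the soft assignments. A minimal sketch, assuming one Gaussian per tissue class and hypothetical array names (not the PBAEM augmentation itself):

    ```python
    import numpy as np

    def atlas_em(intensities, atlas_prior, n_iter=20):
        """EM tissue labeling with a spatial atlas prior (single Gaussian per class).

        intensities : (V,) MRI intensities
        atlas_prior : (V, K) per-voxel prior probability of each tissue class
        """
        V, K = atlas_prior.shape
        mu = np.quantile(intensities, np.linspace(0.2, 0.8, K))   # crude initial means
        var = np.full(K, intensities.var())
        for _ in range(n_iter):
            # E-step: posterior proportional to atlas prior times Gaussian likelihood
            lik = np.exp(-(intensities[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
            post = atlas_prior * lik
            post /= post.sum(axis=1, keepdims=True) + 1e-12
            # M-step: re-estimate class means and variances from soft assignments
            w = post.sum(axis=0) + 1e-12
            mu = (post * intensities[:, None]).sum(axis=0) / w
            var = (post * (intensities[:, None] - mu) ** 2).sum(axis=0) / w + 1e-6
        return post.argmax(axis=1)
    ```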

  9. A two-level approach towards semantic colon segmentation: removing extra-colonic findings.

    PubMed

    Lu, Le; Wolf, Matthias; Liang, Jianming; Dundar, Murat; Bi, Jinbo; Salganicoff, Marcos

    2009-01-01

    Computer aided detection (CAD) of colonic polyps in computed tomographic colonography has tremendously impacted colorectal cancer diagnosis using 3D medical imaging. It is a prerequisite for all CAD systems to extract the air-distended colon segments from 3D abdominal computed tomography scans. In this paper, we present a two-level statistical approach of first separating colon segments from the small intestine, stomach and other extra-colonic parts by classification on a new geometric feature set, and then evaluating the overall performance confidence using distance and geometry statistics over patients. The proposed method is fully automatic and validated using both the classification results in the first level and its numerical impact on false positive reduction of extra-colonic findings in a CAD system. It shows superior performance to state-of-the-art knowledge- or anatomy-based colon segmentation algorithms. PMID:20426210

  10. Auxiliary anatomical labels for joint segmentation and atlas registration

    NASA Astrophysics Data System (ADS)

    Gass, Tobias; Szekely, Gabor; Goksel, Orcun

    2014-03-01

    This paper studies improving joint segmentation and registration by introducing auxiliary labels for anatomy that has similar appearance to the target anatomy while not being part of that target. Such auxiliary labels help avoid false positive labelling of non-target anatomy by resolving ambiguity. A known registration of a segmented atlas can help identify where a target segmentation should lie. Conversely, segmentations of anatomy in two images can help them be better registered. Joint segmentation and registration is then a method that can leverage information from both registration and segmentation to help one another. It has received increasing attention recently in the literature. Often, merely a single organ of interest is labelled in the atlas. In the presence of other anatomical structures with similar appearance, this leads to ambiguity in intensity based segmentation; for example, when segmenting individual bones in CT images where other bones share the same intensity profile. To alleviate this problem, we introduce automatic generation of additional labels in atlas segmentations, by marking similar-appearance non-target anatomy with an auxiliary label. Information from the auxiliary-labeled atlas segmentation is then incorporated by using a novel coherence potential, which penalizes differences between the deformed atlas segmentation and the target segmentation estimate. We validated this on a joint segmentation-registration approach that iteratively alternates between registering an atlas and segmenting the target image to find a final anatomical segmentation. The results show that automatic auxiliary labelling outperforms the same approach using single-label atlases, for both mandibular bone segmentation in 3D-CT and corpus callosum segmentation in 2D-MRI.

  11. Comparison of a Gross Anatomy Laboratory to Online Anatomy Software for Teaching Anatomy

    ERIC Educational Resources Information Center

    Mathiowetz, Virgil; Yu, Chih-Huang; Quake-Rapp, Cindee

    2016-01-01

    This study was designed to assess the grades, self-perceived learning, and satisfaction between occupational therapy students who used a gross anatomy laboratory versus online anatomy software (AnatomyTV) as tools to learn anatomy at a large public university and a satellite campus in the mid-western United States. The goal was to determine if…

  12. Anatomy of the Eye

    MedlinePlus

  13. Anatomy and art.

    PubMed

    Laios, Konstantinos; Tsoukalas, Gregory; Karamanou, Marianna; Androutsos, George

    2013-01-01

    Leonardo da Vinci, Jean Falcon, Andreas Vesalius, Henry Gray, Henry Vandyke Carter and Frank Netter created some of the best atlases of anatomy. Their works constitute not only scientific medical projects but also masterpieces of art. PMID:24640589

  14. Anatomy of the Brain

    MedlinePlus

    ... our existence. It controls our personality, thoughts, memory, intelligence, speech and understanding, emotions, senses, and basic body functions, as well as how we function in our environment. The diagrams below show brain anatomy, or the various parts of the brain, ...

  15. Image segmentation and registration algorithm to collect thoracic skeleton semilandmarks for characterization of age and sex-based thoracic morphology variation.

    PubMed

    Weaver, Ashley A; Nguyen, Callistus M; Schoell, Samantha L; Maldjian, Joseph A; Stitzel, Joel D

    2015-12-01

    Thoracic anthropometry variations with age and sex have been reported and likely relate to thoracic injury risk and outcome. The objective of this study was to collect a large volume of homologous semilandmark data from the thoracic skeleton for the purpose of quantifying thoracic morphology variations for males and females of ages 0-100 years. A semi-automated image segmentation and registration algorithm was applied to collect homologous thoracic skeleton semilandmarks from 343 normal computed tomography (CT) scans. Rigid, affine, and symmetric diffeomorphic transformations were used to register semilandmarks from an atlas to homologous locations in the subject-specific coordinate system. Homologous semilandmarks were successfully collected from 92% (7,077) of the ribs and 100% (187) of the sternums included in the study. Between 2,700 and 11,000 semilandmarks were collected from each rib and sternum and over 55 million total semilandmarks were collected from all subjects. The extensive landmark data collected more fully characterizes thoracic skeleton morphology across ages and sexes. Characterization of thoracic morphology with age and sex may help explain variations in thoracic injury risk and has important implications for vulnerable populations such as pediatrics and the elderly. PMID:26496701

  16. Detection and measurement of fetal anatomies from ultrasound images using a constrained probabilistic boosting tree.

    PubMed

    Carneiro, Gustavo; Georgescu, Bogdan; Good, Sara; Comaniciu, Dorin

    2008-09-01

    We propose a novel method for the automatic detection and measurement of fetal anatomical structures in ultrasound images. This problem offers a myriad of challenges, including: difficulty of modeling the appearance variations of the visual object of interest, robustness to speckle noise and signal dropout, and large search space of the detection procedure. Previous solutions typically rely on the explicit encoding of prior knowledge and formulation of the problem as a perceptual grouping task solved through clustering or variational approaches. These methods are constrained by the validity of the underlying assumptions and are usually not sufficient to capture the complex appearances of fetal anatomies. We propose a novel system for fast automatic detection and measurement of fetal anatomies that directly exploits a large database of expert annotated fetal anatomical structures in ultrasound images. Our method learns automatically to distinguish between the appearance of the object of interest and background by training a constrained probabilistic boosting tree classifier. This system is able to produce the automatic segmentation of several fetal anatomies using the same basic detection algorithm. We show results on fully automatic measurement of biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), femur length (FL), humerus length (HL), and crown rump length (CRL). Our approach is the first in the literature to address the HL and CRL measurements. Extensive experiments (with clinical validation) show that our system is, on average, close to the accuracy of experts in terms of segmentation and obstetric measurements. Finally, this system runs in under half a second on a standard dual-core PC. PMID:18753047
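
    The detection machinery (the constrained probabilistic boosting tree) is outside the scope of a short example, but once the fetal skull has been segmented as an ellipse, BPD and HC follow directly from its axes. A hedged sketch using Ramanujan's perimeter approximation, assuming the ellipse semi-axes are given in millimetres and that BPD is taken as the minor-axis diameter:

    ```python
    import math

    def biometry_from_ellipse(a_mm, b_mm):
        """Biparietal diameter (BPD) and head circumference (HC) from a fitted skull ellipse.

        a_mm, b_mm : semi-major and semi-minor axes of the ellipse in millimetres.
        HC uses Ramanujan's approximation of the ellipse perimeter.
        """
        bpd = 2.0 * b_mm
        h = ((a_mm - b_mm) / (a_mm + b_mm)) ** 2
        hc = math.pi * (a_mm + b_mm) * (1.0 + 3.0 * h / (10.0 + math.sqrt(4.0 - 3.0 * h)))
        return bpd, hc

    print(biometry_from_ellipse(60.0, 48.0))   # e.g. BPD = 96 mm, HC approx. 340 mm
    ```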

  17. Interactive explorations of hierarchical segmentations

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1992-01-01

    The authors report on the implementation of an interactive tool, called HSEGEXP, to interactively explore the hierarchical segmentation produced by the iterative parallel region growing (IPRG) algorithm to select the best segmentation result. This combination of the HSEGEXP tool with the IPRG algorithm amounts to a computer-assisted image segmentation system guided by human interaction. The initial application of the HSEGEXP tool is in the refinement of ground reference data based on the IPRG/HSEGEXP segmentation of the corresponding remotely sensed image data. The HSEGEXP tool is being used to help evaluate the effectiveness of an automatic 'best' segmentation process under development.

  18. Image segmentation using random features

    NASA Astrophysics Data System (ADS)

    Bull, Geoff; Gao, Junbin; Antolovich, Michael

    2014-01-01

    This paper presents a novel algorithm for selecting random features via compressed sensing to improve the performance of Normalized Cuts in image segmentation. Normalized Cuts is a clustering algorithm that has been widely applied to segmenting images, using features such as brightness, intervening contours and Gabor filter responses. Some drawbacks of Normalized Cuts are that computation times and memory usage can be excessive, and the obtained segmentations are often poor. This paper addresses the need to reduce the processing time of Normalized Cuts while improving the resulting segmentations. A significant proportion of the time in calculating Normalized Cuts is spent computing an affinity matrix. A new algorithm has been developed that selects random features using compressed sensing techniques to reduce the computation needed for the affinity matrix. The new algorithm, when compared to the standard implementation of Normalized Cuts for segmenting images from the BSDS500, produces better segmentations in significantly less time.
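
    A toy version of the idea: compress per-pixel features with a random (compressed-sensing-style) projection before computing the affinities that drive a Normalized-Cuts-like spectral clustering. The sketch below uses scikit-learn's SpectralClustering as a stand-in for the paper's Normalized Cuts implementation; names and parameters are illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import SpectralClustering

    def segment_with_random_features(features, n_segments=4, n_random=16, seed=0):
        """features : (N, D) per-pixel feature vectors (brightness, filter responses, ...).

        A random Gaussian matrix compresses D features to n_random dimensions, which
        reduces the cost of computing each pairwise affinity used by the spectral cut.
        """
        rng = np.random.default_rng(seed)
        phi = rng.standard_normal((features.shape[1], n_random)) / np.sqrt(n_random)
        compressed = features @ phi                      # random projection of the features
        sc = SpectralClustering(n_clusters=n_segments, affinity="rbf", gamma=1.0,
                                assign_labels="discretize", random_state=seed)
        return sc.fit_predict(compressed)                # one segment label per pixel
    ```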

  19. Pancreas and cyst segmentation

    NASA Astrophysics Data System (ADS)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
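
    A rough sketch of the two ingredients, seeded random walker delineation followed by a connectivity-based (region-growing-style) clean-up, using scikit-image; the seed handling and the way the two steps are combined are assumptions, not the authors' exact pipeline.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.segmentation import random_walker

    def segment_pancreas_and_cyst(volume, fg_seeds, bg_seeds, beta=130):
        """Seeded random walker delineation followed by a simple connectivity clean-up.

        volume   : 3D CT array (float)
        fg_seeds : boolean mask of user-placed pancreas/cyst seeds
        bg_seeds : boolean mask of background seeds
        """
        markers = np.zeros(volume.shape, dtype=np.int32)
        markers[fg_seeds] = 1
        markers[bg_seeds] = 2
        labels = random_walker(volume, markers, beta=beta, mode='cg')
        mask = labels == 1
        # keep only components connected to the seeds (a crude region-growing constraint)
        lab, _ = ndi.label(mask)
        keep = np.unique(lab[fg_seeds & mask])
        return np.isin(lab, keep[keep > 0])
    ```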

  20. The Drosophila anatomy ontology

    PubMed Central

    2013-01-01

    Background Anatomy ontologies are query-able classifications of anatomical structures. They provide a widely-used means for standardising the annotation of phenotypes and expression in both human-readable and programmatically accessible forms. They are also frequently used to group annotations in biologically meaningful ways. Accurate annotation requires clear textual definitions for terms, ideally accompanied by images. Accurate grouping and fruitful programmatic usage requires high-quality formal definitions that can be used to automate classification and check for errors. The Drosophila anatomy ontology (DAO) consists of over 8000 classes with broad coverage of Drosophila anatomy. It has been used extensively for annotation by a range of resources, but until recently it was poorly formalised and had few textual definitions. Results We have transformed the DAO into an ontology rich in formal and textual definitions in which the majority of classifications are automated and extensive error checking ensures quality. Here we present an overview of the content of the DAO, the patterns used in its formalisation, and the various uses it has been put to. Conclusions As a result of the work described here, the DAO provides a high-quality, queryable reference for the wild-type anatomy of Drosophila melanogaster and a set of terms to annotate data related to that anatomy. Extensive, well referenced textual definitions make it both a reliable and useful reference and ensure accurate use in annotation. Wide use of formal axioms allows a large proportion of classification to be automated and the use of consistency checking to eliminate errors. This increased formalisation has resulted in significant improvements to the completeness and accuracy of classification. The broad use of both formal and informal definitions make further development of the ontology sustainable and scalable. The patterns of formalisation used in the DAO are likely to be useful to developers of other

  1. Chromosomes and clinical anatomy.

    PubMed

    Gardner, Robert James McKinlay

    2016-07-01

    Chromosome abnormalities may cast light on the nature of mechanisms whereby normal anatomy evolves, and abnormal anatomy arises. Correlating genotype to phenotype is an exercise in which the geneticist and the anatomist can collaborate. The increasing power of the new genetic methodologies is enabling an increasing precision in the delineation of chromosome imbalances, even to the nucleotide level; but the classical skills of careful observation and recording remain as crucial as they always have been. Clin. Anat. 29:540-546, 2016. © 2016 Wiley Periodicals, Inc. PMID:26990310

  2. Phasing a segmented telescope

    NASA Astrophysics Data System (ADS)

    Paykin, Irina; Yacobi, Lee; Adler, Joan; Ribak, Erez N.

    2015-02-01

    A crucial part of segmented or multiple-aperture systems is control of the optical path difference between the segments or subapertures. In order to achieve optimal performance we have to phase subapertures to within a fraction of the wavelength, and this requires high accuracy of positioning for each subaperture. We present simulations and hardware realization of a simulated annealing algorithm in an active optical system with sparse segments. In order to align the optical system we applied the optimization algorithm to the image itself. The main advantage of this method over traditional correction methods is that wave-front-sensing hardware and software are no longer required, making the optical and mechanical system much simpler. The results of simulations and laboratory experiments demonstrate the ability of this optimization algorithm to correct both piston and tip-tilt errors.
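
    A minimal sketch of the optimization idea: simulated annealing over per-segment piston values, accepting occasional worse moves to escape local optima while maximizing an image-quality merit function. The merit function below is a stand-in for the sharpness computed from the actual optical image, and all parameters are illustrative.

    ```python
    import numpy as np

    def sharpness(pistons):
        """Stand-in image-quality metric: maximal when all segment pistons are equal (phased)."""
        phase = 2 * np.pi * pistons                      # pistons expressed in waves
        return np.abs(np.exp(1j * phase).mean()) ** 2    # Strehl-like coherence measure

    def anneal_pistons(n_segments=6, steps=5000, t0=0.1, seed=0):
        rng = np.random.default_rng(seed)
        pistons = rng.uniform(-0.5, 0.5, n_segments)     # initial piston errors (waves)
        q = sharpness(pistons)
        best, best_q = pistons.copy(), q
        for i in range(steps):
            t = t0 * (1.0 - i / steps) + 1e-4            # linear cooling schedule
            trial = pistons + rng.normal(0, 0.02, n_segments)
            q_trial = sharpness(trial)
            # accept improvements always, worse moves with Boltzmann probability
            if q_trial > q or rng.random() < np.exp((q_trial - q) / t):
                pistons, q = trial, q_trial
                if q > best_q:
                    best, best_q = pistons.copy(), q
        return best, best_q
    ```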

  3. Phasing a segmented telescope.

    PubMed

    Paykin, Irina; Yacobi, Lee; Adler, Joan; Ribak, Erez N

    2015-02-01

    A crucial part of segmented or multiple-aperture systems is control of the optical path difference between the segments or subapertures. In order to achieve optimal performance we have to phase subapertures to within a fraction of the wavelength, and this requires high accuracy of positioning for each subaperture. We present simulations and hardware realization of a simulated annealing algorithm in an active optical system with sparse segments. In order to align the optical system we applied the optimization algorithm to the image itself. The main advantage of this method over traditional correction methods is that wave-front-sensing hardware and software are no longer required, making the optical and mechanical system much simpler. The results of simulations and laboratory experiments demonstrate the ability of this optimization algorithm to correct both piston and tip-tilt errors. PMID:25768631

  4. Learning Anatomy Enhances Spatial Ability

    ERIC Educational Resources Information Center

    Vorstenbosch, Marc A. T. M.; Klaassen, Tim P. F. M.; Donders, A. R. T.; Kooloos, Jan G. M.; Bolhuis, Sanneke M.; Laan, Roland F. J. M.

    2013-01-01

    Spatial ability is an important factor in learning anatomy. Students with high scores on a mental rotation test (MRT) systematically score higher on anatomy examinations. This study aims to investigate if learning anatomy also oppositely improves the MRT-score. Five hundred first year students of medicine ("n" = 242, intervention) and…

  5. The Anatomy Puzzle Book.

    ERIC Educational Resources Information Center

    Jacob, Willis H.; Carter, Robert, III

    This document features review questions, crossword puzzles, and word search puzzles on human anatomy. Topics include: (1) Anatomical Terminology; (2) The Skeletal System and Joints; (3) The Muscular System; (4) The Nervous System; (5) The Eye and Ear; (6) The Circulatory System and Blood; (7) The Respiratory System; (8) The Urinary System; (9) The…

  6. Anatomy of the Honeybee

    ERIC Educational Resources Information Center

    Postiglione, Ralph

    1977-01-01

    In this insect morphology exercise, students study the external anatomy of the worker honeybee. The structures listed and illustrated are discussed in relation to their functions. A goal of the exercise is to establish the bee as a well-adapted, social insect. (MA)

  7. Illustrated Speech Anatomy.

    ERIC Educational Resources Information Center

    Shearer, William M.

    Written for students in the fields of speech correction and audiology, the text deals with the following: structures involved in respiration; the skeleton and the processes of inhalation and exhalation; phonation and pitch, the larynx, and esophageal speech; muscles involved in articulation; muscles involved in resonance; and the anatomy of the…

  8. Anatomy for Biomedical Engineers

    ERIC Educational Resources Information Center

    Carmichael, Stephen W.; Robb, Richard A.

    2008-01-01

    There is a perceived need for anatomy instruction for graduate students enrolled in a biomedical engineering program. This appeared especially important for students interested in and using medical images. These students typically did not have a strong background in biology. The authors arranged for students to dissect regions of the body that…

  9. Quantitative normal thoracic anatomy at CT.

    PubMed

    Matsumoto, Monica M S; Udupa, Jayaram K; Tong, Yubing; Saboury, Babak; Torigian, Drew A

    2016-07-01

    Automatic anatomy recognition (AAR) methodologies for a body region require detailed understanding of the morphology, architecture, and geographical layout of the organs within the body region. The aim of this paper was to quantitatively characterize the normal anatomy of the thoracic region for AAR. Contrast-enhanced chest CT images from 41 normal male subjects, each with 11 segmented objects, were considered in this study. The individual objects were quantitatively characterized in terms of their linear size, surface area, volume, shape, CT attenuation properties, inter-object distances, size and shape correlations, size-to-distance correlations, and distance-to-distance correlations. A heat map visualization approach was used for intuitively portraying the associations between parameters. Numerous new observations about object geography and relationships were made. Some objects, such as the pericardial region, vary far less than others in size across subjects. Distance relationships are more consistent when involving an object such as trachea and bronchi than other objects. Considering the inter-object distance, some objects have a more prominent correlation, such as trachea and bronchi, right and left lungs, arterial system, and esophagus. The proposed method provides new, objective, and usable knowledge about anatomy whose utility in building body-wide models toward AAR has been demonstrated in other studies. PMID:27065241

  10. An Automated Three-Dimensional Detection and Segmentation Method for Touching Cells by Integrating Concave Points Clustering and Random Walker Algorithm

    PubMed Central

    Gong, Hui; Chen, Shangbin; Zhang, Bin; Ding, Wenxiang; Luo, Qingming; Li, Anan

    2014-01-01

    Characterizing cytoarchitecture is crucial for understanding brain functions and neural diseases. In neuroanatomy, it is an important task to accurately extract cell populations' centroids and contours. Recent advances have permitted imaging at single-cell resolution for an entire mouse brain using the Nissl staining method. However, it is difficult to precisely segment numerous cells, especially those touching each other. As presented herein, we have developed an automated three-dimensional detection and segmentation method applied to the Nissl staining data, with the following two key steps: 1) concave points clustering to determine the seed points of touching cells; and 2) random walker segmentation to obtain cell contours. We also evaluated the performance of our proposed method on several mouse brain datasets, captured with the micro-optical sectioning tomography imaging system, that include closely touching cells. Compared with traditional detection and segmentation methods, our approach shows promising detection accuracy and high robustness. PMID:25111442

  11. [Anatomy of the skull].

    PubMed

    Pásztor, Emil

    2010-01-01

    The anatomy of the human body, based on a special teleological system, is one of the greatest miracles of the world. The skull's primary function is the defence of the brain, so every alteration or disease of the brain results in some alteration of the skull. This relationship can be identified even in the human embryo. The proportions of the 22 bones constituting the skull and the sizes of the sutures are the result not only of phylogeny but of ontogeny as well; for example, the age of skeletons in archaeological findings can be estimated from these features. The present paper outlines the ontogeny and development of the tissues of the skull, the structure of the bone tissue, and the changes in the size of the skull and its parts during the different periods of human life, reflecting on the aesthetics of the skull as well. "Only the human skull can give me an impression of beauty. In spite of all genetic closeness, the skull of a chimpanzee cannot impress me aesthetically," the author confesses. The second part of the treatise lists the authors who contributed to our knowledge of the skull. First of all, the great founder of modern anatomy, Andreas Vesalius, then Pierre Paul Broca and Jacob Benignus Winslow are mentioned. The most important Hungarian contributors were Sámuel Rácz, Pál Bugát and, the former assistant of Broca, Aurél Török; the craniometer, a widely used tool for measuring the size of the skull, was invented by the latter. Members of the Lenhossék family also produced important results in this field of research, while the descriptive anatomy of the skull was complemented by microscopic anatomy thanks to the work of Géza Mihálkovits. PMID:21661257

  12. Human ocular anatomy.

    PubMed

    Kels, Barry D; Grzybowski, Andrzej; Grant-Kels, Jane M

    2015-01-01

    We review the normal anatomy of the human globe, eyelids, and lacrimal system. This contribution explores both the form and function of numerous anatomic features of the human ocular system, which are vital to a comprehensive understanding of the pathophysiology of many oculocutaneous diseases. The review concludes with a reference glossary of selective ophthalmologic terms that are relevant to a thorough understanding of many oculocutaneous disease processes. PMID:25704934

  13. Segmentation of anatomical branching structures based on texture features and conditional random field

    NASA Astrophysics Data System (ADS)

    Nuzhnaya, Tatyana; Bakic, Predrag; Kontos, Despina; Megalooikonomou, Vasileios; Ling, Haibin

    2012-02-01

    This work is a part of our ongoing study aimed at understanding the relation between the topology of anatomical branching structures and the underlying image texture. Morphological variability of the breast ductal network is associated with subsequent development of abnormalities in patients with nipple discharge such as papilloma, breast cancer and atypia. In this work, we investigate complex dependence among ductal components to perform segmentation, the first step in analyzing the topology of ductal lobes. Our automated framework is based on incorporating a conditional random field with texture descriptors of skewness, coarseness, contrast, energy and fractal dimension. These features are selected to capture the architectural variability of the enhanced ducts by encoding spatial variations between pixel patches in the galactographic image. The segmentation algorithm was applied to a dataset of 20 x-ray galactograms obtained at the Hospital of the University of Pennsylvania. We compared the performance of the proposed approach with fully and semi-automated segmentation algorithms based on neural network classification, fuzzy-connectedness, vesselness filter and graph cuts. Global consistency error and confusion matrix analysis were used as accuracy measurements. For the proposed approach, the true positive rate was higher and the false negative rate was significantly lower compared to other fully automated methods. This indicates that segmentation based on a CRF incorporating texture descriptors has the potential to efficiently support the analysis of the complex topology of the ducts and aid in the development of realistic breast anatomy phantoms.

  14. Executions and scientific anatomy.

    PubMed

    Dolezal, Antonín; Jelen, Karel; Stajnrtova, Olga

    2015-12-01

    The very word "anatomy" tells us about this branch's connection with dissection. Studies of anatomy have taken place for approximately 2,300 years already. Anatomy's birthplace lies in Greece and Egypt. Knowledge in this specific field of science was necessary during surgical procedures in ophthalmology and obstetrics. Embalming, like autopsies and the manipulation of relics, took place without public disapproval. Thus, anatomical dissection became part of later forensic sciences. Anatomical studies on humans themselves, which needed to be compared with the knowledge gained through studying procedures performed on animals, elicited public disapprobation and prohibition. When faced with a shortage of cadavers, anatomists resorted to obtaining bodies of the executed and suicide victims, since torture and public display of the mutilated body (including anatomical autopsy) were perceived as an intensification of the death penalty. Decapitation and hanging were the main execution methods meted out for death sentences. Anatomists preferred intact bodies for dissection; hence, convicts could thus avoid torture. This paper lists examples of how this tension was resolved. It covers the manners of killing, vivisection of humans in antiquity and the Middle Ages, experiments before and after execution, revival from apparent death, experiments with galvanic electricity on fresh cadavers, evaluation of sensibility after guillotine execution, and the preparation of anatomical specimens and publications from the fresh bodies of the executed during Nazism. PMID:26859596

  15. Scene segmentation through region growing

    NASA Technical Reports Server (NTRS)

    Latty, R. S.

    1984-01-01

    A computer algorithm to segment Landsat Thematic Mapper (TM) images into areas representing surface features is described. The algorithm is based on a region growing approach and uses edge elements and edge element orientation to define the limits of the surface features. Adjacent regions which are not separated by edges are linked to form larger regions. Some of the advantages of scene segmentation over conventional TM image extraction algorithms are discussed, including surface feature analysis on a pixel-by-pixel basis, and faster identification of the pixels in each region. A detailed flow diagram of region growing algorithm is provided.

  16. Segmentation of stereo terrain images

    NASA Astrophysics Data System (ADS)

    George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.

    2000-06-01

    We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, SAB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.

  17. Parallel Fuzzy Segmentation of Multiple Objects.

    PubMed

    Garduño, Edgar; Herman, Gabor T

    2008-01-01

    The usefulness of fuzzy segmentation algorithms based on fuzzy connectedness principles has been established in numerous publications. New technologies are capable of producing larger-and-larger datasets and this causes the sequential implementations of fuzzy segmentation algorithms to be time-consuming. We have adapted a sequential fuzzy segmentation algorithm to multi-processor machines. We demonstrate the efficacy of such a distributed fuzzy segmentation algorithm by testing it with large datasets (of the order of 50 million points/voxels/items): a speed-up factor of approximately five over the sequential implementation seems to be the norm. PMID:19444333

  18. Automated segmentation of cardiac visceral fat in low-dose non-contrast chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Liang, Mingzhu; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.

    2015-03-01

    Cardiac visceral fat was segmented from low-dose non-contrast chest CT images using a fully automated method. Cardiac visceral fat is defined as the fatty tissues surrounding the heart region, enclosed by the lungs and posterior to the sternum. It is measured by constraining the heart region with an Anatomy Label Map that contains robust segmentations of the lungs and other major organs and estimating the fatty tissue within this region. The algorithm was evaluated on 124 low-dose and 223 standard-dose non-contrast chest CT scans from two public datasets. Based on visual inspection, 343 cases had good cardiac visceral fat segmentation. For quantitative evaluation, manual markings of cardiac visceral fat regions were made in 3 image slices for 45 low-dose scans and the Dice similarity coefficient (DSC) was computed. The automated algorithm achieved an average DSC of 0.93. Cardiac visceral fat volume (CVFV), heart region volume (HRV) and their ratio were computed for each case. The correlation between cardiac visceral fat measurement and coronary artery and aortic calcification was also evaluated. Results indicated the automated algorithm for measuring cardiac visceral fat volume may be an alternative method to the traditional manual assessment of thoracic region fat content in the assessment of cardiovascular disease risk.
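
    Once the heart-region mask is available from the Anatomy Label Map, the fat estimate reduces to counting voxels within a fatty-tissue Hounsfield-unit window inside that mask. The sketch below assumes the commonly used window of roughly -190 to -30 HU (the paper's exact threshold is not stated here) and hypothetical array names.

    ```python
    import numpy as np

    def cardiac_fat_metrics(ct_hu, heart_mask, voxel_mm3, hu_range=(-190, -30)):
        """Estimate cardiac visceral fat volume inside a precomputed heart-region mask.

        ct_hu      : 3D CT volume in Hounsfield units
        heart_mask : 3D boolean mask of the heart region (e.g. from an anatomy label map)
        voxel_mm3  : volume of one voxel in cubic millimetres
        """
        fat = (ct_hu >= hu_range[0]) & (ct_hu <= hu_range[1]) & heart_mask
        cvfv = fat.sum() * voxel_mm3 / 1000.0          # cardiac visceral fat volume (cc)
        hrv = heart_mask.sum() * voxel_mm3 / 1000.0    # heart region volume (cc)
        return cvfv, hrv, (cvfv / hrv if hrv > 0 else np.nan)
    ```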

  19. Automatic segmentation of cartilage in high-field magnetic resonance images of the knee joint with an improved voxel-classification-driven region-growing algorithm using vicinity-correlated subsampling.

    PubMed

    Öztürk, Ceyda Nur; Albayrak, Songül

    2016-05-01

    Anatomical structures that can deteriorate over time, such as cartilage, can be successfully delineated with voxel-classification approaches in magnetic resonance (MR) images. However, segmentation via voxel-classification is a computationally demanding process for high-field MR images with high spatial resolutions. In this study, the whole femoral, tibial, and patellar cartilage compartments in the knee joint were automatically segmented in high-field MR images obtained from Osteoarthritis Initiative using a voxel-classification-driven region-growing algorithm with sample-expand method. Computational complexity of the classification was alleviated via subsampling of the background voxels in the training MR images and selecting a small subset of significant features by taking into consideration systems with limited memory and processing power. Although subsampling of the voxels may lead to a loss of generality of the training models and a decrease in segmentation accuracies, effective subsampling strategies can overcome these problems. Therefore, different subsampling techniques, which involve uniform, Gaussian, vicinity-correlated (VC) sparse, and VC dense subsampling, were used to generate four training models. The segmentation system was experimented using 10 training and 23 testing MR images, and the effects of different training models on segmentation accuracies were investigated. Experimental results showed that the highest mean Dice similarity coefficient (DSC) values for all compartments were obtained when the training models of VC sparse subsampling technique were used. Mean DSC values optimized with this technique were 82.6%, 83.1%, and 72.6% for femoral, tibial, and patellar cartilage compartments, respectively, when mean sensitivities were 79.9%, 84.0%, and 71.5%, and mean specificities were 99.8%, 99.9%, and 99.9%. PMID:27017069
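
    The computational trick is to keep all cartilage voxels but only a fraction of the background voxels when training the voxel classifier. The sketch below shows plain random subsampling with a scikit-learn classifier as a stand-in; the vicinity-correlated sampling strategies and the actual classifier used in the paper are not reproduced.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_voxel_classifier(features, labels, bg_keep=0.05, seed=0):
        """Train a voxel classifier after subsampling the (very large) background class.

        features : (V, D) per-voxel feature vectors from the training MR images
        labels   : (V,) 0 = background, >0 = cartilage compartment label
        bg_keep  : fraction of background voxels retained for training
        """
        rng = np.random.default_rng(seed)
        bg = np.flatnonzero(labels == 0)
        fg = np.flatnonzero(labels > 0)
        bg_sub = rng.choice(bg, size=int(bg_keep * bg.size), replace=False)
        idx = np.concatenate([fg, bg_sub])               # all cartilage, subsampled background
        clf = RandomForestClassifier(n_estimators=100, random_state=seed)
        clf.fit(features[idx], labels[idx])
        return clf
    ```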

  20. Who Is Repeating Anatomy? Trends in an Undergraduate Anatomy Course

    ERIC Educational Resources Information Center

    Schutte, Audra F.

    2016-01-01

    Anatomy courses frequently serve as prerequisites or requirements for health sciences programs. Due to the challenging nature of anatomy, each semester there are students remediating the course (enrolled in the course for a second time), attempting to earn a grade competitive for admissions into a program of study. In this retrospective study,…

  1. [Pandora's box of anatomy].

    PubMed

    Weinberg, Uri; Reis, Shmuel

    2008-05-01

    Physicians in Nazi Germany were among the first to join the Nazi party and the SS, and were considered passionate and active supporters of the regime. Their actions included the development and implementation of racial theory, thus legitimizing the Nazi genocide plan, leadership and execution of the sterilization and euthanasia programs, as well as atrocious human experimentation. Nazi law allowed the use of humans and their remains in research institutions. One of the physicians whose involvement in the Nazi regime was particularly significant was Eduard Pernkopf. He was the head of the Anatomy Institute at the University of Vienna, and later became the president of the university. Pernkopf was a member of the Nazi party, promoted the idea of "racial hygiene", and in 1938 "purified" the university of all Jews. In Pernkopf's atlas of anatomy, the illustrators expressed their sympathy for Nazism by adding Nazi symbols to their illustrations. In response to a demand by the "Yad Vashem" Institute, the sources of the atlas were investigated. The report, which was published in 1998, determined that Pernkopf's Anatomy Institute received almost 1,400 corpses from the Gestapo's execution chambers. Copies of Pernkopf's atlas, discovered by chance at the Rappaport School of Medicine in the Technion, led to dilemmas concerning similar works with a common background. The books initiated a wide debate in Israel and abroad regarding the ethical aspects of using information that originated in Nazi crimes. Moreover, these findings are evidence of the evil to which science and medicine can give rise when they are treated as an unshakable authority. PMID:18770971

  2. Segmental neurofibromatosis.

    PubMed

    Galhotra, Virat; Sheikh, Soheyl; Jindal, Sanjeev; Singla, Anshu

    2014-07-01

    Segmental neurofibromatosis is a rare disorder, characterized by neurofibromas or café-au-lait macules limited to one region of the body. Its occurrence on the face is extremely rare, and only a few cases of segmental neurofibromatosis over the face have been described so far. We present a case of segmental neurofibromatosis involving the buccal mucosa, tongue, cheek, ear, and neck on the right side of the face. PMID:25565748

  3. How Much Anatomy Is Enough?

    ERIC Educational Resources Information Center

    Bergman, Esther M.; Prince, Katinka J. A. H.; Drukker, Jan; van der Vleuten, Cees P. M.; Scherpbier, Albert J. J. A.

    2008-01-01

    Innovations in undergraduate medical education, such as integration of disciplines and problem based learning, have given rise to concerns about students' knowledge of anatomy. This article originated from several studies investigating the knowledge of anatomy of students at the eight Dutch medical schools. The studies showed that undergraduate…

  4. Health Instruction Packages: Cardiac Anatomy.

    ERIC Educational Resources Information Center

    Phillips, Gwen; And Others

    Text, illustrations, and exercises are utilized in these five learning modules to instruct nurses, students, and other health care professionals in cardiac anatomy and functions and in fundamental electrocardiographic techniques. The first module, "Cardiac Anatomy and Physiology: A Review" by Gwen Phillips, teaches the learner to draw and label…

  5. The quail anatomy portal.

    PubMed

    Ruparelia, Avnika A; Simkin, Johanna E; Salgado, David; Newgreen, Donald F; Martins, Gabriel G; Bryson-Richardson, Robert J

    2014-01-01

    The Japanese quail is a widely used model organism for the study of embryonic development; however, anatomical resources are lacking. The Quail Anatomy Portal (QAP) provides 22 detailed three-dimensional (3D) models of quail embryos during development from embryonic day (E)1 to E15 generated using optical projection tomography. The 3D models provided can be virtually sectioned to investigate anatomy. Furthermore, using the 3D nature of the models, we have generated a tool to assist in the staging of quail samples. Volume renderings of each stage are provided and can be rotated to allow visualization from multiple angles allowing easy comparison of features both between stages in the database and between images or samples in the laboratory. The use of JavaScript, PHP and HTML ensures the database is accessible to users across different operating systems, including mobile devices, facilitating its use in the laboratory. The QAP provides a unique resource for researchers using the quail model. The ability to virtually section anatomical models throughout development provides the opportunity for researchers to virtually dissect the quail and also provides a valuable tool for the education of students and researchers new to the field. DATABASE URL: http://quail.anatomyportal.org (For review username: demo, password: quail123). PMID:24715219

  6. Radiological sinonasal anatomy

    PubMed Central

    Alrumaih, Redha A.; Ashoor, Mona M.; Obidan, Ahmed A.; Al-Khater, Khulood M.; Al-Jubran, Saeed A.

    2016-01-01

    Objectives: To assess the prevalence of common radiological variants of sinonasal anatomy among the Saudi population and compare it with the reported prevalence of these variants in other ethnic and population groups. Methods: This is a retrospective cross-sectional study of 121 computerized tomography scans of the nose and paranasal sinuses of patients presenting with sinonasal symptoms to the Department of Otorhinolaryngology, King Fahad Hospital of the University, Khobar, Saudi Arabia, between January 2014 and May 2014. Results: Scans of 121 patients fulfilling the inclusion criteria were reviewed. Concha bullosa was found in 55.4%, Haller cell in 39.7%, and Onodi cell in 28.9%. Dehiscence of the internal carotid artery was found in 1.65%. Type-1 and type-2 optic nerves were the prevalent types. Type-II Keros classification of the depth of the olfactory fossa was the most common among the sample (52.9%). Frontal cells were found in 79.3%; type I was the most common. Conclusions: There is a difference in the prevalence of some radiological variants of the sinonasal anatomy between the Saudi population and other study groups. Surgeons must pay special attention to the preoperative assessment of patients with sinonasal pathology to avoid undesirable complications. PMID:27146614

  7. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.

    1999-01-01

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal locations, orientations and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are then actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.

  8. Neuro-Fuzzy Phasing of Segmented Mirrors

    NASA Technical Reports Server (NTRS)

    Olivier, Philip D.

    1999-01-01

    A new phasing algorithm for segmented mirrors based on neuro-fuzzy techniques is described. A unique feature of this algorithm is the introduction of an observer bank. Its effectiveness is tested in a very simple model with remarkable success. The new algorithm requires much less computational effort than existing algorithms and therefore promises to be quite useful when implemented on more complex models.

  9. Segmental neurofibromatosis.

    PubMed

    Toy, Brian

    2003-10-01

    Segmental neurofibromatosis is a rare variant of neurofibromatosis in which skin lesions are confined to a circumscribed body segment. A case of a 72-year-old woman with this condition is presented. Clinical features and genetic evidence are reviewed. PMID:14594599

  10. An adaptive 3D region growing algorithm to automatically segment and identify thoracic aorta and its centerline using computed tomography angiography scans

    NASA Astrophysics Data System (ADS)

    Ferreira, F.; Dehmeshki, J.; Amin, H.; Dehkordi, M. E.; Belli, A.; Jouannic, A.; Qanadli, S.

    2010-03-01

    Thoracic Aortic Aneurysm (TAA) is a localized swelling of the thoracic aorta. The progressive growth of an aneurysm may eventually cause a rupture if not diagnosed or treated. This necessitates accurate measurement, which in turn calls for accurate segmentation of the aneurysm regions. Computer Aided Detection (CAD) is a tool to automatically detect and segment the TAA in computed tomography angiography (CTA) images. A fundamental step in developing such a system is a robust method for detecting the main vessel and measuring its diameters. In this paper we propose a novel adaptive method to simultaneously segment the thoracic aorta and identify its centerline. For this purpose, an adaptive parametric 3D region growing is proposed in which the seed is automatically selected through detection of the celiac artery and the parameters of the method are re-estimated while the region grows through the aorta. At each phase of region growing, the initial centerline of the aorta is also identified and modified through the process. Thus the proposed method simultaneously detects the aorta and identifies its centerline. The method has been applied to CT images from 20 patients, with good agreement with the visual assessment by two radiologists.
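
    The record above describes an adaptive 3D region growing whose intensity model is re-estimated as the region grows, but the published parameter-update rule and seed-detection step are not reproduced here. The following is only a minimal sketch of that general idea, with all names, thresholds, and the toy volume being illustrative assumptions rather than the authors' CAD method.

```python
# Minimal sketch of adaptive 3D region growing (illustrative, not the published CAD method).
from collections import deque
import numpy as np

def adaptive_region_grow(volume, seed, k=2.5, min_std=10.0):
    """Grow a region from `seed`, accepting 26-connected voxels whose intensity
    lies within k standard deviations of the current region mean."""
    grown = np.zeros(volume.shape, dtype=bool)
    grown[seed] = True
    mean, std, n = float(volume[seed]), min_std, 1
    queue = deque([seed])
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            p = (z + dz, y + dy, x + dx)
            if any(c < 0 or c >= s for c, s in zip(p, volume.shape)) or grown[p]:
                continue
            if abs(float(volume[p]) - mean) <= k * max(std, min_std):
                grown[p] = True
                queue.append(p)
                # Re-estimate the intensity model as the region grows (Welford update).
                n += 1
                delta = float(volume[p]) - mean
                mean += delta / n
                std = np.sqrt(((n - 1) * std**2 + delta * (float(volume[p]) - mean)) / n)
    return grown

# Toy example: a bright tube in a noisy volume.
vol = np.random.normal(0, 5, (40, 40, 40))
vol[:, 18:22, 18:22] += 100
mask = adaptive_region_grow(vol, seed=(20, 20, 20))
print(mask.sum(), "voxels grown")
```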

  11. Combining prior day contours to improve automated prostate segmentation

    SciTech Connect

    Godley, Andrew; Sheplan Olsen, Lawrence J.; Stephans, Kevin; Zhao Anzi

    2013-02-15

    Purpose: To improve the accuracy of automatically segmented prostate, rectum, and bladder contours required for online adaptive therapy. The contouring accuracy on the current image guidance [image guided radiation therapy (IGRT)] scan is improved by combining contours from earlier IGRT scans via the simultaneous truth and performance level estimation (STAPLE) algorithm. Methods: Six IGRT prostate patients treated with daily kilovoltage (kV) cone-beam CT (CBCT) had their original plan CT and nine CBCTs contoured by the same physician. Three types of automated contours were produced for analysis. (1) Plan: by deformably registering the plan CT to each CBCT and then using the resulting deformation field to morph the plan contours to match the CBCT anatomy. (2) Previous: the contour set drawn by the physician on the previous day's CBCT is similarly deformed to match the current CBCT anatomy. (3) STAPLE: the contours drawn by the physician on each prior CBCT and the plan CT are deformed to match the CBCT anatomy to produce multiple contour sets. These sets are combined using the STAPLE algorithm into one optimal set. Results: Compared to Plan and Previous, STAPLE improved the average Dice coefficient (DC) with the original physician-drawn CBCT contours as follows: bladder: 0.81 ± 0.13, 0.91 ± 0.06, and 0.92 ± 0.06; prostate: 0.75 ± 0.08, 0.82 ± 0.05, and 0.84 ± 0.05; and rectum: 0.79 ± 0.06, 0.81 ± 0.06, and 0.85 ± 0.04, respectively. The STAPLE results are within intraobserver consistency, determined by the physician blindly recontouring a subset of CBCTs. Comparing plans recalculated using the physician and STAPLE contours showed an average disagreement of less than 1% for prostate D98 and mean dose, and 5% and 3% for bladder and rectum mean dose, respectively. One scan takes an average of 19 s to contour. Using five scans plus STAPLE takes less than 110 s on a 288-core graphics processing unit. Conclusions: Combining the plan and…
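
    The record above fuses several deformed prior-day contours with STAPLE and reports Dice coefficients. The full STAPLE EM estimator is longer than fits here, so the sketch below shows only the Dice metric and a simple majority-vote fusion as a simplified stand-in; the synthetic masks and names are illustrative, not the authors' implementation.

```python
# Dice coefficient and a simple majority-vote fusion of several candidate contours.
# Majority voting is a simplified stand-in for STAPLE, which additionally estimates
# per-rater sensitivity/specificity via EM; data and names here are illustrative only.
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def majority_vote(masks):
    """Fuse several binary masks: a voxel is foreground if most masks agree."""
    stack = np.stack([m.astype(np.uint8) for m in masks])
    return stack.sum(axis=0) >= (len(masks) / 2.0)

# Toy example with three noisy versions of a square "prostate" contour.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64), dtype=bool)
truth[20:44, 20:44] = True
noisy = [np.logical_xor(truth, rng.random(truth.shape) < 0.05) for _ in range(3)]
fused = majority_vote(noisy)
print("single-mask Dice:", round(dice(noisy[0], truth), 3))
print("fused Dice:      ", round(dice(fused, truth), 3))
```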

  12. Fully Automated Whole-Head Segmentation with Improved Smoothness and Continuity, with Theory Reviewed

    PubMed Central

    Huang, Yu; Parra, Lucas C.

    2015-01-01

    Individualized current-flow models are needed for precise targeting of brain structures using transcranial electrical or magnetic stimulation (TES/TMS). The same is true for current-source reconstruction in electroencephalography and magnetoencephalography (EEG/MEG). The first step in generating such models is to obtain an accurate segmentation of individual head anatomy, including not only the brain but also cerebrospinal fluid (CSF), skull and soft tissues, with a field of view (FOV) that covers the whole head. Currently available automated segmentation tools only provide results for brain tissues, have a limited FOV, and do not guarantee continuity and smoothness of tissues, which is crucially important for accurate current-flow estimates. Here we present a tool that addresses these needs. It is based on a rigorous Bayesian inference framework that combines an image intensity model, an anatomical prior (atlas) and morphological constraints using Markov random fields (MRF). The method is evaluated on 20 simulated and 8 real head volumes acquired with magnetic resonance imaging (MRI) at 1 mm³ resolution. We find improved surface smoothness and continuity as compared to the segmentation algorithms currently implemented in Statistical Parametric Mapping (SPM). With this tool, accurate and morphologically correct modeling of whole-head anatomy for individual subjects may now be feasible on a routine basis. Code and data are fully integrated into the SPM software tool and are made publicly available. In addition, a review of MRI segmentation using atlases and MRFs over the last 20 years is also provided, with the general mathematical framework clearly derived. PMID:25992793

  13. Deformable templates guided discriminative models for robust 3D brain MRI segmentation.

    PubMed

    Liu, Cheng-Yi; Iglesias, Juan Eugenio; Tu, Zhuowen

    2013-10-01

    Automatically segmenting anatomical structures from 3D brain MRI images is an important task in neuroimaging. One major challenge is to design and learn effective image models accounting for the large variability in anatomy and data acquisition protocols. A deformable template is a type of generative model that attempts to explicitly match an input image with a template (atlas), and thus, they are robust against global intensity changes. On the other hand, discriminative models combine local image features to capture complex image patterns. In this paper, we propose a robust brain image segmentation algorithm that fuses together deformable templates and informative features. It takes advantage of the adaptation capability of the generative model and the classification power of the discriminative models. The proposed algorithm achieves both robustness and efficiency, and can be used to segment brain MRI images with large anatomical variations. We perform an extensive experimental study on four datasets of T1-weighted brain MRI data from different sources (1,082 MRI scans in total) and observe consistent improvement over the state-of-the-art systems. PMID:23836390

  14. Three-dimensional segmentation of the tumor mass in computed tomographic images of neuroblastoma

    NASA Astrophysics Data System (ADS)

    Deglint, Hanford J.; Rangayyan, Rangaraj M.; Boag, Graham S.

    2004-05-01

    Tumor definition and diagnosis require the analysis of the spatial distribution and Hounsfield unit (HU) values of voxels in computed tomography (CT) images, coupled with a knowledge of normal anatomy. Segmentation of the tumor in neuroblastoma is complicated by the fact that the mass is almost always heterogeneous in nature; furthermore, viable tumor, necrosis, fibrosis, and normal tissue are often intermixed. Rather than attempt to separate these tissue types into distinct regions, we propose to explore methods to delineate the normal structures expected in abdominal CT images, remove them from further consideration, and examine the remaining parts of the images for the tumor mass. We explore the use of fuzzy connectivity for this purpose. Expert knowledge provided by the radiologist in the form of the expected structures and their shapes, HU values, and radiological characteristics are also incorporated in the segmentation algorithm. Segmentation and analysis of the tissue composition of the tumor can assist in quantitative assessment of the response to chemotherapy and in the planning of delayed surgery for resection of the tumor. The performance of the algorithm is evaluated using cases acquired from the Alberta Children's Hospital.

  15. Evaluation of multiatlas label fusion for in vivo magnetic resonance imaging orbital segmentation

    PubMed Central

    Panda, Swetasudha; Asman, Andrew J.; Khare, Shweta P.; Thompson, Lindsey; Mawn, Louise A.; Smith, Seth A.; Landman, Bennett A.

    2014-01-01

    Multiatlas methods have been successful for brain segmentation, but their application to smaller anatomies remains relatively unexplored. We evaluate seven statistical and voting-based label fusion algorithms (and six additional variants) to segment the optic nerves, eye globes, and chiasm. For nonlocal simultaneous truth and performance level estimation (STAPLE), we evaluate different intensity similarity measures (including mean square difference, locally normalized cross-correlation, and a hybrid approach). Each algorithm is evaluated in terms of the Dice overlap and symmetric surface distance metrics. Finally, we evaluate refinement of label fusion results using a learning-based correction method for consistent bias correction and Markov random field regularization. The multiatlas labeling pipelines were evaluated on a cohort of 35 subjects including both healthy controls and patients. Across all three structures, nonlocal spatial STAPLE (NLSS) with a mixed weighting type provided the most consistent results; for the optic nerves, NLSS resulted in a median Dice similarity coefficient of 0.81, a mean surface distance of 0.41 mm, and a Hausdorff distance of 2.18 mm. Joint label fusion resulted in slightly superior median performance for the optic nerves (0.82, 0.39 mm, and 2.15 mm), but slightly worse on the globes. The fully automated multiatlas labeling approach provides robust segmentations of orbital structures on magnetic resonance imaging even in patients for whom significant atrophy (optic nerve head drusen) or inflammation (multiple sclerosis) is present. PMID:25558466
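
    The record above scores label fusion with Dice overlap and symmetric surface distance (plus Hausdorff distance). The exact evaluation code is not public in the abstract, so the following is a small sketch of how such surface-distance metrics are commonly computed from distance transforms; the spacing argument, function names, and toy spheres are illustrative assumptions.

```python
# Sketch of mean symmetric surface distance and Hausdorff distance between two
# binary segmentations, via Euclidean distance transforms (illustrative only).
import numpy as np
from scipy import ndimage

def surface(mask):
    """Boundary voxels of a binary mask (mask minus its erosion)."""
    return mask & ~ndimage.binary_erosion(mask)

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distances from every surface voxel of `a` to the surface of `b`."""
    dist_to_b = ndimage.distance_transform_edt(~surface(b), sampling=spacing)
    return dist_to_b[surface(a)]

def mean_symmetric_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    d_ab = surface_distances(a, b, spacing)
    d_ba = surface_distances(b, a, spacing)
    return (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))

def hausdorff_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    return max(surface_distances(a, b, spacing).max(),
               surface_distances(b, a, spacing).max())

# Toy example: two slightly offset spheres.
zz, yy, xx = np.ogrid[:48, :48, :48]
a = (zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 < 12 ** 2
b = (zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 26) ** 2 < 12 ** 2
print("MSSD (mm):     ", round(mean_symmetric_surface_distance(a, b), 2))
print("Hausdorff (mm):", round(hausdorff_distance(a, b), 2))
```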

  16. Pleura space anatomy

    PubMed Central

    Charalampidis, Charalampos; Youroukou, Andrianna; Lazaridis, George; Baka, Sofia; Mpoukovinas, Ioannis; Karavasilis, Vasilis; Kioumis, Ioannis; Pitsiou, Georgia; Papaiwannou, Antonis; Karavergou, Anastasia; Tsakiridis, Kosmas; Katsikogiannis, Nikolaos; Sarika, Eirini; Kapanidis, Konstantinos; Sakkas, Leonidas; Korantzis, Ipokratis; Lampaki, Sofia; Zarogoulidis, Konstantinos

    2015-01-01

    The pleural cavity is the potential space between the two pleurae (visceral and parietal) of the lungs. The pleurae are serous membranes that fold back onto themselves to form a two-layered membranous structure. The thin space between the two pleural layers is known as the pleural cavity and normally contains a small amount of pleural fluid. There are two layers: the outer pleura (parietal pleura) is attached to the chest wall, and the inner pleura (visceral pleura) covers the lungs and adjoining structures, including blood vessels, bronchi and nerves. The parietal pleura is highly sensitive to pain, while the visceral pleura is not, owing to its lack of sensory innervation. In the current review we present the anatomy of the pleural space. PMID:25774304

  17. The Anatomy of Galaxies

    NASA Astrophysics Data System (ADS)

    D'Onofrio, Mauro; Rampazzo, Roberto; Zaggia, Simone; Longair, Malcolm S.; Ferrarese, Laura; Marziani, Paola; Sulentic, Jack W.; van der Kruit, Pieter C.; Laurikainen, Eija; Elmegreen, Debra M.; Combes, Françoise; Bertin, Giuseppe; Fabbiano, Giuseppina; Giovanelli, Riccardo; Calzetti, Daniela; Moss, David L.; Matteucci, Francesca; Djorgovski, Stanislav George; Fraix-Burnet, Didier; Graham, Alister W. McK.; Tully, Brent R.

    Just after WWII, astronomy entered its "Golden Age", as did many other sciences and human activities, especially in Western countries. The improved resolution of telescopes and the appearance of new efficient light detectors (e.g., CCDs in the mid-eighties) greatly impacted extragalactic research. The first morphological analyses of galaxies were rapidly superseded by "anatomical" studies of their structural components, star and gas content, and, in general, by detailed investigations of their properties. As in human anatomy, where the final goal is to understand the functionality of the organs essential for the life of the body, galaxies were dissected to discover their basic structural components and, ultimately, the mystery of their existence.

  18. [Surgery without anatomy?].

    PubMed

    Stelzner, F

    2016-08-01

    Anatomy is the basis of all operative medicine. While this branch of scientific medicine is frequently not explicitly mentioned in surgical publications, it is nonetheless quintessential to medical education. In the era of video sequences and digitized images, surgical methods are frequently communicated in the form of cinematic documentation of surgical procedures; however, this often occurs without the help of explanatory drawings or subtexts that would illustrate the underlying anatomical nomenclature or comment on fine, functionally important details, and sometimes without making any mention of the surgeon. In scientific manuscripts, color illustrations frequently appear in such overwhelming quantities that they resemble long arrays of trophies but fail to give detailed explanations that would aid the therapeutic translation of the novel datasets. In a similar fashion, many anatomy textbooks prefer to place emphasis on illustrations and photographs while supplying only a paucity of explanations that would foster the understanding of functional contexts, and thus confuse students and practitioners alike. There is great temptation to repeat existing data and facts over and over again, while it is proportionally rare to make reference to truly original scientific discoveries. A number of examples are given in this article to illustrate how discoveries made even a long time ago can still contribute to scientific progress in current times. This includes the NO signaling molecule, which was first described in 1775 but was only discovered to have a pivotal role as a neurotransmitter in the function of human paradoxical sphincter muscles in 2012 and 2015. Readers of scientific manuscripts often long for explanations by the numerous silent coauthors of a publication who could contribute to the main topic by adding in-depth illustrations (e.g., malignograms, evolution and involution of lymph node structures). PMID:27251482

  19. Automatic brain tumor segmentation

    NASA Astrophysics Data System (ADS)

    Clark, Matthew C.; Hall, Lawrence O.; Goldgof, Dmitry B.; Velthuizen, Robert P.; Murtaugh, F. R.; Silbiger, Martin L.

    1998-06-01

    A system that automatically segments and labels complete glioblastoma-multiform tumor volumes in magnetic resonance images of the human brain is presented. The magnetic resonance images consist of three feature images (T1-weighted, proton density, T2-weighted) and are processed by a system which integrates knowledge-based techniques with multispectral analysis and is independent of a particular magnetic resonance scanning protocol. Initial segmentation is performed by an unsupervised clustering algorithm. The segmented image, along with cluster centers for each class, is provided to a rule-based expert system which extracts the intra-cranial region. Multispectral histogram analysis separates suspected tumor from the rest of the intra-cranial region, with region analysis used in performing the final tumor labeling. This system has been trained on eleven volume data sets and tested on twenty-two unseen volume data sets acquired from a single magnetic resonance imaging system. The knowledge-based tumor segmentation was compared with radiologist-verified "ground truth" tumor volumes and results generated by a supervised fuzzy clustering algorithm. The results of this system generally correspond well to ground truth, both on a per slice basis and, more importantly, in tracking total tumor volume during treatment over time.
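
    The initial step in the record above is unsupervised clustering of the three multispectral feature images; the downstream rule-based expert system cannot be reconstructed from the abstract. The sketch below shows only that clustering step with scikit-learn's KMeans on stacked (T1, PD, T2) voxel intensities, using synthetic data and an arbitrary cluster count as assumptions.

```python
# Sketch of the unsupervised clustering step: k-means over multispectral
# (T1, PD, T2) voxel intensities. The rule-based expert system of the record
# is not reproduced; data and cluster count are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def cluster_multispectral(t1, pd, t2, n_clusters=5, seed=0):
    """Cluster voxels by their (T1, PD, T2) feature vectors; return a label image."""
    features = np.stack([t1.ravel(), pd.ravel(), t2.ravel()], axis=1).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(features)
    return labels.reshape(t1.shape), km.cluster_centers_

# Toy example: a synthetic two-tissue slice plus noise.
rng = np.random.default_rng(0)
shape = (64, 64)
tissue = np.zeros(shape); tissue[16:48, 16:48] = 1
t1 = 100 + 80 * tissue + rng.normal(0, 5, shape)
pd = 120 + 40 * tissue + rng.normal(0, 5, shape)
t2 = 90 + 60 * tissue + rng.normal(0, 5, shape)
labels, centers = cluster_multispectral(t1, pd, t2, n_clusters=2)
print("cluster centers:\n", centers.round(1))
```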

  20. Carpal Ligament Anatomy and Biomechanics.

    PubMed

    Pulos, Nicholas; Bozentka, David J

    2015-08-01

    A fundamental understanding of the ligamentous anatomy of the wrist is critical for any physician attempting to treat carpal instability. The anatomy of the wrist is complex, not only because of the number of named structures and their geometry but also because of the inconsistencies in describing these ligaments. The complex anatomy of the wrist is described through a review of the carpal ligaments and their effect on normal carpal motion. Mastery of this topic facilitates the physician's understanding of the patterns of instability that are seen clinically. PMID:26205699

  1. [The French lessons of anatomy].

    PubMed

    Bouchet, Alain

    2003-01-01

    The "Lessons of Anatomy" can be considered as a step of Medicine to Art. For several centuries the exhibition of a corpse's dissection was printed on the title-page of published works. Since the seventeenth century, the "Lessons of Anatomy" became a picture on the title-page in order to highlight the well-known names of the european anatomists. The study is limited to the French Lessons of Anatomy found in books or pictures after the invention of printing. PMID:14626253

  2. Hyperspectral image segmentation of the common bile duct

    NASA Astrophysics Data System (ADS)

    Samarov, Daniel; Wehner, Eleanor; Schwarz, Roderich; Zuzak, Karel; Livingston, Edward

    2013-03-01

    Over the course of the last several years, hyperspectral imaging (HSI) has seen increased usage in biomedicine. Within the medical field in particular, HSI has been recognized as having the potential to make an immediate impact by reducing the risks and complications associated with laparotomies (surgical procedures involving large incisions into the abdominal wall) and related procedures. There are several ongoing studies focused on such applications. Hyperspectral images were acquired during pancreatoduodenectomies (commonly referred to as Whipple procedures), a surgical procedure done to remove cancerous tumors involving the pancreas and gallbladder. As a result of the complexity of the local anatomy, identifying where the common bile duct (CBD) is can be difficult, resulting in a comparatively high incidence of injury to the CBD and associated complications. It is here that HSI has the potential to help reduce the risk of such events. Because the bile contained within the CBD exhibits a unique spectral signature, we are able to utilize HSI segmentation algorithms to help identify where the CBD is. In the work presented here we discuss approaches to this segmentation problem and present the results.

  3. Automatic segmentation of psoriasis lesions

    NASA Astrophysics Data System (ADS)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods of calculating PASI for the estimation of lesions. Current algorithms can only handle single erythema or only deal with scaling segmentation, whereas in practice scaling and erythema are often mixed together. In order to segment the lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. In the first step, polarized light is applied, exploiting the skin's Tyndall effect in imaging to eliminate reflections, and the Lab color space is used to fit human perception. In the second step, a sliding window and its sub-windows are used to extract textural and color features. In this step, an image roughness feature is defined so that scaling can be easily separated from normal skin. Finally, random forests are used to ensure the generalization ability of the algorithm. This algorithm gives reliable segmentation results even when images have different lighting conditions and skin types. On the data set provided by Union Hospital, more than 90% of images can be segmented accurately.
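
    The record above trains random forests on per-pixel color and texture features from sliding windows; its specific roughness feature is not described in enough detail to reproduce. The sketch below is a generic stand-in: Lab color channels plus a local-standard-deviation texture proxy fed to scikit-learn's RandomForestClassifier, with synthetic data and parameters chosen only for illustration.

```python
# Sketch of per-pixel random-forest classification from color + texture features.
# Local standard deviation stands in for the paper's roughness feature, purely
# as an illustration; the image and labels are synthetic.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier
from skimage.color import rgb2lab

def pixel_features(rgb):
    """Per-pixel features: L, a, b channels plus a local-std 'roughness' proxy."""
    lab = rgb2lab(rgb)
    L = lab[..., 0]
    mean = ndimage.uniform_filter(L, size=7)
    sq_mean = ndimage.uniform_filter(L ** 2, size=7)
    roughness = np.sqrt(np.clip(sq_mean - mean ** 2, 0, None))
    return np.stack([lab[..., 0], lab[..., 1], lab[..., 2], roughness], axis=-1)

# Toy training data: a reddish "lesion" patch on pale "skin".
rng = np.random.default_rng(0)
img = np.full((64, 64, 3), 0.8)
img[20:44, 20:44] = [0.7, 0.3, 0.3]
img = np.clip(img + rng.normal(0, 0.02, img.shape), 0, 1)
labels = np.zeros((64, 64), dtype=int)
labels[20:44, 20:44] = 1

X = pixel_features(img).reshape(-1, 4)
y = labels.ravel()
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict(X).reshape(labels.shape)
print("training-pixel accuracy:", round((pred == labels).mean(), 3))
```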

  4. Scorpion image segmentation system

    NASA Astrophysics Data System (ADS)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

    Death as a result of scorpion stings has been a major public health problem in developing countries. Despite the high rate of death from scorpion stings, few reports exist in the literature on intelligent devices and systems for automatic detection of scorpions. This paper proposes a digital image processing approach based on the fluorescing characteristics of scorpions under ultraviolet (UV) light for automatic detection and identification. The acquired UV-based images undergo pre-processing to equalize uneven illumination and to separate color space channels. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient for segmenting the scorpion from other background components in the image. Two approaches to image segmentation are proposed in this work, namely a simple average segmentation technique and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results show an average accuracy of 97.7% in correctly classifying pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
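
    The record above notes that simple thresholding of the green channel suffices to separate the fluorescing scorpion from the background. The exact threshold selection is not stated, so the sketch below assumes Otsu's method as the automatic threshold and uses a synthetic image; both are illustrative choices, not the authors' pipeline.

```python
# Sketch of the simplest pipeline mentioned in the record: threshold the green
# channel of an RGB image captured under UV light. Otsu's method is an assumed
# automatic threshold choice; the image is synthetic.
import numpy as np
from skimage.filters import threshold_otsu

def segment_green_channel(rgb):
    """Binary mask from Otsu thresholding of the green channel."""
    green = rgb[..., 1].astype(float)
    return green > threshold_otsu(green)

# Toy image: a bright green "fluorescing scorpion" blob on a dark background.
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, (64, 64, 3))
img[24:40, 20:44, 1] += 0.6          # green fluorescence region
mask = segment_green_channel(np.clip(img, 0, 1))
print("foreground pixels:", int(mask.sum()))
```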

  5. A new osteophyte segmentation algorithm using partial shape model and its applications to rabbit femur anterior cruciate ligament transection via micro-CT imaging.

    PubMed

    Saha, P K; Liang, G; Elkins, J M; Coimbra, A; Duong, L T; Williams, D S; Sonka, M

    2011-08-01

    An osteophyte is an additional bony growth on a normal bone surface that limits or stops motion at a deteriorating joint. Detection and quantification of osteophytes from CT images is helpful in assessing disease status as well as in treatment and surgery planning. However, it is difficult to distinguish between osteophytes and healthy bone using simple thresholding or edge/texture features due to the similarity of their material composition. In this paper, we present a new method primarily based on the active shape model (ASM) to solve this problem and evaluate its application to the anterior cruciate ligament transection (ACLT) rabbit femur model via CT imaging. The common idea behind most ASM-based segmentation methods is to first build a parametric shape model from a training dataset and then apply the model to find a shape instance in a target image. A common challenge with such approaches is that a diseased bone shape is significantly altered at regions with osteophyte deposition, misguiding an ASM method and eventually leading to suboptimum segmentations. This difficulty is overcome using a new partial-ASM method that uses bone shape over healthy regions and extrapolates it over the diseased region according to the underlying shape model. Finally, osteophytes are segmented by subtracting the partial-ASM-derived shape from the overall diseased shape. Also, a new semi-automatic method is presented in this paper for efficiently building a 3D shape model for an anatomic region using manual reference of a few anatomically defined fiducial landmarks that are highly reproducible on individuals. Accuracy of the method has been examined on simulated phantoms while reproducibility and sensitivity have been evaluated on CT images of 2-, 4- and 8-week post-ACLT and sham-treated rabbit femurs. Experimental results have shown that the method is highly accurate (R2 = 0.99), reproducible (ICC = 0.97), and sensitive in detecting disease progression (p-values: 0.065, 0.001 and < 0.001 for 2- vs. 4, 4…

  6. A New Osteophyte Segmentation Algorithm Using the Partial Shape Model and Its Applications to Rabbit Femur Anterior Cruciate Ligament Transection via Micro-CT Imaging

    PubMed Central

    Liang, G.; Elkins, J. M.; Coimbra, A.; Duong, L. T.; Williams, D. S.; Sonka, M.

    2015-01-01

    An osteophyte is an additional bony growth on a normal bone surface that limits or stops motion at a deteriorating joint. Detection and quantification of osteophytes from computed tomography (CT) images is helpful in assessing disease status as well as in treatment and surgery planning. However, it is difficult to distinguish between osteophytes and healthy bone using simple thresholding or edge/texture features due to the similarity of their material composition. In this paper, we present a new method primarily based on the active shape model (ASM) to solve this problem and evaluate its application to the anterior cruciate ligament transection (ACLT) rabbit femur model via micro-CT imaging. The common idea behind most ASM-based segmentation methods is to first build a parametric shape model from a training dataset and then apply the model to find a shape instance in a target image. A common challenge with such approaches is that a diseased bone shape is significantly altered at regions with osteophyte deposition, misguiding an ASM method and eventually leading to suboptimum segmentations. This difficulty is overcome using a new partial-ASM method that uses bone shape over healthy regions and extrapolates it over the diseased region according to the underlying shape model. Finally, osteophytes are segmented by subtracting the partial-ASM-derived shape from the overall diseased shape. Also, a new semiautomatic method is presented in this paper for efficiently building a 3-D shape model for an anatomic region using manual reference of a few anatomically defined fiducial landmarks that are highly reproducible on individuals. Accuracy of the method has been examined on simulated phantoms while reproducibility and sensitivity have been evaluated on micro-CT images of 2-, 4- and 8-week post-ACLT and sham-treated rabbit femurs. Experimental results have shown that the method is highly accurate (R2 = 0.99), reproducible (ICC = 0.97), and sensitive in detecting disease progression (p…

  7. OLFACTION: ANATOMY, PHYSIOLOGY AND BEHAVIOR

    EPA Science Inventory

    The anatomy, physiology and function of the olfactory system are reviewed, as are the normal effects of olfactory stimulation. It is speculated that olfaction may have important but unobtrusive effects on human behavior.

  8. Surgical Anatomy of the Eyelids.

    PubMed

    Sand, Jordan P; Zhu, Bovey Z; Desai, Shaun C

    2016-05-01

    Slight alterations in the intricate anatomy of the upper and lower eyelid or their underlying structures can have pronounced consequences for ocular esthetics and function. The understanding of periorbital structures and their interrelationships continues to evolve and requires consideration when performing complex eyelid interventions. Maintaining a detailed appreciation of this region is critical to successful cosmetic or reconstructive surgery. This article presents a current review of the anatomy of the upper and lower eyelid with a focus on surgical implications. PMID:27105794

  9. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.

  10. Segmentation of Indus Texts: A Dynamic Programming Approach.

    ERIC Educational Resources Information Center

    Siromoney, Gift; Huq, Abdul

    1988-01-01

    Demonstrates how a dynamic programming algorithm can be developed to segment unusually long written inscriptions from the Indus Valley Civilization. Explains the problem of segmentation, discusses the dynamic programming algorithm used, and includes tables which illustrate the segmentation of the inscriptions. (GEA)
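
    The record above applies dynamic programming to segment long symbol sequences, but its scoring model is not given. The sketch below shows a generic DP sequence segmentation in the word-break style under an assumed per-segment score; the "sign group" dictionary and scores are invented placeholders, not the scheme used for the Indus texts.

```python
# Generic dynamic-programming segmentation of a symbol sequence. The scoring
# function and dictionary below are invented placeholders for illustration.
def segment_dp(sequence, segment_score, max_len=4):
    """Return the segmentation (list of segments) maximizing the sum of scores."""
    n = len(sequence)
    best = [float("-inf")] * (n + 1)   # best[i] = best score for sequence[:i]
    back = [0] * (n + 1)               # back[i] = start index of the last segment
    best[0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            score = best[j] + segment_score(sequence[j:i])
            if score > best[i]:
                best[i], back[i] = score, j
    # Reconstruct segments by walking the back-pointers.
    segments, i = [], n
    while i > 0:
        segments.append(sequence[back[i]:i])
        i = back[i]
    return segments[::-1], best[n]

# Toy scoring: known sign groups get a bonus, everything else a length penalty.
known_groups = {"AB": 2.0, "CD": 2.0, "ABC": 2.5}
score = lambda seg: known_groups.get(seg, -len(seg))
print(segment_dp("ABCDAB", score))     # e.g. (['AB', 'CD', 'AB'], 6.0)
```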

  11. Hierarchical image segmentation for learning object priors

    SciTech Connect

    Prasad, Lakshman; Yang, Xingwei; Latecki, Longin J; Li, Nan

    2010-11-10

    The proposed segmentation approach naturally combines experience-based and image-based information. The experience-based information is obtained by training a classifier for each object class. For a given test image, the result of each classifier is represented as a probability map. The final segmentation is obtained with a hierarchical image segmentation algorithm that considers both the probability maps and image features such as color and edge strength. We also utilize the image region hierarchy to obtain not only local but also semi-global features as input to the classifiers. Moreover, to get robust probability maps, we take into account region context information by averaging the probability maps over different levels of the hierarchical segmentation algorithm. The obtained segmentation results are superior to those of state-of-the-art supervised image segmentation algorithms.

  12. Bayesian segmentation of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Mohammadpour, Adel; Féron, Olivier; Mohammad-Djafari, Ali

    2004-11-01

    In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.

  13. Template characterization and correlation algorithm created from segmentation for the iris biometric authentication based on analysis of textures implemented on a FPGA

    NASA Astrophysics Data System (ADS)

    Giacometto, F. J.; Vilardy, J. M.; Torres, C. O.; Mattos, L.

    2011-01-01

    Among the biometric signals most used to set personal security permissions, iris recognition based on texture and blood vessel images has taken on increasing importance, owing to the richness of these two characteristics, which are unique to each individual. This paper presents an implementation of a template characterization and correlation algorithm for biometric authentication based on iris texture analysis, programmed on an FPGA (Field Programmable Gate Array); authentication relies on characterization methods based on frequency analysis of the sample and on frequency correlation to obtain the expected authentication results.

  14. Integrated segmentation of cellular structures

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo

    2011-03-01

    Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over- and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding based binarization process and seed detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection are used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
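
    The record above chains illumination correction, thresholding, blob-based seed detection, and clustering; the full pipeline cannot be reproduced from the abstract. The sketch below shows only the common seeded-watershed pattern that such nuclei pipelines build on (Otsu binarization, distance-transform peaks as seeds, watershed to split touching nuclei) using scikit-image, with illustrative parameters and a synthetic image.

```python
# Minimal seeded-watershed sketch for splitting touching nuclei: binarize,
# take distance-transform peaks as seeds, then watershed. This is only the
# common pattern the record's pipeline resembles, not its full algorithm.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_nuclei(gray, min_distance=10):
    binary = gray > threshold_otsu(gray)
    dist = ndimage.distance_transform_edt(binary)
    peaks = peak_local_max(dist, min_distance=min_distance, labels=binary)
    seeds = np.zeros(gray.shape, dtype=int)
    for i, (r, c) in enumerate(peaks, start=1):
        seeds[r, c] = i
    return watershed(-dist, seeds, mask=binary)

# Toy image: two overlapping bright "nuclei".
yy, xx = np.mgrid[:96, :96]
gray = np.exp(-(((yy - 40) ** 2 + (xx - 40) ** 2) / 200.0)) \
     + np.exp(-(((yy - 55) ** 2 + (xx - 55) ** 2) / 200.0))
labels = segment_nuclei(gray)
print("nuclei found:", labels.max())
```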

  15. Anatomy of an incident

    DOE PAGESBeta

    Cournoyer, Michael E.; Trujillo, Stanley; Lawton, Cindy M.; Land, Whitney M.; Schreiber, Stephen B.

    2016-03-23

    A traditional view of incidents is that they are caused by shortcomings in human competence, attention, or attitude. It may be under the label of "loss of situational awareness," procedure "violation," or "poor" management. A different view is that human error is not the cause of failure, but a symptom of failure: trouble deeper inside the system. In this perspective, human error is not the conclusion, but rather the starting point of investigations. During an investigation, three types of information are gathered: physical, documentary, and human (recall/experience). Through the causal analysis process, the apparent cause or causes are identified as the most probable cause or causes of an incident or condition that management has the control to fix and for which effective recommendations for corrective actions can be generated. A causal analysis identifies relevant human performance factors. In the following presentation, the anatomy of a radiological incident is discussed, and one case study is presented. We analyzed the contributing factors that caused a radiological incident, identifying the underlying conditions, decisions, actions, and inactions that contributed to it, including weaknesses that may warrant improvements that tolerate error. Measures that reduce consequences or the likelihood of recurrence are discussed.

  16. Anatomy of trisomy 18.

    PubMed

    Roberts, Wallisa; Zurada, Anna; Zurada-Zielińska, Agnieszka; Gielecki, Jerzy; Loukas, Marios

    2016-07-01

    Trisomy 18 is the second most common aneuploidy after trisomy 21. Due to its multi-systemic defects, it has a poor prognosis with a 50% chance of survival beyond one week and a <10% chance of survival beyond one year of life. However, this prognosis has been challenged by the introduction of aggressive interventional therapies for patients born with trisomy 18. As a result, a review of the anatomy associated with this defect is imperative. While any of the systems can be affected by trisomy 18, the following areas are the most likely to be affected: craniofacial, musculoskeletal system, cardiac system, abdominal, and nervous system. More specifically, the following features are considered characteristic of trisomy 18: low-set ears, rocker bottom feet, clenched fists, and ventricular septal defect. Of particular interest is the associated cardiac defect, as surgical repairs of these defects have shown an improved survivability. In this article, the anatomical defects associated with each system are reviewed. Clin. Anat. 29:628-632, 2016. © 2016 Wiley Periodicals, Inc. PMID:27087248

  17. Penile embryology and anatomy.

    PubMed

    Yiee, Jenny H; Baskin, Laurence S

    2010-01-01

    Knowledge of penile embryology and anatomy is essential to any pediatric urologist in order to fully understand and treat congenital anomalies. Sex differentiation of the external genitalia occurs between the 7th and 17th weeks of gestation. The Y chromosome initiates male differentiation through the SRY gene, which triggers testicular development. Under the influence of androgens produced by the testes, external genitalia then develop into the penis and scrotum. Dorsal nerves supply penile skin sensation and lie within Buck's fascia. These nerves are notably absent at the 12 o'clock position. Perineal nerves supply skin sensation to the ventral shaft skin and frenulum. Cavernosal nerves lie within the corpora cavernosa and are responsible for sexual function. Paired cavernosal, dorsal, and bulbourethral arteries have extensive anastomotic connections. During erection, the cavernosal artery causes engorgement of the cavernosa, while the deep dorsal artery leads to glans enlargement. The majority of venous drainage occurs through a single, deep dorsal vein into which multiple emissary veins from the corpora and circumflex veins from the spongiosum drain. The corpora cavernosa and spongiosum are all made of spongy erectile tissue. Buck's fascia circumferentially envelops all three structures, splitting into two leaves ventrally at the spongiosum. The male urethra is composed of six parts: bladder neck, prostatic, membranous, bulbous, penile, and fossa navicularis. The urethra receives its blood supply from both proximal and distal directions. PMID:20602076

  18. Modified Recursive Hierarchical Segmentation of Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2006-01-01

    An algorithm and a computer program that implements the algorithm that performs recursive hierarchical segmentation (RHSEG) of data have been developed. While the current implementation is for two-dimensional data having spatial characteristics (e.g., image, spectral, or spectral-image data), the generalized algorithm also applies to three-dimensional or higher dimensional data and also to data with no spatial characteristics. The algorithm and software are modified versions of a prior RHSEG algorithm and software, the outputs of which often contain processing-window artifacts including, for example, spurious segmentation-image regions along the boundaries of processing-window edges.

  19. Segmented combustor

    NASA Technical Reports Server (NTRS)

    Halila, Ely E. (Inventor)

    1994-01-01

    A combustor liner segment includes a panel having four sidewalls forming a rectangular outer perimeter. A plurality of integral supporting lugs are disposed substantially perpendicularly to the panel and extend from respective ones of the four sidewalls. A plurality of integral bosses are disposed substantially perpendicularly to the panel and extend from respective ones of the four sidewalls, with the bosses being shorter than the lugs. In one embodiment, the lugs extend through supporting holes in an annular frame for mounting the liner segments thereto, with the bosses abutting the frame for maintaining a predetermined spacing therefrom.

  20. Obscuring Surface Anatomy in Volumetric Imaging Data

    PubMed Central

    Marcus, Daniel

    2012-01-01

    Identifying or sensitive anatomical features in MR and CT images used in research raise patient privacy concerns when such data are shared. In order to protect human subject privacy, we developed a method of anatomical surface modification and investigated the effects of such modification on image statistics and common neuroimaging processing tools. Common approaches to obscuring facial features typically remove large portions of the voxels. The approach described here instead focuses on blurring the anatomical surface, to avoid impinging on areas of interest and the hard edges that can confuse processing tools. The algorithm proceeds by extracting a thin boundary layer containing surface anatomy from a region of interest. This layer is then "stretched" and "flattened" to fit into a thin "box" volume. After smoothing along a plane roughly parallel to the anatomy surface, this volume is transformed back onto the boundary layer of the original data. The above method, named normalized anterior filtering, was coded in MATLAB and applied to a number of high-resolution MR and CT scans. To test its effect on automated tools, we compared the output of selected common skull stripping and MR gain field correction methods used on unmodified and obscured data. With this paper, we hope to improve the understanding of the effect of surface deformation approaches on the quality of de-identified data and to provide a useful de-identification tool for MR and CT acquisitions. PMID:22968671

  1. Evaluation metrics for bone segmentation in ultrasound

    NASA Astrophysics Data System (ADS)

    Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas

    2015-03-01

    Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging as ultrasound has no specific intensity characteristic of bones. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that aids in the development and comparison of such algorithms by quantitatively measuring segmentation performance in ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slice and its standard deviation are considered. The metrics provide a means of evaluating the accuracy of frames along the length of a volume. This aids in assessing the accuracy of the volume itself and the approach to image acquisition (positioning and frequency of frames). The framework was implemented as an open-source module of the 3D Slicer platform. The ground truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits in a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit from monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
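
    The framework in the record above scores each ultrasound frame with true-positive and false-negative metrics against manual ground truth and summarizes them per volume. The sketch below shows one plausible form of those per-frame rates and their per-volume mean and standard deviation; the function names and data are illustrative, not the 3D Slicer module's actual API.

```python
# Sketch of true-positive / false-negative rates per ultrasound frame against a
# manual ground truth, with per-volume mean and standard deviation.
import numpy as np

def frame_rates(auto_mask, truth_mask):
    """Return (true-positive rate, false-negative rate) for one frame."""
    auto, truth = auto_mask.astype(bool), truth_mask.astype(bool)
    positives = truth.sum()
    if positives == 0:
        return 1.0, 0.0                     # nothing to find in this frame
    tp = np.logical_and(auto, truth).sum()
    return tp / positives, (positives - tp) / positives

def volume_summary(auto_frames, truth_frames):
    rates = np.array([frame_rates(a, t) for a, t in zip(auto_frames, truth_frames)])
    return rates.mean(axis=0), rates.std(axis=0)

# Toy volume of 5 frames with a slightly shifted automatic segmentation.
truth = [np.zeros((32, 32), dtype=bool) for _ in range(5)]
auto = [np.zeros((32, 32), dtype=bool) for _ in range(5)]
for t, a in zip(truth, auto):
    t[10:20, 10:20] = True
    a[11:21, 10:20] = True
mean, std = volume_summary(auto, truth)
print("mean (TPR, FNR):", mean.round(3), "std:", std.round(3))
```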

  2. Adaptation of a support vector machine algorithm for segmentation and visualization of retinal structures in volumetric optical coherence tomography data sets

    PubMed Central

    Zawadzki, Robert J.; Fuller, Alfred R.; Wiley, David F.; Hamann, Bernd; Choi, Stacey S.; Werner, John S.

    2008-01-01

    Recent developments in Fourier-domain optical coherence tomography (Fd-OCT) have increased the acquisition speed of current ophthalmic Fd-OCT instruments sufficiently to allow the acquisition of volumetric data sets of human retinas in a clinical setting. The large size and three-dimensional (3D) nature of these data sets require that intelligent data processing, visualization, and analysis tools are used to take full advantage of the available information. Therefore, we have combined methods from volume visualization and data analysis in support of better visualization and diagnosis of Fd-OCT retinal volumes. Custom-designed 3D visualization and analysis software is used to view retinal volumes reconstructed from registered B-scans. We use a support vector machine (SVM) to perform semiautomatic segmentation of retinal layers and structures for subsequent analysis, including a comparison of measured layer thicknesses. We have modified the SVM to gracefully handle OCT speckle noise by treating it as a characteristic of the volumetric data. Our software has been tested successfully in clinical settings for its efficacy in assessing 3D retinal structures in healthy as well as diseased cases. Our tool facilitates diagnosis and treatment monitoring of retinal diseases. PMID:17867795
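
    The record above uses an SVM for semiautomatic segmentation of retinal layers, with speckle treated as part of the data rather than removed. Their feature design is not public in the abstract, so the sketch below only illustrates the general pattern of SVM-based voxel classification, with raw intensity plus a smoothed intensity as assumed features and a synthetic volume standing in for OCT data.

```python
# Minimal sketch of SVM-based voxel classification for layer segmentation.
# Features (intensity + local mean) and the synthetic data are illustrative
# stand-ins for the retinal OCT features used in the record.
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

rng = np.random.default_rng(0)
volume = rng.normal(0.2, 0.05, (16, 64, 64))
volume[:, 20:40, :] += 0.5                 # a bright "retinal layer" slab
labels = np.zeros(volume.shape, dtype=int)
labels[:, 20:40, :] = 1

# Per-voxel features: raw intensity and a smoothed (speckle-tolerant) intensity.
smoothed = ndimage.uniform_filter(volume, size=3)
X = np.stack([volume.ravel(), smoothed.ravel()], axis=1)
y = labels.ravel()

# Train on a random subset of voxels (as a user's annotations might provide).
idx = rng.choice(len(y), size=2000, replace=False)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X[idx], y[idx])
pred = clf.predict(X).reshape(volume.shape)
print("voxel accuracy:", round((pred == labels).mean(), 3))
```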

  3. Document segmentation via oblique cuts

    NASA Astrophysics Data System (ADS)

    Svendsen, Jeremy; Branzan-Albu, Alexandra

    2013-01-01

    This paper presents a novel solution for the layout segmentation of graphical elements in Business Intelligence documents. We propose a generalization of the recursive X-Y cut algorithm, which allows for cutting along arbitrary oblique directions. An intermediate processing step consisting of line and solid-region removal is also necessary due to the presence of decorative elements. The output of the proposed segmentation is a hierarchical structure which allows for the identification of primitives in pie and bar charts. The algorithm was tested on a database composed of charts from business documents. Results are very promising.
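
    The oblique generalization is the paper's contribution and is not reproduced here; the sketch below shows only the classical axis-aligned recursive X-Y cut that it extends, splitting a binary page at the widest empty gap in the row or column projection profiles. Gap thresholds and the toy page are illustrative assumptions.

```python
# Sketch of the classical (axis-aligned) recursive X-Y cut that the record
# generalizes to oblique directions. Gap thresholds are illustrative.
import numpy as np

def largest_gap(profile, min_gap=3):
    """Centre index of the widest run of empty rows/columns in a projection profile."""
    best = (0, None)                       # (gap length, gap centre index)
    start = None
    for i, v in enumerate(profile):
        if v == 0 and start is None:
            start = i
        elif v != 0 and start is not None:
            if i - start > best[0]:
                best = (i - start, (start + i) // 2)
            start = None
    return best[1] if best[0] >= min_gap else None

def xy_cut(binary, top=0, left=0, regions=None):
    """Recursively split a binary page image at the widest horizontal/vertical gap."""
    if regions is None:
        regions = []
    row_cut = largest_gap(binary.sum(axis=1))
    col_cut = largest_gap(binary.sum(axis=0))
    if row_cut is not None:
        xy_cut(binary[:row_cut], top, left, regions)
        xy_cut(binary[row_cut:], top + row_cut, left, regions)
    elif col_cut is not None:
        xy_cut(binary[:, :col_cut], top, left, regions)
        xy_cut(binary[:, col_cut:], top, left + col_cut, regions)
    elif binary.any():
        regions.append((top, left, binary.shape[0], binary.shape[1]))
    return regions

# Toy "page" with two blocks separated by white space.
page = np.zeros((60, 60), dtype=int)
page[5:25, 5:55] = 1
page[35:55, 5:25] = 1
print(xy_cut(page))
```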

  4. Multiatlas segmentation as nonparametric regression.

    PubMed

    Awate, Suyash P; Whitaker, Ross T

    2014-09-01

    This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed as label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator's convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems. PMID:24802528

  5. An anatomy precourse enhances student learning in veterinary anatomy.

    PubMed

    McNulty, Margaret A; Stevens-Sparks, Cathryn; Taboada, Joseph; Daniel, Annie; Lazarus, Michelle D

    2016-07-01

    Veterinary anatomy is often a source of trepidation for many students. Currently, professional veterinary programs within the United States, like medical curricula, have no admission requirement for anatomy as a prerequisite course. The purpose of the current study was to evaluate the impact of a week-long precourse in veterinary anatomy on both objective student performance and subjective student perceptions of the precourse educational methods. Incoming first-year veterinary students in the Louisiana State University School of Veterinary Medicine professional curriculum were asked to participate in a free precourse before the start of the semester, covering the musculoskeletal structures of the canine thoracic limb. Students learned the material either via dissection only, instructor-led demonstrations only, or a combination of both techniques. Outcome measures included student performance on examinations throughout the first anatomy course of the professional curriculum as compared with those who did not participate in the precourse. This study found that those who participated in the precourse did significantly better on examinations within the professional anatomy course compared with those who did not participate. Notably, this significant improvement was also identified on the examination where both groups were exposed to the material for the first time together, indicating that exposure to a small portion of veterinary anatomy can impact learning of anatomical structures beyond the immediate scope of the material previously learned. Subjective data evaluation indicated that the precourse was well received and that students preferred guided learning via demonstrations in addition to dissection, as opposed to either method alone. Anat Sci Educ 9: 344-356. © 2015 American Association of Anatomists. PMID:26669269

  6. The place of surface anatomy in the medical literature and undergraduate anatomy textbooks.

    PubMed

    Azer, Samy A

    2013-01-01

    The aims of this review were to examine the place of surface anatomy in the medical literature, particularly the methods and approaches used in teaching surface and living anatomy, and to assess commonly used anatomy textbooks in regard to their surface anatomy contents. PubMed and MEDLINE databases were searched using the following keywords: "surface anatomy," "living anatomy," "teaching surface anatomy," "bony landmarks," "peer examination" and "dermatomes". The percentage of pages covering surface anatomy in each textbook was calculated, as well as the number of images covering surface anatomy. Clarity, quality and adequacy of surface anatomy contents were also examined. The search identified 22 research papers addressing methods used in teaching surface anatomy, 31 papers that can help in the improvement of surface anatomy curricula, and 12 anatomy textbooks. These teaching methods included: body painting, peer volunteer surface anatomy, use of a living anatomy model, real-time ultrasound, virtual (visible) human dissector (VHD), full-body digital x-ray of cadavers (Lodox(®) Statscan(®) images) combined with palpating landmarks on peers and the cadaver, as well as the use of collaborative, contextual and self-directed learning. Nineteen of these studies were published in the period from 2006 to 2013. The 31 papers covered evidence-based and clinically applied surface anatomy. The percentage of surface anatomy in textbooks' contents ranged from 0% to 6.2%, with an average of 3.4%. The number of medical illustrations on surface anatomy varied from 0 to 135. In conclusion, although there has been a progressive increase in publications addressing methods used in teaching surface anatomy over the last six to seven years, most anatomy textbooks do not provide students with adequate information about surface anatomy. Only three textbooks provided a solid explanation and foundation for understanding surface anatomy. PMID:23650274

  7. Spinal Cord Segmentation by One Dimensional Normalized Template Matching: A Novel, Quantitative Technique to Analyze Advanced Magnetic Resonance Imaging Data.

    PubMed

    Cadotte, Adam; Cadotte, David W; Livne, Micha; Cohen-Adad, Julien; Fleet, David; Mikulis, David; Fehlings, Michael G

    2015-01-01

    Spinal cord segmentation is a developing area of research intended to aid the processing and interpretation of advanced magnetic resonance imaging (MRI). For example, high resolution three-dimensional volumes can be segmented to provide a measurement of spinal cord atrophy. Spinal cord segmentation is difficult due to the variety of MRI contrasts and the variation in human anatomy. In this study we propose a new method of spinal cord segmentation based on one-dimensional template matching and provide several metrics that can be used to compare with other segmentation methods. A set of ground-truth data from 10 subjects was manually segmented by two different raters. These ground truth data formed the basis of the segmentation algorithm. A user was required to manually initialize the spinal cord center-line on new images, taking less than one minute. Template matching was used to segment the new cord and a refined center line was calculated based on multiple centroids within the segmentation. Arc distances down the spinal cord and cross-sectional areas were calculated. Inter-rater validation was performed by comparing two manual raters (n = 10). Semi-automatic validation was performed by comparing the two manual raters to the semi-automatic method (n = 10). Comparing the semi-automatic method to one of the raters yielded a Dice coefficient of 0.91 ± 0.02 for ten subjects, a mean distance between spinal cord center lines of 0.32 ± 0.08 mm, and a Hausdorff distance of 1.82 ± 0.33 mm. The absolute variation in cross-sectional area was comparable for the semi-automatic method versus manual segmentation when compared to inter-rater manual segmentation. The results demonstrate that this novel segmentation method performs as well as a manual rater for most segmentation metrics. It offers a new approach to study spinal cord disease and to quantitatively track changes within the spinal cord in an individual case and across cohorts of subjects. PMID:26445367
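
    The method in the record above is built on one-dimensional normalized template matching along intensity profiles; the actual templates come from the authors' manually segmented training data and are not reproduced here. The sketch below only shows 1D normalized cross-correlation of a template slid along a profile, with a synthetic profile and template as illustrative stand-ins.

```python
# Sketch of one-dimensional normalized template matching: slide a template
# along an intensity profile and report the best normalized cross-correlation.
# The synthetic profile and template are illustrative, not the paper's data.
import numpy as np

def normalized_cross_correlation(profile, template):
    """NCC of the template at every valid offset along the profile."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(template)
    scores = np.empty(len(profile) - n + 1)
    for i in range(len(scores)):
        window = profile[i:i + n]
        w = (window - window.mean()) / (window.std() + 1e-12)
        scores[i] = float(np.dot(w, t)) / n
    return scores

# Toy profile: noisy background with a bright cord-like bump; template is an ideal bump.
profile = 10 + 2 * np.random.default_rng(0).normal(size=200)
profile[90:110] += 40 * np.exp(-((np.arange(20) - 10) ** 2) / 30.0)
template = 40 * np.exp(-((np.arange(20) - 10) ** 2) / 30.0)
scores = normalized_cross_correlation(profile, template)
print("best match at offset", int(scores.argmax()))   # expected near 90
```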

  8. Automated area segmentation for ocean bottom surveys

    NASA Astrophysics Data System (ADS)

    Hyland, John C.; Smith, Cheryl M.

    2015-05-01

    In practice, environmental information about an ocean bottom area to be searched using SONAR is often known a priori to some coarse level of resolution. The SONAR search sensor then typically has a different performance characterization function for each environmental classification. Large ocean bottom surveys using search SONAR can pose some difficulties when the environmental conditions vary significantly over the search area because search planning tools cannot adequately segment the area into sub-regions of homogeneous search sensor performance. Such segmentation is critically important to unmanned search vehicles; homogeneous bottom segmentation will result in more accurate predictions of search performance and area coverage rate. The Naval Surface Warfare Center, Panama City Division (NSWC PCD) has developed an automated area segmentation algorithm that subdivides the mission area under the constraint that the variation of the search sensor's performance within each sub-mission area cannot exceed a specified threshold, thereby creating sub-regions of homogeneous sensor performance. The algorithm also calculates a new, composite sensor performance function for each sub-mission area. The technique accounts for practical constraints such as enforcing a minimum sub-mission area size and requiring sub-mission areas to be rectangular. Segmentation occurs both across the rows and down the columns of the mission area. Ideally, mission planning should consider both segmentation directions and choose the one with the more favorable result. The Automated Area Segmentation Algorithm was tested using two a priori bottom segmentations: rectangular and triangular; and two search sensor configurations: a set of three bi-modal curves and a set of three uni-modal curves. For each of these four scenarios, the Automated Area Segmentation Algorithm automatically partitioned the mission area across rows and down columns to create regions with homogeneous sensor performance. The
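
    As a rough, simplified illustration of the splitting constraint described above, the sketch below segments a single one-dimensional row of sensor-performance values into contiguous sub-regions whose performance spread is kept below a threshold, subject to a minimum sub-region length. The function and its parameters are hypothetical; the actual algorithm operates on rows and columns of a two-dimensional mission area with rectangular sub-regions and also derives composite performance functions.

      import numpy as np

      def segment_row(perf, max_spread, min_len):
          """Greedily split a 1-D row of sensor-performance values into contiguous
          sub-regions, closing a sub-region when adding the next value would push
          its max-min spread over max_spread (the minimum-length constraint takes
          precedence).  Returns a list of (start, stop) index pairs."""
          segments, start = [], 0
          lo = hi = perf[0]
          for i, v in enumerate(perf[1:], start=1):
              lo, hi = min(lo, v), max(hi, v)
              if hi - lo > max_spread and i - start >= min_len:
                  segments.append((start, i))
                  start, lo, hi = i, v, v
          segments.append((start, len(perf)))
          return segments

      # Three environmental provinces with different sensor performance levels.
      perf = np.concatenate([np.full(40, 0.9), np.full(35, 0.6), np.full(25, 0.8)])
      print(segment_row(perf, max_spread=0.1, min_len=10))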

  9. FISICO: Fast Image SegmentatIon COrrection

    PubMed Central

    Valenzuela, Waldo; Ferguson, Stephen J.; Ignasiak, Dominika; Diserens, Gaëlle; Häni, Levin; Wiest, Roland; Vermathen, Peter; Boesch, Chris

    2016-01-01

    Background and Purpose In clinical diagnosis, medical image segmentation plays a key role in the analysis of pathological regions. Despite advances in automatic and semi-automatic segmentation techniques, time-effective correction tools are commonly needed to improve segmentation results. Therefore, these tools must provide faster corrections with a lower number of interactions, and a user-independent solution to reduce the time frame between image acquisition and diagnosis. Methods We present a new interactive method for correcting image segmentations. Our method provides 3D shape corrections through 2D interactions. This approach enables intuitive and natural corrections of 3D segmentation results. The developed method has been implemented into a software tool and has been evaluated for the task of lumbar muscle and knee joint segmentations from MR images. Results Experimental results show that full segmentation corrections could be performed within an average correction time of 5.5±3.3 minutes and an average of 56.5±33.1 user interactions, while maintaining the quality of the final segmentation result within an average Dice coefficient of 0.92±0.02 for both anatomies. In addition, for users with different levels of expertise, our method reduces the correction time from 38±19.2 minutes to 6.4±4.3 minutes and the number of interactions from 339±157.1 to 67.7±39.6, respectively. PMID:27224061

  10. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking. PMID:20224119

  11. FIST: a fast interactive segmentation technique

    NASA Astrophysics Data System (ADS)

    Padfield, Dirk; Bhotika, Rahul; Natanzon, Alexander

    2015-03-01

    Radiologists are required to read thousands of patient images every day, and any tools that can improve their workflow and help them make efficient and accurate measurements are of great value. Such an interactive tool must be intuitive to use, and we have found that users are accustomed to clicking on the contour of the object for segmentation and would like the final segmentation to pass through these points. The tool must also be fast to enable real-time interactive feedback. To meet these needs, we present a segmentation workflow that enables an intuitive method for fast interactive segmentation of 2D and 3D objects. Given simple user clicks on the contour of an object in one 2D view, the algorithm generates foreground and background seeds and computes foreground and background distributions that are used to segment the object in 2D. It then propagates the information to the two orthogonal planes in a 3D volume and segments all three 2D views. The segmentation is automatically updated as the user continues to add points around the contour, and the algorithm is re-run using the total set of points. Based on the segmented objects in these three views, the algorithm then computes a 3D segmentation of the object. This process requires only limited user interaction to segment complex shapes and significantly improves the workflow of the user.
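
    A rough sketch of the seed-and-distribution idea summarized above: the contour clicks define a polygon whose interior provides foreground seeds, a ring just outside the polygon provides background seeds, and every pixel is then labeled by the likelier of two Gaussian intensity models. This is a simplified stand-in rather than the published FIST algorithm, and all names, parameters, and the toy image are illustrative.

      import numpy as np
      from scipy import ndimage
      from skimage.draw import polygon

      def segment_from_clicks(image, clicks, band=6):
          """Label pixels as object/background from a handful of contour clicks:
          polygon interior -> foreground seeds, an outside ring -> background seeds,
          then per-pixel comparison of the two fitted Gaussian intensity models."""
          rr, cc = polygon(clicks[:, 0], clicks[:, 1], shape=image.shape)
          fg = np.zeros(image.shape, bool)
          fg[rr, cc] = True
          bg = ndimage.binary_dilation(fg, iterations=band) & ~ndimage.binary_dilation(fg, iterations=2)

          def log_likelihood(seed_mask):
              mu, sd = image[seed_mask].mean(), image[seed_mask].std() + 1e-6
              return -0.5 * ((image - mu) / sd) ** 2 - np.log(sd)

          return log_likelihood(fg) > log_likelihood(bg)

      # Toy 2D view: a bright square with four clicks placed roughly on its corners.
      img = np.full((100, 100), 50.0)
      img[30:70, 30:70] = 200.0
      img += np.random.default_rng(0).normal(0, 5, img.shape)
      clicks = np.array([[31, 31], [31, 69], [69, 69], [69, 31]])
      print(segment_from_clicks(img, clicks).sum())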

  12. [Segmental neurofibromatosis].

    PubMed

    Zulaica, A; Peteiro, C; Pereiro, M; Pereiro Ferreiros, M; Quintas, C; Toribio, J

    1989-01-01

    Four cases of segmental neurofibromatosis (SNF) are reported. It is a rare entity considered to be a localized variant of neurofibromatosis (NF)-Riccardi's type V. Two cases are male and two female. The lesions are located on the head in one patient and on the trunk in the other three cases. Neither a family history nor transmission to progeny was observed. The remaining organs are unaffected. PMID:2502696

  13. Anatomy 1. Introduction to Human Anatomy: A Functional Approach.

    ERIC Educational Resources Information Center

    Silverman, Robert M.

    An introductory human anatomy course designed to provide the basic understanding of human structure necessary for further study in allied health and related fields is described. First, a general course description provides an overview and discusses the course's place within the science curriculum, noting that it does not meet the general education…

  14. The Anatomy of Anatomy: A Review for Its Modernization

    ERIC Educational Resources Information Center

    Sugand, Kapil; Abrahams, Peter; Khurana, Ashish

    2010-01-01

    Anatomy has historically been a cornerstone in medical education regardless of nation or specialty. Until recently, dissection and didactic lectures were its sole pedagogy. Teaching methodology has been revolutionized with more reliance on models, imaging, simulation, and the Internet to further consolidate and enhance the learning experience.…

  15. Anatomy Adventure: A Board Game for Enhancing Understanding of Anatomy

    ERIC Educational Resources Information Center

    Anyanwu, Emeka G.

    2014-01-01

    Certain negative factors such as fear, loss of concentration and interest in the course, lack of confidence, and undue stress have been associated with the study of anatomy. These are factors most often provoked by the unusually large curriculum, nature of the course, and the psychosocial impact of dissection. As a palliative measure, Anatomy…

  16. Entangled decision forests and their application for semantic segmentation of CT images.

    PubMed

    Montillo, Albert; Shotton, Jamie; Winn, John; Iglesias, Juan Eugenio; Metaxas, Dimitri; Criminisi, Antonio

    2011-01-01

    This work addresses the challenging problem of simultaneously segmenting multiple anatomical structures in highly varied CT scans. We propose the entangled decision forest (EDF) as a new discriminative classifier which augments the state of the art decision forest, resulting in higher prediction accuracy and shortened decision time. Our main contribution is two-fold. First, we propose entangling the binary tests applied at each tree node in the forest, such that the test result can depend on the result of tests applied earlier in the same tree and at image points offset from the voxel to be classified. This is demonstrated to improve accuracy and capture long-range semantic context. Second, during training, we propose injecting randomness in a guided way, in which node feature types and parameters are randomly drawn from a learned (nonuniform) distribution. This further improves classification accuracy. We assess our probabilistic anatomy segmentation technique using a labeled database of CT image volumes of 250 different patients from various scan protocols and scanner vendors. In each volume, 12 anatomical structures have been manually segmented. The database comprises highly varied body shapes and sizes, a wide array of pathologies, scan resolutions, and diverse contrast agents. Quantitative comparisons with state of the art algorithms demonstrate both superior test accuracy and computational efficiency. PMID:21761656

  17. Multi-contrast submillimetric 3 Tesla hippocampal subfield segmentation protocol and dataset

    PubMed Central

    Kulaga-Yoskovitz, Jessie; Bernhardt, Boris C.; Hong, Seok-Jun; Mansi, Tommaso; Liang, Kevin E.; van der Kouwe, Andre J.W.; Smallwood, Jonathan; Bernasconi, Andrea; Bernasconi, Neda

    2015-01-01

    The hippocampus is composed of distinct anatomical subregions that participate in multiple cognitive processes and are differentially affected in prevalent neurological and psychiatric conditions. Advances in high-field MRI allow for the non-invasive identification of hippocampal substructure. These approaches, however, demand time-consuming manual segmentation that relies heavily on anatomical expertise. Here, we share manual labels and associated high-resolution MRI data (MNI-HISUB25; submillimetric T1- and T2-weighted images, detailed sequence information, and stereotaxic probabilistic anatomical maps) based on 25 healthy subjects. Data were acquired on a widely available 3 Tesla MRI system using a 32 phased-array head coil. The protocol divided the hippocampal formation into three subregions: subicular complex, merged Cornu Ammonis 1, 2 and 3 (CA1-3) subfields, and CA4-dentate gyrus (CA4-DG). Segmentation was guided by consistent intensity and morphology characteristics of the densely myelinated molecular layer together with few geometry-based boundaries flexible to overall mesiotemporal anatomy, and achieved excellent intra-/inter-rater reliability (Dice index ≥90/87%). The dataset can inform neuroimaging assessments of the mesiotemporal lobe and help to develop segmentation algorithms relevant for basic and clinical neurosciences. PMID:26594378

  18. Automatic Segmentation of Drosophila Neural Compartments Using GAL4 Expression Data Reveals Novel Visual Pathways.

    PubMed

    Panser, Karin; Tirian, Laszlo; Schulze, Florian; Villalba, Santiago; Jefferis, Gregory S X E; Bühler, Katja; Straw, Andrew D

    2016-08-01

    Identifying distinct anatomical structures within the brain and developing genetic tools to target them are fundamental steps for understanding brain function. We hypothesize that enhancer expression patterns can be used to automatically identify functional units such as neuropils and fiber tracts. We used two recent, genome-scale Drosophila GAL4 libraries and associated confocal image datasets to segment large brain regions into smaller subvolumes. Our results (available at https://strawlab.org/braincode) support this hypothesis because regions with well-known anatomy, namely the antennal lobes and central complex, were automatically segmented into familiar compartments. The basis for the structural assignment is clustering of voxels based on patterns of enhancer expression. These initial clusters are agglomerated to make hierarchical predictions of structure. We applied the algorithm to central brain regions receiving input from the optic lobes. Based on the automated segmentation and manual validation, we can identify and provide promising driver lines for 11 previously identified and 14 novel types of visual projection neurons and their associated optic glomeruli. The same strategy can be used in other brain regions and likely other species, including vertebrates. PMID:27426516
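
    A schematic two-stage sketch of the clustering idea described above: voxels are first clustered by their expression pattern across driver lines, and the resulting cluster centroids are then agglomerated into a hierarchy that can be cut at a coarser level. The synthetic data, cluster counts, and cut level are assumptions for illustration, not the authors' settings.

      import numpy as np
      from sklearn.cluster import KMeans
      from scipy.cluster.hierarchy import linkage, fcluster

      # Hypothetical data: one row per voxel, one column per GAL4 driver line,
      # holding that line's expression level at the voxel (n_voxels x n_lines).
      rng = np.random.default_rng(0)
      expression = rng.random((5000, 40))

      # Stage 1: cluster voxels by their expression pattern.
      km = KMeans(n_clusters=60, n_init=10, random_state=0).fit(expression)
      initial_labels = km.labels_

      # Stage 2: agglomerate the initial cluster centroids into a hierarchy and cut
      # it to obtain coarser, nested predictions of structure.
      tree = linkage(km.cluster_centers_, method="ward")
      coarse_of_cluster = fcluster(tree, t=12, criterion="maxclust")   # 12 coarse compartments
      coarse_labels = coarse_of_cluster[initial_labels]                # map back to voxels
      print(np.bincount(coarse_labels))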

  19. Segmentation of Unstructured Datasets

    NASA Technical Reports Server (NTRS)

    Bhat, Smitha

    1996-01-01

    Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.

  20. Efficient threshold for volumetric segmentation

    NASA Astrophysics Data System (ADS)

    Burdescu, Dumitru D.; Brezovan, Marius; Stanescu, Liana; Stoica Spahiu, Cosmin; Ebanca, Daniel

    2015-07-01

    Image segmentation plays a crucial role in the effective understanding of digital images. However, research into a general-purpose segmentation algorithm that suits a variety of applications is still very much active. Among the many approaches to image segmentation, the graph-based approach is gaining popularity primarily due to its ability to reflect global image properties. Volumetric image segmentation can simply result in an image partition composed of relevant regions, but the most fundamental challenge in a segmentation algorithm is to precisely define the volumetric extent of some object, which may be represented by the union of multiple regions. The aim of this paper is to present a new method, with an efficient threshold, for detecting visual objects in color volumetric images. We present a unified framework for volumetric image segmentation and contour extraction that uses a virtual tree-hexagonal structure defined on the set of the image voxels. The advantage of using a virtual tree-hexagonal network superposed over the initial image voxels is that it reduces the execution time and the memory space used, without losing the initial resolution of the image.

  1. Out-of-atlas likelihood estimation using multi-atlas segmentation

    PubMed Central

    Asman, Andrew J.; Chambless, Lola B.; Thompson, Reid C.; Landman, Bennett A.

    2013-01-01

    Purpose: Multi-atlas segmentation has been shown to be highly robust and accurate across an extraordinary range of potential applications. However, it is limited to the segmentation of structures that are anatomically consistent across a large population of potential target subjects (i.e., multi-atlas segmentation is limited to “in-atlas” applications). Herein, the authors propose a technique to determine the likelihood that a multi-atlas segmentation estimate is representative of the problem at hand, and, therefore, identify anomalous regions that are not well represented within the atlases. Methods: The authors derive a technique to estimate the out-of-atlas (OOA) likelihood for every voxel in the target image. These estimated likelihoods can be used to determine and localize the probability of an abnormality being present on the target image. Results: Using a collection of manually labeled whole-brain datasets, the authors demonstrate the efficacy of the proposed framework on two distinct applications. First, the authors demonstrate the ability to accurately and robustly detect malignant gliomas in the human brain—an aggressive class of central nervous system neoplasms. Second, the authors demonstrate how this OOA likelihood estimation process can be used within a quality control context for diffusion tensor imaging datasets to detect large-scale imaging artifacts (e.g., aliasing and image shading). Conclusions: The proposed OOA likelihood estimation framework shows great promise for robust and rapid identification of brain abnormalities and imaging artifacts using only weak dependencies on anomaly morphometry and appearance. The authors envision that this approach would allow for application-specific algorithms to focus directly on regions of high OOA likelihood, which would (1) reduce the need for human intervention, and (2) reduce the propensity for false positives. Using the dual perspective, this technique would allow for algorithms to focus on

  2. Shape regression machine and efficient segmentation of left ventricle endocardium from 2D B-mode echocardiogram.

    PubMed

    Zhou, Shaohua Kevin

    2010-08-01

    We present a machine learning approach called shape regression machine (SRM) for efficient segmentation of an anatomic structure that exhibits a deformable shape in a medical image, e.g., left ventricle endocardial wall in an echocardiogram. The SRM achieves efficient segmentation via statistical learning of the interrelations among shape, appearance, and anatomy, which are exemplified by an annotated database. The SRM is a two-stage approach. In the first stage that estimates a rigid shape to solve an automatic initialization problem, it derives a regression solution to object detection that needs just one scan in principle and a sparse set of scans in practice, avoiding the exhaustive scanning required by the state-of-the-art classification-based detection approach while yielding comparable detection accuracy. In the second stage that estimates the nonrigid shape, it again learns a nonlinear regressor to directly associate nonrigid shape with image appearance. The underpinning of both stages is a novel image-based boosting ridge regression (IBRR) method that enables multivariate, nonlinear modeling and accommodates fast evaluation. We demonstrate the efficiency and effectiveness of the SRM using experiments on segmenting the left ventricle endocardium from a B-mode echocardiogram of apical four chamber view. The proposed algorithm is able to automatically detect and accurately segment the LV endocardial border in about 120ms. PMID:20494610

  3. TH-E-BRE-04: An Online Replanning Algorithm for VMAT

    SciTech Connect

    Ahunbay, E; Li, X; Moreau, M

    2014-06-15

    Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening filter (FF) and flattening filter free (FFF) beams. Methods: A software tool was developed to interface with a VMAT planning system (Monaco, Elekta), enabling the output of detailed beam/machine parameters of original VMAT plans generated based on planning CTs for FF or FFF beams. A SAM algorithm, previously developed for fixed-beam IMRT, was modified to allow the algorithm to correct for interfractional variations (e.g., setup error, organ motion and deformation) by morphing apertures based on the geometric relationship between the beam's eye view of the anatomy from the planning CT and that from the daily CT for each control point. The algorithm was tested using daily CTs acquired using an in-room CT during daily IGRT for representative prostate cancer cases along with their planning CTs. The algorithm allows for restricted MLC leaf travel distance between control points of the VMAT delivery to prevent SAM from increasing leaf travel, and therefore treatment delivery time. Results: The VMAT plans adapted to the daily CT by SAM were found to improve the dosimetry relative to the IGRT repositioning plans for both FF and FFF beams. For the adaptive plans, the changes in leaf travel distance between control points were < 1 cm for 80% of the control points with no restriction. When restricted to the original plans' maximum travel distance, the dosimetric effect was minimal. The adaptive plans were delivered successfully with similar delivery times as the original plans. The execution of the SAM algorithm was < 10 seconds. Conclusion: The SAM algorithm can quickly generate deliverable online-adaptive VMAT plans based on the anatomy of the day for both FF and FFF beams.

  4. Modeling segmentation performance in NV-IPM

    NASA Astrophysics Data System (ADS)

    Lies, Micah J.; Jacobs, Eddie L.; Brown, Jeremy B.

    2014-05-01

    Imaging sensors produce images whose primary use is to convey information to human operators. However, their proliferation has resulted in an overload of information. As a result, computational algorithms are being increasingly implemented to simplify an operator's task or to eliminate the human operator altogether. Predicting the effect of algorithms on task performance is currently cumbersome, requiring estimates of the effects of an algorithm on the blurring and noise, and "shoe-horning" these effects into existing models. With the increasing use of automated algorithms with imaging sensors, a fully integrated approach is desired. While specific implementation algorithms differ, general tasks can be identified that form building blocks of a wide range of possible algorithms. Those tasks are segmentation of objects from the spatio-temporal background, object tracking over time, feature extraction, and transformation of features into human usable information. In this paper, research is conducted with the purpose of developing a general performance model for segmentation algorithms based on image quality. A database of pristine imagery has been developed in which there is a wide variety of clearly defined regions with respect to shape, size, and inherent contrast. Both synthetic and "natural" images make up the database. Each image is subjected to various amounts of blur and noise. Metrics for the accuracy of segmentation have been developed and measured for each image and segmentation algorithm. Using the computed metric values and the known values of blur and noise, a model of performance for segmentation is being developed. Preliminary results are reported.

  5. On the Anatomy of Understanding

    ERIC Educational Resources Information Center

    Wilhelmsson, Niklas; Dahlgren, Lars Owe; Hult, Hakan; Josephson, Anna

    2011-01-01

    In search for the nature of understanding of basic science in a clinical context, eight medical students were interviewed, with a focus on their view of the discipline of anatomy, in their fourth year of study. Interviews were semi-structured and took place just after the students had finished their surgery rotations. Phenomenographic analysis was…

  6. Anatomy of Hepatic Resectional Surgery.

    PubMed

    Lowe, Michael C; D'Angelica, Michael I

    2016-04-01

    Liver anatomy can be variable, and understanding of anatomic variations is crucial to performing hepatic resections, particularly parenchymal-sparing resections. Anatomic knowledge is a critical prerequisite for effective hepatic resection with minimal blood loss, parenchymal preservation, and optimal oncologic outcome. Each anatomic resection has pitfalls, about which the operating surgeon should be aware and comfortable managing intraoperatively. PMID:27017858

  7. Functional Anatomy of the Shoulder

    PubMed Central

    Terry, Glenn C.; Chopp, Thomas M.

    2000-01-01

    Objective: Movements of the human shoulder represent the result of a complex dynamic interplay of structural bony anatomy and biomechanics, static ligamentous and tendinous restraints, and dynamic muscle forces. Injury to 1 or more of these components through overuse or acute trauma disrupts this complex interrelationship and places the shoulder at increased risk. A thorough understanding of the functional anatomy of the shoulder provides the clinician with a foundation for caring for athletes with shoulder injuries. Data Sources: We searched MEDLINE for the years 1980 to 1999, using the key words “shoulder,” “anatomy,” “glenohumeral joint,” “acromioclavicular joint,” “sternoclavicular joint,” “scapulothoracic joint,” and “rotator cuff.” Data Synthesis: We examine human shoulder movement by breaking it down into its structural static and dynamic components. Bony anatomy, including the humerus, scapula, and clavicle, is described, along with the associated articulations, providing the clinician with the structural foundation for understanding how the static ligamentous and dynamic muscle forces exert their effects. Commonly encountered athletic injuries are discussed from an anatomical standpoint. Conclusions/Recommendations: Shoulder injuries represent a significant proportion of athletic injuries seen by the medical provider. A functional understanding of the dynamic interplay of biomechanical forces around the shoulder girdle is necessary and allows for a more structured approach to the treatment of an athlete with a shoulder injury. PMID:16558636

  8. Curriculum Guidelines for Microscopic Anatomy.

    ERIC Educational Resources Information Center

    Journal of Dental Education, 1993

    1993-01-01

    The American Association of Dental Schools' guidelines for curricula in microscopic anatomy offer an overview of the histology curriculum, note primary educational goals, outline specific content for general and oral histology, suggest prerequisites, and make recommendations for sequencing. Appropriate faculty and facilities are also suggested.…

  9. Anatomy of trisomy 12.

    PubMed

    Roberts, Wallisa; Zurada, Anna; Zurada-Zielińska, Agnieszka; Gielecki, Jerzy; Loukas, Marios

    2016-07-01

    Trisomy 12 is a rare aneuploidy and fetuses with this defect tend to spontaneously abort. However, mosaicism allows this anomaly to manifest itself in live births. Due to the fact that mosaicism represents a common genetic abnormality, trisomy 12 is encountered more frequently than expected at a rate of 1 in 500 live births. Thus, it is imperative that medical practitioners are aware of this aneuploidy. Moreover, this genetic disorder may result from a complete or partial duplication of chromosome 12. A partial duplication may refer to a specific segment on the chromosome, or one of the arms. On the other hand, a complete duplication refers to duplication of both arms of chromosome 12. The combination of mosaicism and the variable duplication sites has led to variable phenotypes ranging from normal phenotype to Potter sequence to gross physical defects of the various organ systems. This article provides a review of the common anatomical variation of the different types of trisomy 12. This review revealed that further documentation is needed for trisomy 12q and complete trisomy 12 to clearly delineate the constellation of anomalies that characterize each genetic defect. Clin. Anat. 29:633-637, 2016. © 2016 Wiley Periodicals, Inc. PMID:27087350

  10. Lymphatic anatomy and biomechanics

    PubMed Central

    Negrini, Daniela; Moriondo, Andrea

    2011-01-01

    Abstract Lymph formation is driven by hydraulic pressure gradients developing between the interstitial tissue and the lumen of initial lymphatics. While in vessels equipped with lymphatic smooth muscle cells these gradients are determined by well-synchronized spontaneous contractions of vessel segments, initial lymphatics devoid of smooth muscles rely on tissue motion to form lymph and propel it along the network. Lymphatics supplying highly moving tissues, such as skeletal muscle, diaphragm or thoracic tissues, undergo cyclic compression and expansion of their lumen imposed by local stresses arising in the tissue as a consequence of cardiac and respiratory activities. Active muscle contraction and not passive tissue displacement is required to support an efficient lymphatic drainage, as suggested by the fact that the respiratory activity promotes lymph formation during spontaneous, but not mechanical ventilation. The mechanical properties of the lymphatic wall and of the surrounding tissue also play an important role in lymphatic function. Modelling of stress distribution in the lymphatic wall suggests that compliant vessels behave as reservoirs accommodating absorbed interstitial fluid, while lymphatics with stiffer walls, taking advantage of a more efficient transmission of tissue stresses to the lymphatic lumen, propel fluid through the lumen of the lymphatic circuit. PMID:21486777

  11. Segmental neurofibromatosis.

    PubMed

    Sobjanek, Michał; Dobosz-Kawałko, Magdalena; Michajłowski, Igor; Pęksa, Rafał; Nowicki, Roman

    2014-12-01

    Segmental neurofibromatosis, or type V neurofibromatosis, is a rare genodermatosis characterized by neurofibromas and café-au-lait spots limited to a circumscribed body region. The disease may be associated with systemic involvement and malignancies. The disorder has not been reported yet in the Polish medical literature. A 63-year-old Caucasian woman presented with a 20-year history of multiple, flesh colored, dome-shaped, soft to firm nodules situated in the right lumbar region. A histopathologic evaluation of three excised tumors revealed neurofibromas. No neurological or ophthalmologic symptoms of neurofibromatosis were found. PMID:25610358

  12. Segmental neurofibromatosis.

    PubMed

    Adigun, Chris G; Stein, Jennifer

    2011-01-01

    A 59-year-old man presented for evaluation and excision of non-tender, fleshy nodules that were arranged in a dermatomal distribution from the left side of the chest to the left axilla. A biopsy specimen of a nodule was consistent with a neurofibroma. Owing to the lack of other cutaneous findings, the lack of a family history of neurofibromatosis, and the dermatomal distribution of the neurofibromas, this patient met the criteria for a diagnosis of segmental neurofibromatosis (SNF) according to Riccardi's definition of SNF and classification of neurofibromatosis. Because the patient has no complications of neurofibromatosis 1 no medical treatment is required. PMID:22031651

  13. Segmental neurofibromatosis

    PubMed Central

    Dobosz-Kawałko, Magdalena; Michajłowski, Igor; Pęksa, Rafał; Nowicki, Roman

    2014-01-01

    Segmental neurofibromatosis, or type V neurofibromatosis, is a rare genodermatosis characterized by neurofibromas and café-au-lait spots limited to a circumscribed body region. The disease may be associated with systemic involvement and malignancies. The disorder has not been reported yet in the Polish medical literature. A 63-year-old Caucasian woman presented with a 20-year history of multiple, flesh colored, dome-shaped, soft to firm nodules situated in the right lumbar region. A histopathologic evaluation of three excised tumors revealed neurofibromas. No neurological or ophthalmologic symptoms of neurofibromatosis were found. PMID:25610358

  14. MO-C-17A-11: A Segmentation and Point Matching Enhanced Deformable Image Registration Method for Dose Accumulation Between HDR CT Images

    SciTech Connect

    Zhen, X; Chen, H; Zhou, L; Yan, H; Jiang, S; Jia, X; Gu, X; Mell, L; Yashar, C; Cervino, L

    2014-06-15

    Purpose: To propose and validate a novel and accurate deformable image registration (DIR) scheme to facilitate dose accumulation among treatment fractions of high-dose-rate (HDR) gynecological brachytherapy. Method: We have developed a method to adapt DIR algorithms to gynecologic anatomies with HDR applicators by incorporating a segmentation step and a point-matching step into an existing DIR framework. In the segmentation step, a random walks algorithm is used to accurately segment and remove the applicator region (AR) in the HDR CT image. A semi-automatic seed point generation approach is developed to obtain the incremented foreground and background point sets to feed the random walks algorithm. In the subsequent point-matching step, a feature-based thin-plate spline-robust point matching (TPS-RPM) algorithm is employed for AR surface point matching. With the resulting mapping, a DVF characteristic of the deformation between the two AR surfaces is generated by B-spline approximation, which serves as the initial DVF for the following Demons DIR between the two AR-free HDR CT images. Finally, the calculated DVF via Demons combined with the initial one serves as the final DVF to map doses between HDR fractions. Results: The segmentation and registration accuracy were quantitatively assessed using nine clinical HDR cases from three gynecological cancer patients. The quantitative results as well as the visual inspection of the DIR indicate that our proposed method can suppress the interference of the applicator with the DIR algorithm, and accurately register HDR CT images as well as deform and add interfractional HDR doses. Conclusions: We have developed a novel and robust DIR scheme that can perform registration between HDR gynecological CT images and yield accurate registration results. This new DIR scheme has potential for accurate interfractional HDR dose accumulation. This work is supported in part by the National Natural Science Foundation of China (no 30970866 and no
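
    The applicator segmentation, TPS-RPM surface matching, and B-spline initialization described above are too involved to reproduce in a short fragment. The sketch below only illustrates the final Demons step between two applicator-free CT volumes, using SimpleITK as an assumed toolkit rather than the authors' implementation; the file names are placeholders and the initial DVF from the point-matching step is omitted.

      import SimpleITK as sitk

      # Hypothetical applicator-free HDR CT volumes from two fractions.
      fixed = sitk.ReadImage("fraction1_ar_free.nii.gz", sitk.sitkFloat32)
      moving = sitk.ReadImage("fraction2_ar_free.nii.gz", sitk.sitkFloat32)

      # Demons deformable registration; in the described scheme this step would be
      # initialized with a B-spline DVF derived from applicator-surface matching.
      demons = sitk.DemonsRegistrationFilter()
      demons.SetNumberOfIterations(100)
      demons.SetStandardDeviations(1.5)      # Gaussian smoothing of the update field
      displacement_field = demons.Execute(fixed, moving)

      # Use the resulting DVF to warp the moving image (or a fraction dose grid).
      transform = sitk.DisplacementFieldTransform(displacement_field)
      warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0, moving.GetPixelID())
      sitk.WriteImage(warped, "fraction2_mapped_to_fraction1.nii.gz")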

  15. Hybrid image segmentation using watersheds

    NASA Astrophysics Data System (ADS)

    Haris, Kostas; Efstratiadis, Serafim N.; Maglaveras, Nicos; Pappas, Costas

    1996-02-01

    A hybrid image segmentation algorithm is proposed which combines edge- and region-based techniques through the morphological algorithm of watersheds. The algorithm consists of the following steps: (1) edge-preserving statistical noise reduction, (2) gradient approximation, (3) detection of watersheds on gradient magnitude image, and (4) hierarchical region merging (HRM) in order to get semantically meaningful segmentations. The HRM process uses the region adjacency graph (RAG) representation of the image regions. At each step, the most similar pair of regions is determined (minimum cost RAG edge), the regions are merged and the RAG is updated. Traditionally, the above is implemented by storing all the RAG edges in a priority queue (heap). We propose a significantly faster algorithm which maintains an additional graph, the most similar neighbor graph, through which the priority queue size and processing time are drastically reduced. The final segmentation is an image partition which, through the RAG, provides information that can be used by knowledge-based high level processes, i.e. recognition. In addition, this region based representation provides one-pixel wide, closed, and accurately localized contours/surfaces. Due to the small number of free parameters, the algorithm can be quite effectively used in interactive image processing. Experimental results obtained with 2D MR images are presented.
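
    The pipeline shape described above can be sketched compactly with scikit-image: edge-preserving smoothing, gradient approximation, watershed on the gradient magnitude, and merging on the region adjacency graph. A simple threshold cut stands in here for the paper's most-similar-neighbor hierarchical merging; the test image and parameters are illustrative, and in older scikit-image releases the graph functions live under skimage.future.graph.

      import numpy as np
      from skimage import data, color, filters, restoration, segmentation, graph

      img = data.coffee()                          # stand-in for a 2D MR slice
      gray = color.rgb2gray(img)

      # (1) edge-preserving noise reduction and (2) gradient approximation
      denoised = restoration.denoise_bilateral(gray, sigma_spatial=2)
      gradient = filters.sobel(denoised)

      # (3) watershed of the gradient magnitude (deliberate over-segmentation)
      labels = segmentation.watershed(gradient, markers=400, compactness=0.001)

      # (4) region merging on the region adjacency graph: regions whose mean-color
      # difference falls below the threshold are merged (a simpler stand-in for
      # hierarchical merging driven by a most-similar-neighbor graph).
      rag = graph.rag_mean_color(img, labels)
      merged = graph.cut_threshold(labels, rag, thresh=30)
      print(labels.max(), "regions before merging,", merged.max() + 1, "after")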

  16. Heuristic segmentation of a nonstationary time series

    NASA Astrophysics Data System (ADS)

    Fukuda, Kensuke; Eugene Stanley, H.; Nunes Amaral, Luís A.

    2004-02-01

    Many phenomena, both natural and human influenced, give rise to signals whose statistical properties change under time translation, i.e., are nonstationary. For some practical purposes, a nonstationary time series can be seen as a concatenation of stationary segments. However, the exact segmentation of a nonstationary time series is a hard computational problem which cannot be solved exactly by existing methods. For this reason, heuristic methods have been proposed. Using one such method, it has been reported that for several cases of interest—e.g., heart beat data and Internet traffic fluctuations—the distribution of durations of these stationary segments decays with a power-law tail. A potential technical difficulty that has not been thoroughly investigated is that a nonstationary time series with a (scalefree) power-law distribution of stationary segments is harder to segment than other nonstationary time series because of the wider range of possible segment lengths. Here, we investigate the validity of a heuristic segmentation algorithm recently proposed by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)] by systematically analyzing surrogate time series with different statistical properties. We find that if a given nonstationary time series has stationary periods whose length is distributed as a power law, the algorithm can split the time series into a set of stationary segments with the correct statistical properties. We also find that the estimated power-law exponent of the distribution of stationary-segment lengths is affected by (i) the minimum segment length and (ii) the ratio R≡σɛ/σx¯, where σx¯ is the standard deviation of the mean values of the segments and σɛ is the standard deviation of the fluctuations within a segment. Furthermore, we determine that the performance of the algorithm is generally not affected by uncorrelated noise spikes or by weak long-range temporal correlations of the fluctuations within segments.
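
    A minimal re-implementation sketch of the heuristic discussed above: the series is split recursively at the point where a pooled-variance t-like statistic between the left and right means is largest, subject to a minimum segment length. The fixed threshold below stands in for the significance test of the original method, and all constants are illustrative.

      import numpy as np

      def split_point(x):
          """Return (index, t) for the split of x maximizing the pooled-variance
          t statistic between the means of the two resulting halves."""
          n = len(x)
          best_i, best_t = None, 0.0
          for i in range(2, n - 2):
              left, right = x[:i], x[i:]
              pooled = np.sqrt(((left.var(ddof=1) * (i - 1) + right.var(ddof=1) * (n - i - 1))
                                / (n - 2)) * (1.0 / i + 1.0 / (n - i)))
              t = abs(left.mean() - right.mean()) / pooled if pooled > 0 else 0.0
              if t > best_t:
                  best_i, best_t = i, t
          return best_i, best_t

      def segment(x, min_len=50, t_threshold=5.0, offset=0):
          """Recursively segment a 1-D series; returns the cut positions."""
          if len(x) < 2 * min_len:
              return []
          i, t = split_point(x)
          if i is None or t < t_threshold or i < min_len or len(x) - i < min_len:
              return []
          return (segment(x[:i], min_len, t_threshold, offset)
                  + [offset + i]
                  + segment(x[i:], min_len, t_threshold, offset + i))

      rng = np.random.default_rng(1)
      series = np.concatenate([rng.normal(m, 1.0, 400) for m in (0.0, 1.5, -0.5)])
      print(segment(series))      # expected cuts near 400 and 800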

  17. Anatomy of Teaching Anatomy: Do Prosected Cross Sections Improve Students Understanding of Spatial and Radiological Anatomy?

    PubMed Central

    Vithoosan, S.; Kokulan, S.; Dissanayake, M. M.; Dissanayake, Vajira; Jayasekara, Rohan

    2016-01-01

    Introduction. Cadaveric dissections and prosections have traditionally been part of undergraduate medical teaching. Materials and Methods. Hundred and fifty-nine first-year students in the Faculty of Medicine, University of Colombo, were invited to participate in the above study. Students were randomly allocated to two age and gender matched groups. Both groups were exposed to identical series of lectures regarding anatomy of the abdomen and conventional cadaveric prosections of the abdomen. The test group (n = 77, 48.4%) was also exposed to cadaveric cross-sectional slices of the abdomen to which the control group (n = 82, 51.6%) was blinded. At the end of the teaching session both groups were assessed by using their performance in a timed multiple choice question paper as well as ability to identify structures in abdominal CT films. Results. Scores for spatial and radiological anatomy were significantly higher among the test group when compared with the control group (P < 0.05, CI 95%). Majority of the students in both control and test groups agreed that cadaveric cross section may be useful for them to understand spatial and radiological anatomy. Conclusion. Introduction of cadaveric cross-sectional prosections may help students to understand spatial and radiological anatomy better. PMID:27579181

  18. Anatomy of a Bird

    NASA Astrophysics Data System (ADS)

    2007-12-01

    Using ESO's Very Large Telescope, an international team of astronomers [1] has discovered a stunning rare case of a triple merger of galaxies. This system, which astronomers have dubbed 'The Bird' - albeit it also bears resemblance with a cosmic Tinker Bell - is composed of two massive spiral galaxies and a third irregular galaxy. ESO PR Photo 55a/07 ESO PR Photo 55a/07 The Tinker Bell Triplet The galaxy ESO 593-IG 008, or IRAS 19115-2124, was previously merely known as an interacting pair of galaxies at a distance of 650 million light-years. But surprises were revealed by observations made with the NACO instrument attached to ESO's VLT, which peered through the all-pervasive dust clouds, using adaptive optics to resolve the finest details [2]. Underneath the chaotic appearance of the optical Hubble images - retrieved from the Hubble Space Telescope archive - the NACO images show two unmistakable galaxies, one a barred spiral while the other is more irregular. The surprise lay in the clear identification of a third, clearly separate component, an irregular, yet fairly massive galaxy that seems to be forming stars at a frantic rate. "Examples of mergers of three galaxies of roughly similar sizes are rare," says Petri Väisänen, lead author of the paper reporting the results. "Only the near-infrared VLT observations made it possible to identify the triple merger nature of the system in this case." Because of the resemblance of the system to a bird, the object was dubbed as such, with the 'head' being the third component, and the 'heart' and 'body' making the two major galaxy nuclei in-between of tidal tails, the 'wings'. The latter extend more than 100,000 light-years, or the size of our own Milky Way. ESO PR Photo 55b/07 ESO PR Photo 55b/07 Anatomy of a Bird Subsequent optical spectroscopy with the new Southern African Large Telescope, and archive mid-infrared data from the NASA Spitzer space observatory, confirmed the separate nature of the 'head', but also added

  19. Monte Carlo simulated coronary angiograms of realistic anatomy and pathology models

    NASA Astrophysics Data System (ADS)

    Kyprianou, Iacovos S.; Badal, Andreu; Badano, Aldo; Banh, Diemphuc; Freed, Melanie; Myers, Kyle J.; Thompson, Laura

    2007-03-01

    We have constructed a fourth generation anthropomorphic phantom which, in addition to the realistic description of the human anatomy, includes a coronary artery disease model. A watertight version of the NURBS-based Cardiac-Torso (NCAT) phantom was generated by converting the individual NURBS surfaces of each organ into closed, manifold and non-self-intersecting tessellated surfaces. The resulting 330 surfaces of the phantom organs and tissues are now comprised of ~5×10^6 triangles whose size depends on the individual organ surface normals. A database of the elemental composition of each organ was generated, and material properties such as density and scattering cross-sections were defined using PENELOPE. A 300 μm resolution model of a heart with 55 coronary vessel segments was constructed by fitting smooth triangular meshes to a high resolution cardiac CT scan we have segmented, and was consequently registered inside the torso model. A coronary artery disease model that uses hemodynamic properties such as blood viscosity and resistivity was used to randomly place plaque within the artery tree. To generate x-ray images of the aforementioned phantom, our group has developed an efficient Monte Carlo radiation transport code based on the subroutine package PENELOPE, which employs an octree spatial data-structure that stores and traverses the phantom triangles. X-ray angiography images were generated under realistic imaging conditions (90 kVp, 10° W-anode spectra with 3 mm Al filtration, ~5×10^11 x-ray source photons, and 10% per volume iodine contrast in the coronaries). The images will be used in an optimization algorithm to select the optimal technique parameters for a variety of imaging tasks.

  20. Salient Segmentation of Medical Time Series Signals

    PubMed Central

    Woodbridge, Jonathan; Lan, Mars; Sarrafzadeh, Majid; Bui, Alex

    2016-01-01

    Searching and mining medical time series databases is extremely challenging due to large, high entropy, and multidimensional datasets. Traditional time series databases are populated using segments extracted by a sliding window. The resulting database index contains an abundance of redundant time series segments with little to no alignment. This paper presents the idea of “salient segmentation”. Salient segmentation is a probabilistic segmentation technique for populating medical time series databases. Segments with the lowest probabilities are considered salient and are inserted into the index. The resulting index has little redundancy and is composed of aligned segments. This approach reduces index sizes by more than 98% over conventional sliding window techniques. Furthermore, salient segmentation can reduce redundancy in motif discovery algorithms by more than 85%, yielding a more succinct representation of a time series signal.
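
    A toy sketch of the idea summarized above: sliding-window segments are extracted, a few per-segment features are modeled with a multivariate Gaussian, and only the least probable (most "salient") segments are kept for the index. The feature choice, window sizes, and Gaussian model are assumptions for illustration rather than the paper's formulation.

      import numpy as np
      from scipy import stats

      def salient_segments(signal, win=64, step=8, keep_fraction=0.02):
          """Return start indices of the lowest-probability sliding-window segments,
          scored under a multivariate Gaussian over simple per-segment features."""
          starts = np.arange(0, len(signal) - win + 1, step)
          segs = np.stack([signal[s:s + win] for s in starts])
          feats = np.column_stack([segs.mean(axis=1),
                                   segs.std(axis=1),
                                   segs[:, -1] - segs[:, 0]])     # mean, spread, drift
          model = stats.multivariate_normal(feats.mean(axis=0),
                                            np.cov(feats, rowvar=False),
                                            allow_singular=True)
          n_keep = max(1, int(keep_fraction * len(starts)))
          return starts[np.argsort(model.logpdf(feats))[:n_keep]]

      rng = np.random.default_rng(0)
      signal = np.sin(np.linspace(0, 200 * np.pi, 20000)) + 0.05 * rng.normal(size=20000)
      signal[12000:12100] += 3.0            # injected anomaly to be flagged as salient
      print(salient_segments(signal))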

  1. Adaptive image segmentation by quantization

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Yun, David Y.

    1992-12-01

    Segmentation of images into texturally homogeneous regions is a fundamental problem in an image understanding system. Most region-oriented segmentation approaches suffer from the problem of selecting different thresholds for different images. In this paper an adaptive image segmentation method based on vector quantization is presented. It automatically segments images without preset thresholds. The approach contains a feature extraction module and a two-layer hierarchical clustering module, with a vector quantizer (VQ) implemented by a competitive learning neural network in the first layer. A near-optimal competitive learning algorithm (NOLA) is employed to train the vector quantizer. NOLA combines the advantages of both the Kohonen self-organizing feature map (KSFM) and the K-means clustering algorithm. After the VQ is trained, the weights of the network and the number of input vectors clustered by each neuron form a 3-D topological feature map with separable hills aggregated by similar vectors. This overcomes the inability of most other clustering algorithms to visualize the geometric properties of data in a high-dimensional space. The second clustering algorithm operates on the feature map instead of the input set itself. Since the number of units in the feature map is much smaller than the number of feature vectors in the feature set, it is easy to check all peaks and find the 'correct' number of clusters, also a key problem in current clustering techniques. In the experiments, we compare our algorithm with the K-means clustering method on a variety of images. The results show that our algorithm achieves better performance.
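
    To make the first clustering layer concrete, here is a bare-bones winner-take-all competitive-learning vector quantizer trained on synthetic per-pixel feature vectors; it is a plain competitive learner, not the NOLA variant, and the second-layer peak search on the feature map is not shown. All data and parameters are illustrative.

      import numpy as np

      def train_vq(features, n_codes=16, epochs=5, lr0=0.2, seed=0):
          """Winner-take-all competitive learning: nudge the closest codeword toward
          each presented feature vector, with a decaying learning rate."""
          rng = np.random.default_rng(seed)
          codes = features[rng.choice(len(features), n_codes, replace=False)].astype(float)
          counts = np.zeros(n_codes, int)                         # vectors won per neuron
          for epoch in range(epochs):
              lr = lr0 * (1.0 - epoch / epochs)
              for x in features[rng.permutation(len(features))]:
                  k = np.argmin(((codes - x) ** 2).sum(axis=1))   # winning neuron
                  codes[k] += lr * (x - codes[k])
                  counts[k] += 1
          return codes, counts

      # Hypothetical per-pixel features (e.g. intensity plus two texture measures).
      rng = np.random.default_rng(1)
      features = np.vstack([rng.normal(c, 0.1, (2000, 3)) for c in (0.2, 0.5, 0.8)])
      codes, counts = train_vq(features)
      labels = np.argmin(((features[:, None, :] - codes[None]) ** 2).sum(-1), axis=1)
      print(counts)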

  2. Anatomy adventure: a board game for enhancing understanding of anatomy.

    PubMed

    Anyanwu, Emeka G

    2014-01-01

    Certain negative factors such as fear, loss of concentration and interest in the course, lack of confidence, and undue stress have been associated with the study of anatomy. These are factors most often provoked by the unusually large curriculum, the nature of the course, and the psychosocial impact of dissection. As a palliative measure, Anatomy Adventure, a board game on anatomy, was designed to reduce some of these pressures, emphasize student-centered and collaborative learning styles, and add fun to the process of learning while promoting understanding and retention of the subject. To assess these objectives, 95 out of over 150 medical and dental students who expressed willingness to be part of the study were recruited and divided into a Game group and a Non-game group. A pretest written examination was given to both groups, participants in the Game group were allowed to play the game for ten days, after which a post-test examination was also given. A 20-item questionnaire rated on a three-point scale to assess students' perception of the game was given to the Game group. The post-test scores of the Game group were significantly higher (P < 0.05) than those of the Non-game counterparts. Also, the post-test score of the game-based group was significantly better (P < 0.05) than their pretest score. In their feedback, the students noted in very high proportions that the game was interesting, highly informative, encouraged teamwork, and improved their attitude toward and perception of gross anatomy. PMID:23878076

  3. AUTOMATIC SEGMENTATION OF PELVIS FOR BRACHYTHERAPY OF PROSTATE.

    PubMed

    Kardell, M; Magnusson, M; Sandborg, M; Alm Carlsson, G; Jeuthe, J; Malusek, A

    2016-06-01

    Advanced model-based iterative reconstruction algorithms in quantitative computed tomography (CT) perform automatic segmentation of tissues to estimate material properties of the imaged object. Compared with conventional methods, these algorithms may improve quality of reconstructed images and accuracy of radiation treatment planning. Automatic segmentation of tissues is, however, a difficult task. The aim of this work was to develop and evaluate an algorithm that automatically segments tissues in CT images of the male pelvis. The newly developed algorithm (MK2014) combines histogram matching, thresholding, region growing, deformable model and atlas-based registration techniques for the segmentation of bones, adipose tissue, prostate and muscles in CT images. Visual inspection of segmented images showed that the algorithm performed well for the five analysed images. The tissues were identified and outlined with accuracy sufficient for the dual-energy iterative reconstruction algorithm whose aim is to improve the accuracy of radiation treatment planning in brachytherapy of the prostate. PMID:26567322
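
    Of the techniques combined in the algorithm described above, the region-growing step is the easiest to sketch in isolation: a breadth-first grower that accepts neighboring pixels whose intensity stays close to the running region mean. The implementation below is generic, with a toy slice and tolerance values that are illustrative rather than taken from MK2014.

      import numpy as np
      from collections import deque

      def region_grow(image, seed, tol=30.0):
          """Grow a region from seed (row, col), accepting 4-connected neighbours whose
          intensity differs from the current region mean by less than tol."""
          mask = np.zeros(image.shape, bool)
          mask[seed] = True
          total, count = float(image[seed]), 1
          queue = deque([seed])
          while queue:
              r, c = queue.popleft()
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1] and not mask[nr, nc]:
                      if abs(float(image[nr, nc]) - total / count) < tol:
                          mask[nr, nc] = True
                          total += float(image[nr, nc])
                          count += 1
                          queue.append((nr, nc))
          return mask

      # Toy CT-like slice: a bright ellipse ("bone") on a darker soft-tissue background.
      yy, xx = np.mgrid[:128, :128]
      slice_ = np.where(((yy - 64) / 30.0) ** 2 + ((xx - 64) / 20.0) ** 2 < 1, 300.0, 40.0)
      print(region_grow(slice_, seed=(64, 64), tol=100.0).sum())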

  4. Multiscale Segmentation of Polarimetric SAR Image Based on Srm Superpixels

    NASA Astrophysics Data System (ADS)

    Lang, F.; Yang, J.; Wu, L.; Li, D.

    2016-06-01

    Multi-scale segmentation of remote sensing images is more systematic and more convenient for object-oriented image analysis than single-scale segmentation. However, the existing pixel-based polarimetric SAR (PolSAR) image multi-scale segmentation algorithms are usually inefficient and impractical. In this paper, we propose a superpixel-based binary partition tree (BPT) segmentation algorithm that combines the generalized statistical region merging (GSRM) algorithm and the BPT algorithm. First, superpixels are obtained by setting a maximum region number threshold in GSRM. Then, the region merging process of the BPT algorithm is implemented based on superpixels rather than pixels. The proposed algorithm inherits the advantages of both GSRM and BPT. The operation efficiency is obviously improved compared to pixel-based BPT segmentation. Experiments using the L-band ESAR image over the Oberpfaffenhofen test site proved the effectiveness of the proposed method.

  5. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.

    1999-08-10

    A process for improving packaging efficiency uses three dimensional, computer simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D and D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer simulated, facility model of the contaminated items are created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.

  6. On the evaluation of segmentation editing tools

    PubMed Central

    Heckel, Frank; Moltz, Jan H.; Meine, Hans; Geisler, Benjamin; Kießling, Andreas; D’Anastasi, Melvin; dos Santos, Daniel Pinto; Theruvath, Ashok Joseph; Hahn, Horst K.

    2014-01-01

    Abstract. Efficient segmentation editing tools are important components in the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user’s subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, particularly, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings. PMID:26158063

  7. [Surgical anatomy of the nose].

    PubMed

    Nguyen, P S; Bardot, J; Duron, J B; Jallut, Y; Aiach, G

    2014-12-01

    Thorough knowledge of the anatomy of the nose is an essential prerequisite for preoperative analysis and the understanding of surgical techniques. Like a tent supported by its frame, the nose is an osteo-chondral structure covered by a peri-chondroperiosteal envelope, muscle and cutaneous covering tissues. For didactic reasons, we have chosen to treat this chapter in the form of comments from eight key configurations that the surgeon should acquire before performing rhinoplasty. PMID:25159815

  8. Anatomy of the infant head

    SciTech Connect

    Bosma, J.F.

    1986-01-01

    This text is mainly an atlas of illustrations representing the dissection of the head and upper neck of the infant. It was prepared by the author over a 20-year period. The commentary compares the anatomy of the near-term infant with that of a younger fetus, child, and adult. As the author indicates, the dearth of anatomic information about postnatal anatomic changes represents a considerable handicap to those imaging infants. In part 1 of the book, anatomy is related to physiologic performance involving the pharynx, larynx, and mouth. Sequential topics involve the regional anatomy of the head (excluding the brain), the skeleton of the cranium, the nose, orbit, mouth, larynx, pharynx, and ear. To facilitate use of this text as a reference, the illustrations and text on individual organs are considered separately (i.e., the nose, the orbit, the eye, the mouth, the larynx, the pharynx, and the ear). Each part concerned with a separate organ includes materials from the regional illustrations contained in part 2 and from the skeleton, which is treated in part 3. Also included is a summary of the embryologic and fetal development of each organ.

  9. Medical discourse in pathological anatomy.

    PubMed

    Moskalenko, R; Tatsenko, N; Romanyuk, A; Perelomova, O; Moskalenko, Yu

    2012-05-01

    The paper is devoted to the peculiarities of medical discourse in pathological anatomy as coherent speech and as a linguistic correlate of medical practice, taking into account the analysis of its strategies and tactics. The purpose of the paper is to provide a multifaceted analysis of the speech strategies and tactics of pathological anatomy discourse and the ways they are implemented. The main strategies of medical discourse in pathological anatomy are an anticipating strategy, a diagnosing strategy and an explaining one. The supporting strategies are pragmatic, conversational and rhetorical. The pragmatic strategy is implemented through contact-establishing tactics, the conversational one with the help of control tactics, and the rhetorical one with the help of attention-correction tactics. The above-mentioned tactics and strategies are used in distinguishing the major, closely interrelated strategies: "the contact strategy" (to establish contact with a patient's relatives - phatic replicas of greeting and addressing) and "the strategy of explanation" (used in the practice of a pathologist for a detailed explanation of the reasons for a patient's death). The ethical aspect of the speech conduct of a doctor-pathologist is analyzed. PMID:22870841

  10. Seismic volumetric flattening and segmentation

    NASA Astrophysics Data System (ADS)

    Lomask, Jesse

    Two novel algorithms provide seismic interpretation solutions that use the full dimensionality of the data. The first is volumetric flattening and the second is image segmentation for tracking salt boundaries. Volumetric flattening is an efficient full-volume automatic dense-picking method applied to seismic data. First local dips (step-outs) are calculated over the entire seismic volume. The dips are then resolved into time shifts (or depth shifts) in a least-squares sense. To handle faults (discontinuous reflections), I apply a weighted inversion scheme. Additional information is incorporated in this flattening algorithm as geological constraints. The method is tested successfully on both synthetic and field data sets of varying degrees of complexity including salt piercements, angular unconformities, and laterally limited faults. The second full-volume interpretation method uses normalized cuts image segmentation to track salt interfaces. I apply a modified version of the normalized cuts image segmentation (NCIS) method to partition seismic images along salt interfaces. The method is capable of tracking interfaces that are not continuous, where conventional horizon tracking algorithms may fail. This method partitions the seismic image into two groups. One group is inside the salt body and the other is outside. Where the two groups meet is the salt boundary. By imposing bounds and by distributing the algorithm on a parallel cluster, I significantly increase efficiency and robustness. This method is demonstrated to be effective on both 2D and 3D seismic data sets.
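
    The core flattening step, resolving local dips into shifts in a least-squares sense, can be sketched for a single 2D section as below. The difference-operator formulation, the soft pinning of the first trace, and the use of weights to down-weight faulted trace pairs are assumptions made for illustration, not the paper's exact formulation.

        import numpy as np

        def dips_to_shifts(dips, weights=None):
            """Resolve local dips (shift between adjacent traces) into absolute
            shifts per trace in a weighted least-squares sense; dips[i] is the
            observed tau[i+1] - tau[i], and tau[0] is softly pinned to zero."""
            m = len(dips)
            n = m + 1
            w = np.ones(m) if weights is None else np.asarray(weights, float)
            D = np.zeros((m, n))
            for i in range(m):
                D[i, i], D[i, i + 1] = -1.0, 1.0       # difference operator
            A = np.vstack([w[:, None] * D, np.eye(1, n)])
            b = np.concatenate([w * np.asarray(dips, float), [0.0]])
            tau, *_ = np.linalg.lstsq(A, b, rcond=None)
            return tau

        # A low weight on the last dip mimics down-weighting a faulted trace pair.
        print(dips_to_shifts([0.5, 0.4, -0.2], weights=[1.0, 1.0, 0.1]))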

  11. Segmenting images analytically in shape space

    NASA Astrophysics Data System (ADS)

    Rathi, Yogesh; Dambreville, Samuel; Niethammer, Marc; Malcolm, James; Levitt, James; Shenton, Martha E.; Tannenbaum, Allen

    2008-03-01

    This paper presents a novel analytic technique to perform shape-driven segmentation. In our approach, shapes are represented using binary maps, and linear PCA is utilized to provide shape priors for segmentation. Intensity based probability distributions are then employed to convert a given test volume into a binary map representation, and a novel energy functional is proposed whose minimum can be analytically computed to obtain the desired segmentation in the shape space. We compare the proposed method with the log-likelihood based energy to elucidate some key differences. Our algorithm is applied to the segmentation of brain caudate nucleus and hippocampus from MRI data, which is of interest in the study of schizophrenia and Alzheimer's disease. Our validation (we compute the Hausdorff distance and the DICE coefficient between the automatic segmentation and ground-truth) shows that the proposed algorithm is very fast, requires no initialization and outperforms the log-likelihood based energy.
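
    A minimal sketch of the shape-space machinery described above, assuming shapes are vectorized binary maps and the prior is obtained by linear PCA; the helper names and the 0.5 re-binarization threshold are illustrative assumptions.

        import numpy as np

        def fit_shape_space(binary_masks, n_modes=5):
            """Linear PCA over vectorized binary training masks (shape prior)."""
            X = np.stack([m.ravel().astype(float) for m in binary_masks])
            mean = X.mean(axis=0)
            _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
            return mean, vt[:n_modes]

        def project_to_shape_space(binary_map, mean, modes):
            """Project a (possibly noisy) binary map onto the learned shape space
            and reconstruct the closest shape, re-binarized at 0.5."""
            x = binary_map.ravel().astype(float) - mean
            coeffs = modes @ x
            recon = mean + modes.T @ coeffs
            return recon.reshape(binary_map.shape) > 0.5, coeffs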

  12. Automatic setae segmentation from Chaetoceros microscopic images.

    PubMed

    Zheng, Haiyong; Zhao, Hongmiao; Sun, Xue; Gao, Huihui; Ji, Guangrong

    2014-09-01

    A novel image processing model, the Grayscale Surface Direction Angle Model (GSDAM), is presented, and an algorithm based on GSDAM is developed to segment setae from Chaetoceros microscopic images. The proposed model combines the setae characteristics of the microscopic images with spatial analysis of the image grayscale surface to detect and segment the thin, long, directional setae from the low-contrast background and from noise that can render commonly used segmentation methods invalid. The experimental results show that our GSDAM-based algorithm segments setae more accurately and completely than the boundary-based and region-based methods: the Canny edge detector, iterative threshold selection, Otsu's thresholding, minimum error thresholding, K-means clustering, and marker-controlled watershed. PMID:24913015

  13. Natural Language Processing: Word Recognition without Segmentation.

    ERIC Educational Resources Information Center

    Saeed, Khalid; Dardzinska, Agnieszka

    2001-01-01

    Discussion of automatic recognition of hand and machine-written cursive text using the Arabic alphabet focuses on an algorithm for word recognition. Describes results of testing words for recognition without segmentation and considers the algorithms' use for words of different fonts and for processing whole sentences. (Author/LRW)

  14. Real-Time Automatic Artery Segmentation, Reconstruction and Registration for Ultrasound-Guided Regional Anaesthesia of the Femoral Nerve.

    PubMed

    Smistad, Erik; Lindseth, Frank

    2016-03-01

    The goal is to create an assistant for ultrasound-guided femoral nerve block. By segmenting and visualizing the important structures such as the femoral artery, we hope to improve the success of these procedures. This article is the first step towards this goal and presents novel real-time methods for identifying and reconstructing the femoral artery, and registering a model of the surrounding anatomy to the ultrasound images. The femoral artery is modelled as an ellipse. The artery is first detected by a novel algorithm which initializes the artery tracking. This algorithm is completely automatic and requires no user interaction. Artery tracking is achieved with a Kalman filter. The 3D artery is reconstructed in real time with a novel algorithm and a tracked ultrasound probe. A mesh model of the surrounding anatomy was created from a CT dataset. Registration of this model is achieved by landmark registration using the centerpoints from the artery tracking and the femoral artery centerline of the model. The artery detection method was able to automatically detect the femoral artery and initialize the tracking in all 48 ultrasound sequences. The tracking algorithm achieved an average Dice similarity coefficient of 0.91, an absolute distance of 0.33 mm, and a Hausdorff distance of 1.05 mm. The mean registration error was 2.7 mm, while the average maximum error was 12.4 mm. The average runtime was measured to be 38, 8, 46 and 0.2 milliseconds for the artery detection, tracking, reconstruction and registration methods, respectively. PMID:26513782
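
    The abstract models the artery as an ellipse and tracks it with a Kalman filter. The sketch below is a stripped-down stand-in that tracks only the ellipse centre with a constant-velocity model; the state layout and noise levels are illustrative assumptions, not the published tracker.

        import numpy as np

        class ConstantVelocityKalman:
            """Constant-velocity Kalman filter for a 2-D point (the artery
            centre); process and measurement noise levels are illustrative."""
            def __init__(self, q=1.0, r=4.0):
                self.x = np.zeros(4)                             # [px, py, vx, vy]
                self.P = np.eye(4) * 100.0
                self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0
                self.H = np.eye(2, 4)
                self.Q = np.eye(4) * q
                self.R = np.eye(2) * r

            def step(self, z):
                # Predict the next state, then correct with the detected centre z.
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                y = np.asarray(z, float) - self.H @ self.x
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ y
                self.P = (np.eye(4) - K @ self.H) @ self.P
                return self.x[:2]

        kf = ConstantVelocityKalman()
        for z in [(100.0, 80.0), (102.0, 81.0), (104.0, 83.0)]:
            print(kf.step(z))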

  15. Iterative Vessel Segmentation of Fundus Images.

    PubMed

    Roychowdhury, Sohini; Koozekanani, Dara D; Parhi, Keshab K

    2015-07-01

    This paper presents a novel unsupervised iterative blood vessel segmentation algorithm using fundus images. First, a vessel enhanced image is generated by tophat reconstruction of the negative green plane image. An initial estimate of the segmented vasculature is extracted by global thresholding the vessel enhanced image. Next, new vessel pixels are identified iteratively by adaptive thresholding of the residual image generated by masking out the existing segmented vessel estimate from the vessel enhanced image. The new vessel pixels are, then, region grown into the existing vessel, thereby resulting in an iterative enhancement of the segmented vessel structure. As the iterations progress, the number of false edge pixels identified as new vessel pixels increases compared to the number of actual vessel pixels. A key contribution of this paper is a novel stopping criterion that terminates the iterative process leading to higher vessel segmentation accuracy. This iterative algorithm is robust to the rate of new vessel pixel addition since it achieves 93.2-95.35% vessel segmentation accuracy with 0.9577-0.9638 area under ROC curve (AUC) on abnormal retinal images from the STARE dataset. The proposed algorithm is computationally efficient and consistent in vessel segmentation performance for retinal images with variations due to pathology, uneven illumination, pigmentation, and fields of view since it achieves a vessel segmentation accuracy of about 95% in an average time of 2.45, 3.95, and 8 s on images from three public datasets DRIVE, STARE, and CHASE_DB1, respectively. Additionally, the proposed algorithm has more than 90% segmentation accuracy for segmenting peripapillary blood vessels in the images from the DRIVE and CHASE_DB1 datasets. PMID:25700436
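
    A simplified sketch of the pipeline described above, assuming scikit-image is available: top-hat by reconstruction of the inverted green plane, a global Otsu threshold for the initial estimate, then iterative growth from adaptively thresholded residuals. The structuring-element radius, the adaptive block size, and the crude stopping rule are illustrative assumptions rather than the published parameters.

        import numpy as np
        from skimage.filters import threshold_otsu, threshold_local
        from skimage.morphology import disk, erosion, reconstruction, binary_dilation

        def iterative_vessel_segmentation(rgb, n_iter=5, radius=8):
            """Top-hat by reconstruction of the inverted green plane, global Otsu
            threshold for the initial vessel estimate, then iterative growth from
            adaptively thresholded residuals touching the current vessels."""
            green = rgb[..., 1].astype(float)
            inv = green.max() - green                       # vessels become bright
            opened = reconstruction(erosion(inv, disk(radius)), inv, method='dilation')
            enhanced = inv - opened                         # top-hat reconstruction
            vessels = enhanced > threshold_otsu(enhanced)   # initial global estimate
            for _ in range(n_iter):
                residual = np.where(vessels, 0.0, enhanced) # mask out current vessels
                new_pix = residual > threshold_local(residual, 35)
                grown = new_pix & binary_dilation(vessels, disk(1))
                if not grown.any():                         # crude stopping criterion
                    break
                vessels |= grown
            return vessels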

  16. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  17. Computer Aided Segmentation Analysis: New Software for College Admissions Marketing.

    ERIC Educational Resources Information Center

    Lay, Robert S.; Maguire, John J.

    1983-01-01

    Compares segmentation solutions obtained using a binary segmentation algorithm (THAID) and a new chi-square-based procedure (CHAID) that segments the prospective pool of college applicants using application and matriculation as criteria. Results showed a higher number of estimated qualified inquiries and more accurate estimates with CHAID. (JAC)

  18. Poster — Thur Eve — 70: Automatic lung bronchial and vessel bifurcations detection algorithm for deformable image registration assessment

    SciTech Connect

    Labine, Alexandre; Carrier, Jean-François; Bedwani, Stéphane; Chav, Ramnada; De Guise, Jacques

    2014-08-15

    Purpose: To investigate an automatic bronchial and vessel bifurcation detection algorithm for deformable image registration (DIR) assessment to improve lung cancer radiation treatment. Methods: 4DCT datasets were acquired and exported to the Varian treatment planning system (TPS) Eclipse™ for contouring. The lung TPS contour was used as the prior shape for a segmentation algorithm based on hierarchical surface deformation that identifies the deformed lung volumes of the 10 breathing phases. A Hounsfield unit (HU) threshold filter was applied within the segmented lung volumes to identify blood vessels and airways. Segmented blood vessels and airways were skeletonised using a hierarchical curve-skeleton algorithm based on a generalized potential field approach. A graph representation of the computed skeleton was generated to assign one of three labels to each node: termination node, continuation node, or branching node. Results: 320 ± 51 bifurcations were detected in the right lung of a patient for the 10 breathing phases. The bifurcations were visually analyzed. 92 ± 10 bifurcations were found in the upper half of the lung and 228 ± 45 bifurcations were found in the lower half of the lung. Discrepancies between the ten vessel trees were mainly ascribed to large deformations and to regions where the HU values vary. Conclusions: We established an automatic method for DIR assessment using the morphological information of the patient anatomy. This approach allows a description of the lung's internal structure movement, which is needed to validate the DIR deformation fields for accurate 4D cancer treatment planning.
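
    The graph-labelling step can be illustrated by counting skeleton neighbours: one neighbour marks a termination node, two a continuation node, and three or more a branching node (a candidate bifurcation). The 2D sketch below, with assumed helper names, conveys the idea; the paper works on 3D skeletons.

        import numpy as np
        from scipy.ndimage import convolve

        def label_skeleton_nodes(skeleton):
            """Assign 1 = termination, 2 = continuation, 3 = branching to each
            skeleton pixel by counting its 8-connected skeleton neighbours."""
            skel = skeleton.astype(np.uint8)
            kernel = np.ones((3, 3), np.uint8)
            kernel[1, 1] = 0
            neighbours = convolve(skel, kernel, mode='constant') * skel
            labels = np.zeros_like(neighbours)
            labels[neighbours == 1] = 1          # termination node
            labels[neighbours == 2] = 2          # continuation node
            labels[neighbours >= 3] = 3          # branching node (bifurcation)
            return labels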

  19. Automatic segmentation of phase-correlated CT scans through nonrigid image registration using geometrically regularized free-form deformation

    SciTech Connect

    Shekhar, Raj; Lei, Peng; Castro-Pareja, Carlos R.; Plishker, William L.; D'Souza, Warren D.

    2007-07-15

    Conventional radiotherapy is planned using free-breathing computed tomography (CT), ignoring the motion and deformation of the anatomy from respiration. New breath-hold-synchronized, gated, and four-dimensional (4D) CT acquisition strategies are enabling radiotherapy planning utilizing a set of CT scans belonging to different phases of the breathing cycle. Such 4D treatment planning relies on the availability of tumor and organ contours in all phases. The current practice of manual segmentation is impractical for 4D CT, because it is time consuming and tedious. A viable solution is registration-based segmentation, through which contours provided by an expert for a particular phase are propagated to all other phases while accounting for phase-to-phase motion and anatomical deformation. Deformable image registration is central to this task, and a free-form deformation-based nonrigid image registration algorithm will be presented. Compared with the original algorithm, this version uses novel, computationally simpler geometric constraints to preserve the topology of the dense control-point grid used to represent free-form deformation and prevent tissue fold-over. Using mean squared difference as an image similarity criterion, the inhale phase is registered to the exhale phase of lung CT scans of five patients and of characteristically low-contrast abdominal CT scans of four patients. In addition, using expert contours for the inhale phase, the corresponding contours were automatically generated for the exhale phase. The accuracy of the segmentation (and hence deformable image registration) was judged by comparing automatically segmented contours with expert contours traced directly in the exhale phase scan using three metrics: volume overlap index, root mean square distance, and Hausdorff distance. The accuracy of the segmentation (in terms of radial distance mismatch) was approximately 2 mm in the thorax and 3 mm in the abdomen, which compares favorably to the
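
    Two of the reported accuracy metrics can be computed from binary masks roughly as in the sketch below; for brevity the Hausdorff distance is evaluated over all foreground voxels rather than extracted surface points, which is a simplification of the usual surface-to-surface measurement.

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def volume_overlap_index(mask_a, mask_b):
            """Dice-style overlap between two binary masks."""
            inter = np.logical_and(mask_a, mask_b).sum()
            return 2.0 * inter / (mask_a.sum() + mask_b.sum())

        def hausdorff_distance(mask_a, mask_b, spacing=None):
            """Symmetric Hausdorff distance between the foreground voxels of two
            masks, scaled by the voxel spacing (e.g. mm)."""
            spacing = np.ones(mask_a.ndim) if spacing is None else np.asarray(spacing)
            pts_a = np.argwhere(mask_a) * spacing
            pts_b = np.argwhere(mask_b) * spacing
            return max(directed_hausdorff(pts_a, pts_b)[0],
                       directed_hausdorff(pts_b, pts_a)[0])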

  20. Algorithm for Constructing Contour Plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1984-01-01

    General computer algorithm developed for construction of contour plots. Algorithm accepts as input data values at set of points irregularly distributed over plane. Algorithm based on interpolation scheme: points in plane connected by straight-line segments to form set of triangles. Program written in FORTRAN IV.

  1. Semisupervised synthetic aperture radar image segmentation with multilayer superpixels

    NASA Astrophysics Data System (ADS)

    Wang, Can; Su, Weimin; Gu, Hong; Gong, Dachen

    2015-01-01

    Image segmentation plays a significant role in synthetic aperture radar (SAR) image processing. However, SAR image segmentation is challenging due to speckle. We propose a semisupervised bipartite graph method for segmentation of an SAR image. First, a multilayer over-segmentation of the SAR image, referred to as superpixels, is computed using existing segmentation algorithms. Second, an unbalanced bipartite graph is constructed in which the correlation between pixels is replaced by the texture similarity between superpixels, to reduce the dimension of the edge matrix. To further improve efficiency, we define a new texture-similarity measure that combines the Manhattan distance with the symmetric Kullback-Leibler divergence. Third, using the Moore-Penrose inverse matrix and semisupervised learning, we construct an across-affinity matrix. A quantitative evaluation using SAR images shows that the new algorithm produces significantly higher-quality segmentations than state-of-the-art segmentation algorithms.
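
    A sketch of the proposed texture-similarity measure between two superpixel histograms, combining the Manhattan distance with the symmetric Kullback-Leibler divergence; the equal weighting and the exponential mapping from distance to affinity are assumptions made here for illustration.

        import numpy as np

        def texture_affinity(hist_p, hist_q, alpha=0.5, eps=1e-12):
            """Affinity between two superpixel texture histograms combining the
            Manhattan distance with the symmetric Kullback-Leibler divergence;
            alpha and the exponential mapping are illustrative choices."""
            p = np.asarray(hist_p, float) + eps
            q = np.asarray(hist_q, float) + eps
            p, q = p / p.sum(), q / q.sum()
            manhattan = np.abs(p - q).sum()
            sym_kl = np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))
            distance = alpha * manhattan + (1.0 - alpha) * sym_kl
            return np.exp(-distance)   # larger affinity = more similar texture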

  2. Volumetric Semantic Segmentation using Pyramid Context Features

    PubMed Central

    Barron, Jonathan T.; Arbeláez, Pablo; Keränen, Soile V. E.; Biggin, Mark D.; Knowles, David W.; Malik, Jitendra

    2015-01-01

    We present an algorithm for the per-voxel semantic segmentation of a three-dimensional volume. At the core of our algorithm is a novel “pyramid context” feature, a descriptive representation designed such that exact per-voxel linear classification can be made extremely efficient. This feature not only allows for efficient semantic segmentation but enables other aspects of our algorithm, such as novel learned features and a stacked architecture that can reason about self-consistency. We demonstrate our technique on 3D fluorescence microscopy data of Drosophila embryos for which we are able to produce extremely accurate semantic segmentations in a matter of minutes, and for which other algorithms fail due to the size and high-dimensionality of the data, or due to the difficulty of the task. PMID:26029008

  3. Remediation Trends in an Undergraduate Anatomy Course and Assessment of an Anatomy Supplemental Study Skills Course

    ERIC Educational Resources Information Center

    Schutte, Audra Faye

    2013-01-01

    Anatomy A215: Basic Human Anatomy (Anat A215) is an undergraduate human anatomy course at Indiana University Bloomington (IUB) that serves as a requirement for many degree programs at IUB. The difficulty of the course, coupled with pressure to achieve grades for admittance into specific programs, has resulted in high remediation rates. In an…

  4. CDIS: Circle Density Based Iris Segmentation

    NASA Astrophysics Data System (ADS)

    Gupta, Anand; Kumari, Anita; Kundu, Boris; Agarwal, Isha

    Biometrics is an automated approach to measuring and analysing physical and behavioural characteristics for identity verification. The stability of the iris texture makes it a robust biometric tool for security and authentication purposes. Reliable segmentation of the iris is a necessary precondition, as an error at this stage will propagate into later stages, and it requires proper handling of non-ideal images containing noise such as eyelashes. Earlier work on iris segmentation exists, but we find it lacking in detecting the iris in low-contrast images and in removing specular reflections, eyelids, and eyelashes. This motivates us to improve on these aspects. Thus, we advocate a new approach, CDIS, for iris segmentation, along with new algorithms for the removal of eyelashes, eyelids, and specular reflections and for pupil segmentation. The results obtained are presented using GAR vs. FAR graphs and are compared with prior work on iris segmentation.

  5. Digital tomosynthesis: technique modifications and clinical applications for neurovascular anatomy

    SciTech Connect

    Maravilla, K.R.; Murry, R.C. Jr.; Diehl, J.; Suss, R.; Allen, L.; Chang, K.; Crawford, J.; McCoy, R.

    1984-09-01

    Digital tomosynthesis studies (DTS) using a linear tomographic motion can provide good quality clinical images when combined with subtraction angiotomography. By modifying their hardware system and the computer software algorithms, the authors were able to reconstruct tomosynthesis images using an isocentric rotation (IR) motion. Applying a combination of linear tomographic and IR techniques in clinical cases, they performed DTS studies in six patients, five with aneurysms and one with a hypervascular tumor. The results showed detailed definitions of the pathologic entities and the regional neurovascular anatomy. Based on this early experience, DTS would seem to be a useful technique for the preoperative surgical planning of vascular abnormalities.

  6. Polyp Segmentation in NBI Colonoscopy

    NASA Astrophysics Data System (ADS)

    Gross, Sebastian; Kennel, Manuel; Stehle, Thomas; Wulff, Jonas; Tischendorf, Jens; Trautwein, Christian; Aach, Til

    Endoscopic screening of the colon (colonoscopy) is performed to prevent cancer and to support therapy. During intervention colon polyps are located, inspected and, if need be, removed by the investigator. We propose a segmentation algorithm as a part of an automatic polyp classification system for colonoscopic Narrow-Band images. Our approach includes multi-scale filtering for noise reduction, suppression of small blood vessels, and enhancement of major edges. Results of the subsequent edge detection are compared to a set of elliptic templates and evaluated. We validated our algorithm on our polyp database with images acquired during routine colonoscopic examinations. The presented results show the reliable segmentation performance of our method and its robustness to image variations.

  7. Dynamic 3D scanning as a markerless method to calculate multi-segment foot kinematics during stance phase: methodology and first application.

    PubMed

    Van den Herrewegen, Inge; Cuppens, Kris; Broeckx, Mario; Barisch-Fritz, Bettina; Vander Sloten, Jos; Leardini, Alberto; Peeraer, Louis

    2014-08-22

    Multi-segmental foot kinematics have been analyzed by means of optical marker-sets or by means of inertial sensors, but never by markerless dynamic 3D scanning (D3DScanning). The use of D3DScans implies a radically different approach for the construction of the multi-segment foot model: the foot anatomy is identified via the surface shape instead of distinct landmark points. We propose a 4-segment foot model consisting of the shank (Sha), calcaneus (Cal), metatarsus (Met) and hallux (Hal). These segments are manually selected on a static scan. To track the segments in the dynamic scan, the segments of the static scan are matched on each frame of the dynamic scan using the iterative closest point (ICP) fitting algorithm. Joint rotations are calculated between Sha-Cal, Cal-Met, and Met-Hal. Due to the lower quality scans at heel strike and toe off, the first and last 10% of the stance phase is excluded. The application of the method to 5 healthy subjects, 6 trials each, shows a good repeatability (intra-subject standard deviations between 1° and 2.5°) for Sha-Cal and Cal-Met joints, and inferior results for the Met-Hal joint (>3°). The repeatability seems to be subject-dependent. For the validation, a qualitative comparison with joint kinematics from a corresponding established marker-based multi-segment foot model is made. This shows very consistent patterns of rotation. The ease of subject preparation and also the effective and easy to interpret visual output, make the present technique very attractive for functional analysis of the foot, enhancing usability in clinical practice. PMID:24998032
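
    The segment-tracking step relies on iterative closest point (ICP) fitting of a static-scan segment to each frame of the dynamic scan. A minimal point-to-point ICP with an SVD-based rigid fit is sketched below; the convergence checks and outlier handling used in practice are omitted.

        import numpy as np
        from scipy.spatial import cKDTree

        def rigid_fit(src, dst):
            """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
            mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
            H = (src - mu_s).T @ (dst - mu_d)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:             # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, mu_d - R @ mu_s

        def icp(segment_pts, frame_pts, n_iter=30):
            """Match a static-scan segment to one dynamic-scan frame; returns the
            accumulated rotation and translation of the segment."""
            R_total, t_total = np.eye(3), np.zeros(3)
            tree = cKDTree(frame_pts)
            moved = segment_pts.copy()
            for _ in range(n_iter):
                _, idx = tree.query(moved)       # closest frame point per vertex
                R, t = rigid_fit(moved, frame_pts[idx])
                moved = moved @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
            return R_total, t_total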

  8. Morphology and Functional Anatomy of the Recurrent Laryngeal Nerve with Extralaryngeal Terminal Bifurcation

    PubMed Central

    Dogan, Sami

    2016-01-01

    Anatomical variations of the recurrent laryngeal nerve (RLN), such as an extralaryngeal terminal bifurcation (ETB), threaten the safety of thyroid surgery. Besides the morphology of the nerve branches, intraoperative evaluation of their functional anatomy may be useful to preserve motor activity. We exposed 67 RLNs in 36 patients. The main trunk, bifurcation point, and terminal branches of bifid nerves were macroscopically determined and exposed during thyroid surgery. The functional anatomy of the nerve branches was evaluated by intraoperative nerve monitoring (IONM). Forty-six RLNs with an ETB were intraoperatively exposed. The bifurcation point was located along the prearterial, arterial, and postarterial segments in 11%, 39%, and 50% of bifid RLNs, respectively. Motor activity was determined in all anterior branches. The functional anatomy of terminal branches detected motor activity in 4 (8.7%) posterior branches of 46 bifid RLNs. The motor activity in posterior branches created a wave amplitude at 25–69% of that in the corresponding anterior branches. The functional anatomy of bifid RLNs demonstrated that anterior branches always contained motor fibres while posterior branches seldom contained motor fibres. The motor activity of the posterior branch was weaker than that of the anterior branch. IONM may help to differentiate between motor and sensory functions of nerve branches. The morphology and functional anatomy of all nerve branches must be preserved to ensure a safer surgery. PMID:27493803

  9. Manifold parametrization of the left ventricle for a statistical modelling of its complete anatomy

    NASA Astrophysics Data System (ADS)

    Gil, D.; Garcia-Barnes, J.; Hernández-Sabate, A.; Marti, E.

    2010-03-01

    Distortion of Left Ventricle (LV) external anatomy is related to some dysfunctions, such as hypertrophy. The architecture of myocardial fibers determines LV electromechanical activation patterns as well as mechanics. Thus, their joint modelling would allow the design of specific interventions (such as pacemaker implantation and LV remodelling) and therapies (such as resynchronization). On one hand, accurate modelling of external anatomy requires either a dense sampling or a continuous infinite dimensional approach, which requires non-Euclidean statistics. On the other hand, computation of fiber models requires statistics on Riemannian spaces. Most approaches compute separate statistical models for external anatomy and fiber architecture. In this work we propose a general mathematical framework based on differential geometry concepts for computing a statistical model including both external and fiber anatomy. Our framework provides a continuous approach to external anatomy supporting standard statistics. We also provide a straightforward formula for the computation of the Riemannian fiber statistics. We have applied our methodology to the computation of a complete anatomical atlas of canine hearts from diffusion tensor studies. The orientation of fibers over the average external geometry agrees with the segmental description of orientations reported in the literature.

  10. Segmentation of 830- and 1310-nm LASIK corneal optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Li, Yan; Shekhar, Raj; Huang, David

    2002-05-01

    Optical coherence tomography (OCT) provides a non-contact and non-invasive means to visualize the corneal anatomy at micron scale resolution. We obtained corneal images from an arc-scanning (converging) OCT system operating at a wavelength of 830nm and a fan-shaped-scanning high-speed OCT system with an operating wavelength of 1310nm. Different scan protocols (arc/fan) and data acquisition rates, as well as wavelength dependent bio-tissue backscatter contrast and optical absorption, make the images acquired using the two systems different. We developed image-processing algorithms to automatically detect the air-tear interface, epithelium-Bowman's layer interface, laser in-situ keratomileusis (LASIK) flap interface, and the cornea-aqueous interface in both kinds of images. The overall segmentation scheme for 830nm and 1310nm OCT images was similar, although different strategies were adopted for specific processing approaches. Ultrasound pachymetry measurements of the corneal thickness and Placido-ring based corneal topography measurements of the corneal curvature were made on the same day as the OCT examination. Anterior/posterior corneal surface curvature measurement with OCT was also investigated. Results showed that automated segmentation of OCT images could evaluate anatomic outcome of LASIK surgery.

  11. Gross anatomy of network security

    NASA Technical Reports Server (NTRS)

    Siu, Thomas J.

    2002-01-01

    Information security involves many branches of effort, including information assurance, host level security, physical security, and network security. Computer network security methods and implementations are given a top-down description to permit a medically focused audience to anchor this information to their daily practice. The depth of detail of network functionality and security measures, like that of the study of human anatomy, can be highly involved. Presented at the level of major gross anatomical systems, this paper will focus on network backbone implementation and perimeter defenses, then diagnostic tools, and finally the user practices (the human element). Physical security measures, though significant, have been defined as beyond the scope of this presentation.

  12. Iterative contextual CV model for liver segmentation

    NASA Astrophysics Data System (ADS)

    Ji, Hongwei; He, Jiangping; Yang, Xin

    2014-01-01

    In this paper, we propose a novel iterative active contour algorithm, the Iterative Contextual CV Model (ICCV), and apply it to automatic liver segmentation from 3D CT images. ICCV is a learning-based method and can be divided into two stages. At the first stage, i.e. the training stage, given a set of abdominal CT training images and the corresponding manual liver labels, our task is to construct a series of self-correcting classifiers by learning a mapping between automatic segmentations (in each round) and manual reference segmentations via context features. At the second stage, i.e. the segmentation stage, the basic CV model is first used to segment the image, and subsequently the Contextual CV Model (CCV), which combines the image information and the current shape model, is iteratively applied to improve the segmentation result. The current shape model is obtained by inputting the previous automatic segmentation result into the corresponding self-correcting classifier. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the iterative steps yield increasingly accurate segmentation results, with satisfactory results obtained after about six iterations. Also, our method is comparable to the state-of-the-art work on liver segmentation.
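
    The first pass of the segmentation stage applies the basic CV (Chan-Vese) model. Assuming scikit-image is available, a single-slice version can be sketched as below; slice-wise processing, the intensity normalization, and the parameter values are simplifications, and the contextual self-correcting refinement is not reproduced.

        import numpy as np
        from skimage.segmentation import chan_vese

        def basic_cv_estimate(ct_slice):
            """First-pass segmentation of one CT slice with the basic Chan-Vese
            model; intensities are normalized to [0, 1] and the parameters are
            illustrative rather than tuned for liver."""
            img = ct_slice.astype(float)
            img = (img - img.min()) / (img.max() - img.min() + 1e-9)
            return chan_vese(img, mu=0.1, lambda1=1.0, lambda2=1.0)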

  13. Automatic evaluation of uterine cervix segmentations

    NASA Astrophysics Data System (ADS)

    Lotenberg, Shelly; Gordon, Shiri; Long, Rodney; Antani, Sameer; Jeronimo, Jose; Greenspan, Hayit

    2007-03-01

    In this work we focus on the generation of reliable ground truth data for a large medical repository of digital cervicographic images (cervigrams) collected by the National Cancer Institute (NCI). This work is part of an ongoing effort conducted by NCI together with the National Library of Medicine (NLM) at the National Institutes of Health (NIH) to develop a web-based database of the digitized cervix images in order to study the evolution of lesions related to cervical cancer. As part of this effort, NCI has gathered twenty experts to manually segment a set of 933 cervigrams into regions of medical and anatomical interest. This process yields a set of images with multi-expert segmentations. The objectives of the current work are: 1) generate multi-expert ground truth and assess the difficulty of segmenting an image, 2) analyze observer variability in the multi-expert data, and 3) utilize the multi-expert ground truth to evaluate automatic segmentation algorithms. The work is based on STAPLE (Simultaneous Truth and Performance Level Estimation), which is a well-known method to generate ground truth segmentation maps from multiple experts' observations. We have analyzed both intra- and inter-expert variability within the segmentation data. We propose novel measures of "segmentation complexity" by which we can automatically identify cervigrams that were found difficult to segment by the experts, based on their inter-observer variability. Finally, the results are used to assess our own automated algorithm for cervix boundary detection.
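
    STAPLE itself estimates expert performance levels and a probabilistic ground truth; as a much simpler stand-in, the sketch below builds a majority-vote consensus and scores "segmentation complexity" as one minus the mean pairwise Dice between experts. Both the voting rule and the complexity definition are assumptions made here for illustration, not the measures proposed above.

        import numpy as np
        from itertools import combinations

        def dice(a, b):
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def consensus_and_complexity(expert_masks):
            """Majority-vote consensus mask plus a simple disagreement score:
            one minus the mean pairwise Dice between the experts."""
            stack = np.stack([np.asarray(m, dtype=bool) for m in expert_masks])
            consensus = stack.mean(axis=0) >= 0.5
            pairwise = [dice(a, b) for a, b in combinations(stack, 2)]
            complexity = 1.0 - float(np.mean(pairwise))
            return consensus, complexity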

  14. Anatomy Education Faces Challenges in Pakistan

    ERIC Educational Resources Information Center

    Memon, Ismail K.

    2009-01-01

    Anatomy education in Pakistan is facing many of the same challenges as in other parts of the world. Roughly, a decade ago, all medical and dental colleges in Pakistan emphasized anatomy as a core basic discipline within a traditional medical science curriculum. Now institutions are adopting problem based learning (PBL) teaching philosophies, and…

  15. Design Projects in Human Anatomy & Physiology

    ERIC Educational Resources Information Center

    Polizzotto, Kristin; Ortiz, Mary T.

    2008-01-01

    Very often, some type of writing assignment is required in college entry-level Human Anatomy and Physiology courses. This assignment can be anything from an essay to a research paper on the literature, focusing on a faculty-approved topic of interest to the student. As educators who teach Human Anatomy and Physiology at an urban community college,…

  16. Shark Attack! Sinking Your Teeth into Anatomy.

    ERIC Educational Resources Information Center

    House, Herbert

    2002-01-01

    Presents a real life shark attack story and studies arm reattachment surgery to teach human anatomy. Discusses how knowledge of anatomy can be put to use in the real world and how the arm functions. Includes teaching notes and suggestions for classroom management. (YDS)

  17. Frank Netter's Legacy: Interprofessional Anatomy Instruction

    ERIC Educational Resources Information Center

    Niekrash, Christine E.; Copes, Lynn E.; Gonzalez, Richard A.

    2015-01-01

    Several medical schools have recently described new innovations in interprofessional interactions in gross anatomy courses. The Frank H. Netter MD School of Medicine at Quinnipiac University in Hamden, CT has developed and implemented two contrasting interprofessional experiences in first-year medical student gross anatomy dissection laboratories:…

  18. Volume Segmentation and Ghost Particles

    NASA Astrophysics Data System (ADS)

    Ziskin, Isaac; Adrian, Ronald

    2011-11-01

    Volume Segmentation Tomographic PIV (VS-TPIV) is a type of tomographic PIV in which images of particles in a relatively thick volume are segmented into images on a set of much thinner volumes that may be approximated as planes, as in 2D planar PIV. The planes of images can be analysed by standard mono-PIV, and the volume of flow vectors can be recreated by assembling the planes of vectors. The interrogation process is similar to a Holographic PIV analysis, except that the planes of image data are extracted from two-dimensional camera images of the volume of particles instead of three-dimensional holographic images. Like the tomographic PIV method using the MART algorithm, Volume Segmentation requires at least two cameras and works best with three or four. Unlike the MART method, Volume Segmentation does not require reconstruction of individual particle images one pixel at a time and it does not require an iterative process, so it operates much faster. As in all tomographic reconstruction strategies, ambiguities known as ghost particles are produced in the segmentation process. The effect of these ghost particles on the PIV measurement is discussed. This research was supported by Contract 79419-001-09, Los Alamos National Laboratory.

  19. Neurovascular anatomy of the embryonic quail hindlimb.

    PubMed

    Bentley, Matthew T; Poole, Thomas J

    2009-10-01

    Blood vessel and nerve development in the vertebrate embryo possess certain similarities in pattern and molecular guidance cues. To study the specific influence of shared guidance molecules on nervous and vascular development, an understanding of the normal neurovascular anatomy must be in place. The present study documents the pattern of nervous and vascular development in the Japanese quail hindlimb using immunohistochemistry and fluorescently labeled intravital injection combined with confocal and epifluorescent microscopy. The developmental patterns of major nerves and blood vessels of embryonic hindlimbs between stages E2.75 (HH18) and E6.0 (HH29) are described. By E2.75, the dorsal aortae have begun to fuse into a single vessel at the level of the hindlimb, and have completely fused by E3 (HH20). The posterior cardinal vein is formed at the level of the hindlimb by E3, as is the main artery of the early hindlimb, the ischiadic artery, as an offshoot of the dorsal aorta. Our data suggest that eight spinal segments, versus seven as reported by others (Tanaka and Landmesser,1986a; Tyrrell et al.,1990), contribute to innervation of the quail hindlimb. Lumbosacral neurites reach the plexus region by E3.5 (HH21 & 22), pause for approximately 24 hr, and then enter the hindlimb along with the ischiadic and crural arteries through shared foramina in the pelvic anlage. The degree of anterior-posterior spatial congruency between major nerves and blood vessels of the quail hindlimb was found to be highest medial to the pelvic girdle precursor, versus in the hindlimb proper. PMID:19685501

  20. The 2008 Anatomy Ceremony: Essays

    PubMed Central

    Elansary, Mei; Goldberg, Ben; Qian, Ting; Rizzolo, Lawrence J.

    2009-01-01

    When asked to relate my experience of anatomy to the first-year medical and physician associate students at Yale before the start of their own first dissection, I found no better words to share than those of my classmates. Why speak with only one tongue, I said, when you can draw on 99 others? Anatomical dissection elicits what our course director, Lawrence Rizzolo, has called a “diversity of experience,” which, in turn, engenders a diversity of expressions. For Yale medical and physician associate students, this diversity is captured each year in a ceremony dedicated to those who donated their bodies for dissection. The service is an opportunity to offer thanks, but because only students and faculty are in attendance, it is also a place to share and address the complicated tensions that arise while examining, invading, and ultimately disassembling another’s body. It is our pleasure to present selected pieces from the ceremony to the Yale Journal of Biology and Medicine readership. — Peter Gayed, Co-editor-in-chief, Yale Journal of Biology and Medicine and Chair of the 2008 Anatomy Ceremony Planning Committee PMID:19325944

  1. Teaching anatomy: cadavers vs. computers?

    PubMed

    Biasutto, Susana Norma; Caussa, Lucas Ignacio; Criado del Río, Luis Esteban

    2006-03-01

    Our study aimed to determine whether cadaver dissections are still important in the Anatomy Course for medical students or whether computerized resources could replace them. We followed three groups: the first (698 students) proceeded through the Anatomy Course in a traditional way, that is, with enough cadaver material to observe all the regions and structures; the second group (330 students) used many technological resources but no cadaver dissections; and the third group (145 students) recently followed the course with the same program but with both practical resources. Theoretical contents were developed in the same way and by the same professor. The traditional teaching group obtained better results than the technologically supported group, as evaluated by the number of students who passed their exams. The third group's results were better than the others' with regard to passed exams and marks. Even though computerized tools have created a new area that gives students many elements to facilitate their approach to imaging structures, the possibility of direct contact with tissues and anatomical elements cannot yet be replaced. We demonstrate that the best option is the correct association of all these resources so that they complement one another. PMID:16551018

  2. Anatomy of the Dead Sea transform: Does it reflect continuous changes in plate motion?

    USGS Publications Warehouse

    ten Brink, U.S.; Rybakov, M.; Al-Zoubi, A. S.; Hassouneh, M.; Frieslander, U.; Batayneh, A.T.; Goldschmidt, V.; Daoud, M.N.; Rotstein, Y.; Hall, J.K.

    1999-01-01

    A new gravity map of the southern half of the Dead Sea transform offers the first regional view of the anatomy of this plate boundary. Interpreted together with auxiliary seismic and well data, the map reveals a string of subsurface basins of widely varying size, shape, and depth along the plate boundary and relatively short (25-55 km) and discontinuous fault segments. We argue that this structure is a result of continuous small changes in relative plate motion. However, several segments must have ruptured simultaneously to produce the inferred maximum magnitude of historical earthquakes.

  3. The algorithmic anatomy of model-based evaluation

    PubMed Central

    Daw, Nathaniel D.; Dayan, Peter

    2014-01-01

    Despite many debates in the first half of the twentieth century, it is now largely a truism that humans and other animals build models of their environments and use them for prediction and control. However, model-based (MB) reasoning presents severe computational challenges. Alternative, computationally simpler, model-free (MF) schemes have been suggested in the reinforcement learning literature, and have afforded influential accounts of behavioural and neural data. Here, we study the realization of MB calculations, and the ways that this might be woven together with MF values and evaluation methods. There are as yet mostly only hints in the literature as to the resulting tapestry, so we offer more preview than review. PMID:25267820

  4. Joint model of motion and anatomy for PET image reconstruction

    SciTech Connect

    Qiao Feng; Pan Tinsu; Clark, John W. Jr.; Mawlawi, Osama

    2007-12-15

    Anatomy-based positron emission tomography (PET) image enhancement techniques have been shown to have the potential for improving PET image quality. However, these techniques assume an accurate alignment between the anatomical and the functional images, which is not always valid when imaging the chest due to respiratory motion. In this article, we present a joint model of both motion and anatomical information by integrating a motion-incorporated PET imaging system model with an anatomy-based maximum a posteriori image reconstruction algorithm. The mismatched anatomical information due to motion can thus be effectively utilized through this joint model. A computer simulation and a phantom study were conducted to assess the efficacy of the joint model, whereby motion and anatomical information were either modeled separately or combined. The reconstructed images in each case were compared to corresponding reference images obtained using a quadratic image prior based maximum a posteriori reconstruction algorithm for quantitative accuracy. Results of these studies indicated that while modeling anatomical information or motion alone improved the PET image quantitation accuracy, a larger improvement in accuracy was achieved when using the joint model. In the computer simulation study and using similar image noise levels, the improvement in quantitation accuracy compared to the reference images was 5.3% and 19.8% when using anatomical or motion information alone, respectively, and 35.5% when using the joint model. In the phantom study, these results were 5.6%, 5.8%, and 19.8%, respectively. These results suggest that motion compensation is important in order to effectively utilize anatomical information in chest imaging using PET. The joint motion-anatomy model presented in this paper provides a promising solution to this problem.

  5. Colony image acquisition and segmentation

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2007-12-01

    Counting of colonies and plaques has a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have worked on such systems. Our investigation found that some existing systems have problems, mainly in image acquisition and image segmentation. In order to acquire colony images of good quality, an illumination box was constructed: it includes front lighting and back lighting, which can be selected by the user based on the properties of the colony dishes. With the illumination box, lighting is uniform and the colony dish can be placed in the same position every time, which makes image processing easier. The developed colony image segmentation algorithm consists of three sub-algorithms: (1) image classification; (2) image processing; and (3) colony delineation. The colony delineation algorithm mainly contains procedures based on grey-level similarity, boundary tracing, shape information, and colony exclusion. In addition, a number of algorithms were developed for colony analysis. The system has been tested with satisfactory results.

  6. Efficient segmentation of skin epidermis in whole slide histopathological images.

    PubMed

    Xu, Hongming; Mandal, Mrinal

    2015-08-01

    Segmentation of epidermis areas is an important step towards automatic analysis of skin histopathological images. This paper presents a robust technique for epidermis segmentation in whole slide skin histopathological images. The proposed technique first performs a coarse epidermis segmentation using global thresholding and shape analysis. The epidermis thickness is then estimated by a series of line segments perpendicular to the main axis of the initially segmented epidermis mask. If the segmented epidermis mask has a thickness greater than a predefined threshold, the segmentation is suspected to be inaccurate. A second pass of fine segmentation using the k-means algorithm is then carried out over the coarsely segmented result to enhance the performance. Experimental results on 64 different skin histopathological images show that the proposed technique provides a superior performance compared to the existing techniques. PMID:26737135
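
    The two-pass idea, coarse global thresholding followed by a k-means refinement when the mask looks implausibly thick, can be sketched as below. The assumption that the epidermis is darker than its surroundings, the area-based thickness proxy, and the threshold value are illustrative simplifications of the line-segment thickness measurement described above.

        import numpy as np
        from skimage.filters import threshold_otsu
        from scipy.cluster.vq import kmeans2

        def segment_epidermis(gray, thickness_limit_px=200):
            """Coarse Otsu threshold first; if the mask looks implausibly thick,
            refine by k-means (k=2) on intensities inside the coarse mask."""
            coarse = gray < threshold_otsu(gray)        # epidermis assumed darker
            rows = np.where(coarse.any(axis=1))[0]
            approx_thickness = coarse.sum() / max(len(rows), 1)
            if approx_thickness <= thickness_limit_px:
                return coarse
            vals = gray[coarse].astype(float).reshape(-1, 1)
            centroids, labels = kmeans2(vals, 2, minit='++')
            darker = np.argmin(centroids.ravel())
            refined = np.zeros_like(coarse)
            refined[coarse] = labels == darker
            return refined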

  7. Automated 3D vascular segmentation in CT hepatic venography

    NASA Astrophysics Data System (ADS)

    Fetita, Catalin; Lucidarme, Olivier; Preteux, Francoise

    2005-08-01

    In the framework of preoperative evaluation of the hepatic venous anatomy in living-donor liver transplantation or oncologic resections, this paper proposes an automated approach for the 3D segmentation of the liver vascular structure from 3D CT hepatic venography data. The developed segmentation approach takes into account the specificities of anatomical structures in terms of spatial location, connectivity and morphometric properties. It implements basic and advanced morphological operators (closing, geodesic dilation, gray-level reconstruction, sup-constrained connection cost) in mono- and multi-resolution filtering schemes in order to achieve an automated 3D reconstruction of the opacified hepatic vessels. A thorough investigation of the venous anatomy including morphometric parameter estimation is then possible via computer-vision 3D rendering, interaction and navigation capabilities.

  8. Web-accessible cervigram automatic segmentation tool

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.

    2010-03-01

    Uterine cervix image analysis is of great importance to the study of uterine cervix cancer, which is among the leading cancers affecting women worldwide. In this paper, we describe our proof-of-concept, Web-accessible system for automated segmentation of significant tissue regions in uterine cervix images, which also demonstrates our research efforts toward promoting collaboration between engineers and physicians for medical image analysis projects. Our design and implementation unifies the merits of two commonly used languages, MATLAB and Java. It circumvents the heavy workload of recoding the sophisticated segmentation algorithms originally developed in MATLAB into Java while allowing remote users who are not experienced programmers and algorithms developers to apply those processing methods to their own cervicographic images and evaluate the algorithms. Several other practical issues of the systems are also discussed, such as the compression of images and the format of the segmentation results.

  9. Simplified labeling process for medical image segmentation.

    PubMed

    Gao, Mingchen; Huang, Junzhou; Huang, Xiaolei; Zhang, Shaoting; Metaxas, Dimitris N

    2012-01-01

    Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Typically, supervised learning based segmentation methods require a large set of accurately labeled training data. However, the labeling process is tedious, time-consuming and sometimes not necessary. We propose a robust logistic regression algorithm to handle label outliers such that doctors do not need to waste time on precisely labeling images for the training set. To validate its effectiveness and efficiency, we conduct carefully designed experiments on cervigram image segmentation in the presence of label outliers. Experimental results show that the proposed robust logistic regression algorithms achieve superior performance compared to previous methods, which validates the benefits of the proposed algorithms. PMID:23286072

  10. Mass segmentation using a combined method for cancer detection

    PubMed Central

    2011-01-01

    Background Breast cancer is one of the leading causes of cancer death for women all over the world, and mammography is regarded as one of the main tools for early detection of breast cancer. To aid breast cancer detection, computer-aided technology has been introduced. In computer-aided cancer detection, the detection and segmentation of masses are very important. The shape of a mass can be used as one of the factors to determine whether the mass is malignant or benign. However, many of the current methods are semi-automatic. In this paper, we investigate a fully automatic segmentation method. Results In this paper, a new mass segmentation algorithm is proposed. In the proposed algorithm, a fully automatic marker-controlled watershed transform is proposed to segment the mass region roughly, and then a level set is used to refine the segmentation. For over-segmentation caused by watershed, we also investigated different noise reduction technologies. Images from DDSM were used in the experiments and the results show that the new algorithm can improve the accuracy of mass segmentation. Conclusions The new algorithm combines the advantages of both methods. The combination of watershed-based segmentation and the level set method can improve the efficiency of the segmentation. Besides, the introduction of noise reduction technologies can reduce over-segmentation. PMID:22784625
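
    The first stage, a fully automatic marker-controlled watershed with noise reduction to limit over-segmentation, might look like the sketch below (scikit-image assumed); the smoothing sigma and marker spacing are illustrative, and the level-set refinement stage is omitted.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import gaussian, threshold_otsu
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def watershed_mass_candidates(mammogram):
            """Smooth to reduce noise (and over-segmentation), threshold, derive
            internal markers from distance-map peaks, then flood."""
            smoothed = gaussian(mammogram.astype(float), sigma=2)
            fg = smoothed > threshold_otsu(smoothed)
            distance = ndi.distance_transform_edt(fg)
            peaks = peak_local_max(distance, min_distance=20, labels=fg.astype(int))
            markers = np.zeros(fg.shape, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            return watershed(-distance, markers, mask=fg)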

  11. A method for scale parameter selection and segments refinement for multi-resolution image segmentation

    NASA Astrophysics Data System (ADS)

    Li, Hui; Tang, Yunwei; Liu, Qingjie; Ding, Haifeng; Chen, Yu; Jing, Linhai

    2014-11-01

    Image segmentation is the basis of object-based information extraction from remote sensing imagery. Image segmentation based on multiple features, multiple scales, and spatial context is one current research focus. The scale parameter selected in the segmentation severely impacts the average size of segments obtained by multi-scale segmentation methods, such as the Fractal Network Evolution Approach (FNEA) employed in the eCognition software. It is important for the FNEA method to select an appropriate scale parameter that causes neither over- nor under-segmentation. A method for scale parameter selection and segment refinement is proposed in this paper by modifying a method proposed by Johnson. In a test on two images, the segmentation maps obtained using the proposed method contain less under-segmentation and over-segmentation than those generated by Johnson's method. This demonstrates that the proposed method is effective in scale parameter selection and segment refinement for multi-scale segmentation algorithms, such as the FNEA method.

  12. Simultaneous segmentation and statistical label fusion

    NASA Astrophysics Data System (ADS)

    Asman, Andrew J.; Landman, Bennett A.

    2012-02-01

    Labeling or segmentation of structures of interest in medical imaging plays an essential role in both clinical and scientific understanding. Two of the common techniques to obtain these labels are through either fully automated segmentation or through multi-atlas based segmentation and label fusion. Fully automated techniques often result in highly accurate segmentations but lack the robustness to be viable in many cases. On the other hand, label fusion techniques are often extremely robust, but lack the accuracy of automated algorithms for specific classes of problems. Herein, we propose to perform simultaneous automated segmentation and statistical label fusion through the reformulation of a generative model to include a linkage structure that explicitly estimates the complex global relationships between labels and intensities. These relationships are inferred from the atlas labels and intensities and applied to the target using a non-parametric approach. The novelty of this approach lies in the combination of previously exclusive techniques and attempts to combine the accuracy benefits of automated segmentation with the robustness of a multi-atlas based approach. The accuracy benefits of this simultaneous approach are assessed using a multi-label multi-atlas whole-brain segmentation experiment and the segmentation of the highly variable thyroid on computed tomography images. The results demonstrate that this technique has major benefits for certain types of problems and has the potential to provide a paradigm shift in which the lines between statistical label fusion and automated segmentation are dramatically blurred.

  13. A probabilistic level set formulation for interactive organ segmentation

    NASA Astrophysics Data System (ADS)

    Cremers, Daniel; Fluck, Oliver; Rousson, Mikael; Aharon, Shmuel

    2007-03-01

    Level set methods have become increasingly popular as a framework for image segmentation. Yet when used as a generic segmentation tool, they suffer from an important drawback: Current formulations do not allow much user interaction. Upon initialization, boundaries propagate to the final segmentation without the user being able to guide or correct the segmentation. In the present work, we address this limitation by proposing a probabilistic framework for image segmentation which integrates input intensity information and user interaction on equal footings. The resulting algorithm determines the most likely segmentation given the input image and the user input. In order to allow a user interaction in real-time during the segmentation, the algorithm is implemented on a graphics card and in a narrow band formulation.

  14. Towards Automatic Image Segmentation Using Optimised Region Growing Technique

    NASA Astrophysics Data System (ADS)

    Alazab, Mamoun; Islam, Mofakharul; Venkatraman, Sitalakshmi

    Image analysis is being adopted extensively in many applications such as digital forensics, medical treatment, and industrial inspection, primarily for diagnostic purposes. Hence, there is a growing interest among researchers in developing new segmentation techniques to aid the diagnosis process. Manual segmentation of images is labour intensive, extremely time consuming and prone to human error, and hence an automated real-time technique is warranted in such applications. There is no universally applicable automated segmentation technique that will work for all images, as image segmentation is quite complex and unique to the domain application. Hence, to fill the gap, this paper presents an efficient segmentation algorithm that can segment a digital image of interest into a more meaningful arrangement of regions and objects. Our algorithm combines a region-growing approach with optimised elimination of false boundaries to arrive at more meaningful segments automatically. We demonstrate this using dental X-ray images taken for real-life dental diagnosis.
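
    A minimal region-growing kernel of the kind the algorithm builds on is sketched below: breadth-first growth from a seed, admitting neighbours whose intensity stays within a tolerance of the running region mean. The tolerance and the 4-connectivity are assumptions, and the optimised elimination of false boundaries is not reproduced.

        import numpy as np
        from collections import deque

        def region_grow(image, seed, tol=10.0):
            """Breadth-first region growing: a 4-connected neighbour joins the
            region if its intensity lies within `tol` of the running region mean."""
            img = image.astype(float)
            region = np.zeros(img.shape, dtype=bool)
            region[seed] = True
            total, count = img[seed], 1
            queue = deque([seed])
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                            and not region[nr, nc]
                            and abs(img[nr, nc] - total / count) <= tol):
                        region[nr, nc] = True
                        total += img[nr, nc]
                        count += 1
                        queue.append((nr, nc))
            return region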

  15. Automatic image segmentation by dynamic region merging.

    PubMed

    Peng, Bo; Zhang, Lei; Zhang, David

    2011-12-01

    This paper addresses the automatic image segmentation problem in a region merging style. With an initially oversegmented image, in which many regions (or superpixels) with homogeneous color are detected, an image segmentation is performed by iteratively merging the regions according to a statistical test. There are two essential issues in a region-merging algorithm: order of merging and the stopping criterion. In the proposed algorithm, these two issues are solved by a novel predicate, which is defined by the sequential probability ratio test and the minimal cost criterion. Starting from an oversegmented image, neighboring regions are progressively merged if there is an evidence for merging according to this predicate. We show that the merging order follows the principle of dynamic programming. This formulates the image segmentation as an inference problem, where the final segmentation is established based on the observed image. We also prove that the produced segmentation satisfies certain global properties. In addition, a faster algorithm is developed to accelerate the region-merging process, which maintains a nearest neighbor graph in each iteration. Experiments on real natural images are conducted to demonstrate the performance of the proposed dynamic region-merging algorithm. PMID:21609885

  16. Improving image segmentation by learning region affinities

    SciTech Connect

    Prasad, Lakshman; Yang, Xingwei; Latecki, Longin J

    2010-11-03

    We utilize the context information of other regions in hierarchical image segmentation to learn new region affinities. It is well known that a single choice of quantization of an image space is highly unlikely to be a common optimal quantization level for all categories. Each level of quantization has its own benefits. Therefore, we utilize the hierarchical information among different quantizations as well as the spatial proximity of their regions. The proposed affinity learning takes into account higher order relations among image regions, both local and long-range, making it robust to instabilities and errors of the original, pairwise region affinities. Once the learnt affinities are obtained, we use a standard image segmentation algorithm to get the final segmentation. Moreover, the learnt affinities can be naturally utilized in interactive segmentation. Experimental results on the Berkeley Segmentation Dataset and the MSRC Object Recognition Dataset are comparable to, and in some respects better than, state-of-the-art methods.

  17. Frank Netter's legacy: Interprofessional anatomy instruction.

    PubMed

    Niekrash, Christine E; Copes, Lynn E; Gonzalez, Richard A

    2015-01-01

    Several medical schools have recently described new innovations in interprofessional interactions in gross anatomy courses. The Frank H. Netter MD School of Medicine at Quinnipiac University in Hamden, CT has developed and implemented two contrasting interprofessional experiences in first-year medical student gross anatomy dissection laboratories: long-term, informal visits by pathologists' assistant students who work with the medical students to identify potential donor pathologies, and a short-term, formal visit by fourth-year dental students who teach craniofacial anatomy during the oral cavity dissection laboratory. A survey of participants' attitudes was analyzed; the results suggest that the interprofessional experiences were mutually beneficial for all involved and indicate that implementing multiple, contrasting interprofessional interactions with different goals within a single course is feasible. Two multiple regression analyses were conducted to analyze the data. The first analysis examined attitudes of medical students towards the pathologists' assistant role in a health care team. The question addressing pathologists' assistant involvement in the anatomy laboratory was the most significant. The second analysis examined attitudes of medical students towards the importance of a good foundation in craniofacial anatomy for clinical practice. This perceived importance is influenced by the presence of dental students in the anatomy laboratory. In both instances, the peer interprofessional interactions in the anatomy laboratory resulted in an overall positive attitude of medical students towards pathologists' assistant and dental students. The consequences of these interactions led to better understanding, appreciation and respect for the different professionals that contribute to a health care team. PMID:26014811

  18. How useful is plastination in learning anatomy?

    PubMed

    Latorre, Rafael M; García-Sanz, Mari P; Moreno, Matilde; Hernández, Fuensanta; Gil, Francisco; López, Octavio; Ayala, Maria D; Ramírez, Gregorio; Vázquez, Jose M; Arencibia, Alberto; Henry, Robert W

    2007-01-01

    In recent years plastination has begun to revolutionize the way in which human and veterinary gross anatomy can be presented to students. The study reported here assessed the efficacy of plastinated organs as teaching resources in an innovative anatomy teaching/learning system. The main objective was to evaluate whether the use of plastinated organs improves the quality of teaching and learning of anatomy. For this purpose, we used an interdepartmental approach involving the departments of Veterinary Anatomy, Human Anatomy, Veterinary Surgery, and Education Development and Research Methods. The knowledge base of control and experimental student groups was examined before and after use of the fixed or plastinated resources, respectively, to gather information evaluating the effectiveness of these teaching resources. Significant differences (p < 0.001) between control and experimental groups of Human and Veterinary Anatomy were observed in the post-test results. The Veterinary Surgery students had the most positive opinion of the use of plastinated specimens. Using these data, we were able to quantitatively characterize the use of plastinated specimens as anatomy teaching resources. This analysis showed that all the plastinated resources available were heavily used and deemed useful by students. Although the properties of plastinated specimens accommodate student needs at various levels, traditional material should be used in conjunction with plastinated resources. PMID:17446645

  19. Algorithm and program for information processing with the filin apparatus

    NASA Technical Reports Server (NTRS)

    Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.

    1979-01-01

    The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is a lowest-level algorithm: following evaluation, the information, freed of uninformative segments, is subjected to further processing with higher-level algorithms. The language used is FORTRAN 4.

  20. Evolutions equations in computational anatomy.

    PubMed

    Younes, Laurent; Arrate, Felipe; Miller, Michael I

    2009-03-01

    One of the main purposes in computational anatomy is the measurement and statistical study of anatomical variations in organs, notably in the brain or the heart. Over the last decade, our group has progressively developed several approaches for this problem, all related to the Riemannian geometry of groups of diffeomorphisms and the shape spaces on which these groups act. Several important shape evolution equations that are now used routinely in applications have emerged over time. Our goal in this paper is to provide an overview of these equations, placing them in their theoretical context, and giving examples of applications in which they can be used. We introduce the required theoretical background before discussing several classes of equations of increasing complexity. These equations include energy minimizing evolutions deriving from Riemannian gradient descent, geodesics, parallel transport and Jacobi fields. PMID:19059343
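
    For orientation, the energy-minimizing formulation alluded to above is commonly written in the LDDMM (large deformation diffeomorphic metric mapping) form shown below. This is the standard inexact image-matching functional from the literature, given only as an example of the class of equations discussed, not as a reproduction of the paper's derivations; V, sigma, I_0 and I_1 denote the velocity-field Hilbert space, a trade-off parameter, and the template and target images.

        % Standard LDDMM inexact matching energy: a time-dependent velocity field
        % v_t generates a flow of diffeomorphisms \varphi_t; the energy balances
        % deformation cost against matching error.
        \[
          E(v) = \int_0^1 \lVert v_t \rVert_V^2 \, dt
               + \frac{1}{\sigma^2}\,\bigl\lVert I_0 \circ \varphi_1^{-1} - I_1 \bigr\rVert_{L^2}^2,
          \qquad
          \frac{d\varphi_t}{dt} = v_t(\varphi_t), \quad \varphi_0 = \mathrm{id}.
        \]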

  1. Art, antiquarianism and early anatomy.

    PubMed

    Guest, Clare E L

    2014-12-01

    Discussions of the early relationship between art and anatomy are shaped by Vasari's account of Florentine artists who dissected bodies in order to understand the causes of movement, and the end of movement in action. This account eclipses the role of the study of antiquities in Renaissance anatomical illustration. Beyond techniques of presentation, such as sectioning and analytic illustration, or a preoccupation with the mutilated fragment, antiquarianism offered a reflection on the variant and the role of temperament which could be adapted for anatomical purposes. With its play on ambiguities of life and death, idealisation and damage, antiquarianism also provided a way of negotiating the difficulties of content inherent in anatomical illustration. As such, it goes beyond exclusively historical interest to provoke reflection on the modes, possibilities and humane responsibilities of medical illustration. PMID:24696510

  2. Segmentation of knee injury swelling on infrared images

    NASA Astrophysics Data System (ADS)

    Puentes, John; Langet, Hélène; Herry, Christophe; Frize, Monique

    2011-03-01

    Interpretation of medical infrared images is complex due to thermal noise, absence of texture, and small temperature differences in pathological zones. An acute inflammatory response is a characteristic symptom of some knee injuries such as anterior cruciate ligament sprains, muscle or tendon strains, and meniscus tears. Whereas artificial coloring of the original grey-level images may allow visual assessment of the extent of inflammation in the area, automated segmentation of these regions remains a challenging problem. This paper presents a hybrid segmentation algorithm to evaluate the extent of inflammation after knee injury, in terms of temperature variations and surface shape. It is based on the intersection of rapid color segmentation and homogeneous region segmentation, to which a Laplacian of Gaussian filter is applied. While rapid color segmentation properly detects the observed core of the swollen area, homogeneous region segmentation identifies possible inflammation zones by combining homogeneous grey-level and hue area segmentation. The hybrid segmentation algorithm compares the potential inflammation regions partially detected by each method to identify overlapping areas. Noise filtering and edge segmentation are then applied to the common zones in order to segment the swelling surfaces of the injury. Experimental results on images of a patient with an anterior cruciate ligament sprain show the improved performance of the hybrid algorithm with respect to its separate components. The main contribution of this work is a meaningful automatic segmentation of abnormal skin temperature variations on infrared thermography images of knee injury swelling.
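
    The sketch below illustrates only the intersection idea on a single-channel thermal image: one crude mask for the hot core, one for homogeneous warm regions, their overlap, and a Laplacian-of-Gaussian response used to keep blob-like interiors. The thresholds and sigma values are assumptions; the paper's color-based detection and edge segmentation stages are not reproduced.

        import numpy as np
        from scipy import ndimage

        def hybrid_swelling_mask(thermal, hot_thresh=0.8, smooth_sigma=2.0, log_sigma=3.0):
            """Intersect a 'hot core' mask with a homogeneous warm-region mask,
            then keep blob-like interior pixels using a Laplacian-of-Gaussian filter."""
            t = thermal.astype(float)
            t = (t - t.min()) / (t.max() - t.min() + 1e-9)        # normalise to [0, 1]
            core = t > hot_thresh                                  # hottest pixels (core of swelling)
            smooth = ndimage.gaussian_filter(t, smooth_sigma)
            homogeneous = smooth > smooth.mean()                   # warm, locally homogeneous regions
            log = ndimage.gaussian_laplace(t, log_sigma)           # negative inside bright blobs
            return core & homogeneous & (log < 0)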

  3. Bayesian Fusion of Color and Texture Segmentations

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto

    2000-01-01

    In many applications one would like to use information from both color and texture features in order to segment an image. We propose a novel technique to combine "soft" segmentations computed for two or more features independently. Our algorithm merges models according to a mean entropy criterion, and allows the appropriate number of classes for the final grouping to be chosen. This technique also makes it possible to improve the quality of supervised classification based on one feature (e.g., color) by merging information from unsupervised segmentation based on another feature (e.g., texture).
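
    A minimal sketch of fusing two soft segmentations is shown below: per-pixel class posteriors from the two cues are multiplied (assuming conditional independence) and renormalized, and a mean-entropy value is computed as a crispness measure. The model-merging step that selects the number of classes in the paper is not reproduced; the array shapes (H x W x K) are an assumption.

        import numpy as np

        def fuse_soft_segmentations(p_color, p_texture):
            """Fuse two soft segmentations (H x W x K arrays of class posteriors),
            assuming the two cues are conditionally independent given the class."""
            fused = p_color * p_texture
            fused /= fused.sum(axis=-1, keepdims=True) + 1e-12
            return fused

        def mean_entropy(p):
            """Average per-pixel entropy of a soft segmentation (lower = crisper grouping)."""
            return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())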

  4. Segmentation Of Multifrequency, Multilook SAR Data

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J.; Kwok, Ronald; Chellappa, Rama

    1993-01-01

    Segmentation of multifrequency, multilook synthetic-aperture radar (SAR) image intensity data into regions, within each of which the backscattering characteristics of the target scene are considered homogeneous, is enhanced by the use of two statistical models. One represents the statistics of the multifrequency, multilook speckled intensities of SAR picture elements; the other represents the statistics of the labels applied to the regions into which the picture elements are grouped. Each region represents a different type of terrain, terrain cover, or other surface; e.g., forest, agricultural land, sea ice, or water. Segmentation of the image into regions of neighboring picture elements is accomplished by a method similar to that described in "Algorithms For Segmentation Of Complex-Amplitude SAR Data" (NPO-18524).

  5. High precision anatomy for MEG.

    PubMed

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-02-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within session and between session head movements. Systematic errors in matching to MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were < 1.5 mm between sessions. We corroborated these scalp based estimates by looking at the MEG data recorded over a 6-month period. We found that the between session sensor variability of the subject's evoked response was of the order of the within session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. PMID:23911673

  6. High precision anatomy for MEG☆

    PubMed Central

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-01-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within session and between session head movements. Systematic errors in matching to MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were < 1.5 mm between sessions. We corroborated these scalp based estimates by looking at the MEG data recorded over a 6 month period. We found that the between session sensor variability of the subject's evoked response was of the order of the within session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. PMID:23911673

  7. Active contour based segmentation of resected livers in CT images

    NASA Astrophysics Data System (ADS)

    Oelmann, Simon; Oyarzun Laura, Cristina; Drechsler, Klaus; Wesarg, Stefan

    2015-03-01

    The majority of state-of-the-art segmentation algorithms are able to give proper results in healthy organs but not in pathological ones. However, many clinical applications require an accurate segmentation of pathological organs. The determination of the target boundaries for radiotherapy or liver volumetry calculations are examples of this. Volumetry measurements are of special interest after tumor resection for follow-up of liver regrowth. The segmentation of resected livers presents additional challenges that are not addressed by state-of-the-art algorithms. This paper presents a snakes-based algorithm specially developed for the segmentation of resected livers. The algorithm is enhanced with a novel dynamic smoothing technique that allows the active contour to propagate with different speeds depending on the intensities visible in its neighborhood. The algorithm is evaluated on 6 clinical CT images as well as on 18 artificial datasets generated from additional clinical CT images.

  8. Image segmentation in wavelet transform space implemented on DSP

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Castillejos, Heydy; Peralta-Fabi, Ricardo

    2012-06-01

    A novel approach to the segmentation of images of different natures, employing feature extraction in wavelet transform (WT) space before the segmentation process, is presented. According to AUC analysis, the designed frameworks (W-FCM, W-CPSFCM and WK-Means) demonstrated better performance than other algorithms in the literature in numerous simulation experiments with synthetic and dermoscopic images. The novel W-CPSFCM algorithm estimates the number of clusters automatically, without the intervention of a specialist. The implementation of the proposed segmentation algorithms on the Texas Instruments DSP TMS320DM642 demonstrates that real-time processing is possible for images of different natures.

  9. Three-dimensional segmentation of the heart muscle using image statistics

    NASA Astrophysics Data System (ADS)

    Nillesen, Maartje M.; Lopata, Richard G. P.; Gerrits, Inge H.; Kapusta, Livia; Huisman, Henkjan H.; Thijssen, Johan M.; de Korte, Chris L.

    2006-03-01

    Segmentation of the heart muscle in 3D echocardiographic images provides a tool for visualization of cardiac anatomy and assessment of heart function, and serves as an important pre-processing step for cardiac strain imaging. By incorporating spatial and temporal information of 3D ultrasound image sequences (4D), a fully automated method using image statistics was developed to perform 3D segmentation of the heart muscle. 3D rf-data were acquired with a Philips SONOS 7500 live 3D ultrasound system and an X4 matrix array transducer (2-4 MHz). Left ventricular images of five healthy children were taken in transthoracic short/long-axis views. As a first step, the image statistics of blood and heart muscle were investigated. Next, based on these statistics, an adaptive mean squares filter was selected and applied to the images. The window size was related to speckle size (5x2 speckles). The degree of adaptive filtering was automatically steered by the local homogeneity of the tissue. As a result, discrimination of heart muscle and blood was optimized, while sharpness of edges was preserved. After this pre-processing stage, homomorphic filtering and automatic thresholding were performed to obtain the inner borders of the heart muscle. Finally, a deformable contour algorithm was used to yield a closed contour of the left ventricular cavity in each elevational plane. Each contour was optimized using the contours of the surrounding planes (spatial and temporal) as a limiting condition to ensure spatial and temporal continuity. Better segmentation of the ventricle was obtained using 4D information than using the information of each plane separately.
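
    As a small, self-contained illustration of the automatic thresholding stage mentioned above, the sketch below computes an Otsu threshold to separate two intensity populations (e.g. blood pool versus myocardium). Otsu's criterion is used here as a generic stand-in; the abstract does not state which thresholding rule was applied, and the adaptive mean squares filtering and deformable contour steps are omitted.

        import numpy as np

        def otsu_threshold(image, n_bins=256):
            """Return the intensity threshold maximizing between-class variance (Otsu)."""
            hist, edges = np.histogram(image.ravel(), bins=n_bins)
            prob = hist.astype(float) / hist.sum()
            centers = 0.5 * (edges[:-1] + edges[1:])
            best_t, best_var = centers[0], -1.0
            for k in range(1, n_bins):
                w0, w1 = prob[:k].sum(), prob[k:].sum()
                if w0 == 0 or w1 == 0:
                    continue
                mu0 = (prob[:k] * centers[:k]).sum() / w0
                mu1 = (prob[k:] * centers[k:]).sum() / w1
                between = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
                if between > best_var:
                    best_var, best_t = between, centers[k]
            return best_t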

  10. Segment alignment control system

    NASA Technical Reports Server (NTRS)

    Aubrun, JEAN-N.; Lorell, Ken R.

    1988-01-01

    The segmented primary mirror for the LDR will require a special segment alignment control system to precisely control the orientation of each of the segments so that the resulting composite reflector behaves like a monolith. The W.M. Keck Ten Meter Telescope will utilize a primary mirror made up of 36 actively controlled segments. Thus the primary mirror and its segment alignment control system are directly analogous to the LDR. The problems of controlling the segments in the face of disturbances and control/structures interaction, as analyzed for the TMT, are virtually identical to those for the LDR. The two systems are briefly compared.

  11. Bilayer segmentation of webcam videos using tree-based classifiers.

    PubMed

    Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan

    2011-01-01

    This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems. PMID:21088317
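
    The sketch below shows only the per-pixel fusion idea: foreground log-likelihoods from several cues (motion, color, contrast) are combined with weights and squashed to a probability. The cue weights are assumptions, and the conditional random field prior and binary min-cut inference described in the abstract are deliberately omitted, since they require a graph-cut solver.

        import numpy as np

        def fuse_cues(loglik_motion, loglik_color, loglik_contrast, weights=(1.0, 1.0, 0.5)):
            """Combine per-pixel foreground log-likelihood maps from several cues
            into a single foreground probability map (no spatial CRF term)."""
            w_m, w_c, w_k = weights
            score = w_m * loglik_motion + w_c * loglik_color + w_k * loglik_contrast
            return 1.0 / (1.0 + np.exp(-score))     # logistic squashing to [0, 1]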

  12. CPR Instruction in a Human Anatomy Class.

    ERIC Educational Resources Information Center

    Lutton, Lewis M.

    1978-01-01

    Describes how cardiopulmonary resuscitation (CPR) instruction can be included in a college anatomy and physiology course. Equipment and instructors are provided locally by the Red Cross or American Heart Association. (MA)

  13. Anatomy Ontology Matching Using Markov Logic Networks

    PubMed Central

    Li, Chunhua; Zhao, Pengpeng; Wu, Jian; Cui, Zhiming

    2016-01-01

    The anatomy of model species is described in ontologies, which are used to standardize the annotations of experimental data, such as gene expression patterns. To compare such data between species, we need to establish relationships between ontologies describing different species. Ontology matching is a class of solutions for finding semantic correspondences between entities of different ontologies. Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. We combine several different matching strategies through first-order logic formulas according to the structure of anatomy ontologies. Experiments on the adult mouse anatomy and the human anatomy have demonstrated the effectiveness of the proposed approach in terms of the quality of the resulting alignment. PMID:27382498

  14. The anatomy and pathophysiology of the wrist.

    PubMed

    Hooper, Geoffrey

    2006-04-28

    A basic knowledge of the anatomy and the interrelationships of the structures that make up the joint is a prerequisite for understanding the pathomechanics of the wrist. In this paper, the anatomy (especially the carpal ligaments) and the mechanics of wrist movements, including under load, are described. The features of the common wrist disorders that occur as a result of injury are also explained. PMID:17603435

  15. Computed tomography of the calcaneus: normal anatomy

    SciTech Connect

    Heger, L.; Wulff, K.

    1985-07-01

    The normal sectional anatomy of the calcaneus was studied as the background for interpretation of computed tomography (CT) of fractures. Multiplanar CT examination of the normal calcaneus was obtained, and sections were matched with a simplified anatomic model. Sectional anatomy in the four most important planes is described. This facilitates three-dimensional understanding of the calcaneus from sections and interpretation of CT sections obtained in any atypical plane.

  16. Hyperspectral image segmentation using active contours

    NASA Astrophysics Data System (ADS)

    Lee, Cheolha P.; Snyder, Wesley E.

    2004-08-01

    Multispectral or hyperspectral image processing has been studied as a possible approach to automatic target recognition (ATR). Hundreds of spectral bands may provide high data redundancy, compensating for the low contrast in medium wavelength infrared (MWIR) and long wavelength infrared (LWIR) images. Thus, the combined analysis of spectral (image intensity) and spatial (geometric feature) information could produce a substantial improvement. Active contours provide segments with continuous boundaries, while edge detectors based on local filtering often provide discontinuous boundaries. Segmentation by active contours depends on the geometric features of the object as well as on image intensity. However, the application of active contours to multispectral images has been limited to the case of simply textured images with a low number of frames. This paper presents a supervised active contour model that is applicable to vector-valued images with non-homogeneous regions and a high number of frames. In the training stage, histogram models of target classes are estimated from sample vector-pixels. In the test stage, contours are evolved based on two different metrics: the histogram models of the corresponding segments and the histogram models estimated from sample target vector-pixels. The proposed segmentation method integrates segmentation and model-based pattern matching using supervised segmentation and a multi-phase active contour model, whereas traditional methods apply pattern matching only after segmentation. The proposed algorithm is applied to both synthetic and real multispectral images, and shows desirable segmentation and classification results even in images with non-homogeneous regions.

  17. A Diffusion Tensor Imaging Tractography Algorithm Based on Navier-Stokes Fluid Mechanics

    PubMed Central

    Hageman, Nathan S.; Toga, Arthur W.; Narr, Katherine; Shattuck, David W.

    2009-01-01

    We introduce a fluid mechanics based tractography method for estimating the most likely connection paths between points in diffusion tensor imaging (DTI) volumes. We customize the Navier-Stokes equations to include information from the diffusion tensor and simulate an artificial fluid flow through the DTI image volume. We then estimate the most likely connection paths between points in the DTI volume using a metric derived from the fluid velocity vector field. We validate our algorithm using digital DTI phantoms based on a helical shape. Our method segmented the structure of the phantom with less distortion than was produced using implementations of heat-based partial differential equation (PDE) and streamline based methods. In addition, our method was able to successfully segment divergent and crossing fiber geometries, closely following the ideal path through a digital helical phantom in the presence of multiple crossing tracts. To assess the performance of our algorithm on anatomical data, we applied our method to DTI volumes from normal human subjects. Our method produced paths that were consistent with both known anatomy and directionally encoded color (DEC) images of the DTI dataset. PMID:19244007

  18. Roadmap for efficient parallelization of breast anatomy simulation

    NASA Astrophysics Data System (ADS)

    Chui, Joseph H.; Pokrajac, David D.; Maidment, Andrew D. A.; Bakic, Predrag R.

    2012-03-01

    A roadmap has been proposed to optimize the simulation of breast anatomy by parallel implementation, in order to reduce the time needed to generate software breast phantoms. The rapid generation of high resolution phantoms is needed to support virtual clinical trials of breast imaging systems. We have recently developed an octree-based recursive partitioning algorithm for breast anatomy simulation. The algorithm has good asymptotic complexity; however, its current MATLAB implementation cannot provide optimal execution times. The proposed roadmap for efficient parallelization includes the following steps: (i) migrate the current code to a C/C++ platform and optimize it for single-threaded implementation; (ii) modify the code to allow for multi-threaded CPU implementation; (iii) identify and migrate the code to a platform designed for multithreaded GPU implementation. In this paper, we describe our results in optimizing the C/C++ code for single-threaded and multi-threaded CPU implementations. As the first step of the proposed roadmap we have identified a bottleneck component in the MATLAB implementation using MATLAB's profiling tool, and created a single threaded CPU implementation of the algorithm using C/C++'s overloaded operators and standard template library. The C/C++ implementation has been compared to the MATLAB version in terms of accuracy and simulation time. A 520-fold reduction of the execution time was observed in a test of phantoms with 50-400 μm voxels. In addition, we have identified several places in the code which will be modified to allow for the next roadmap milestone of the multithreaded CPU implementation.

  19. Medical student participation in surface anatomy classes.

    PubMed

    Aggarwal, R; Brough, H; Ellis, H

    2006-10-01

    Surface anatomy is an integral part of medical education and enables medical students to learn skills for future medical practice. In the past decade, there has been a decline in the teaching of anatomy in the medical curriculum, and this study seeks to assess the attitudes of medical students to participation in surface anatomy classes. Consequently, all first year medical students at the Guy's, King's and St Thomas's Medical School, London, were asked to fill in an anonymous questionnaire at the end of their last surface anatomy session of the year. A total of 290 medical students completed the questionnaires, resulting in an 81.6% response rate. The students had a mean age of 19.6 years (range 18-32) and 104 (35.9%) of them were male. Seventy-six students (26.2%) were subjects in surface anatomy tutorials (60.5% male). Students generally volunteered because no one else did. Of the volunteers, 38.2% would rather not have been subjects, because of embarrassment, inability to make notes, or to see clearly the material being taught. Female medical students from ethnic minority groups were especially reluctant to volunteer to be subjects. Single-sex classes improved the volunteer rate to some extent, but not dramatically. Students appreciate the importance of surface anatomy to cadaveric study and to future clinical practice. Computer models, lectures, and videos are complementary but cannot be a substitute for peer group models, artists' models being the only alternative. PMID:16302232

  20. Sipunculans and segmentation

    PubMed Central

    Kristof, Alen; Brinkmann, Nora

    2009-01-01

    Comparative molecular, developmental and morphogenetic analyses show that the three major segmented animal groups—Lophotrochozoa, Ecdysozoa and Vertebrata—use a wide range of ontogenetic pathways to establish metameric body organization. Even in the life history of a single specimen, different mechanisms may act on the level of gene expression, cell proliferation, tissue differentiation and organ system formation in individual segments. Accordingly, in some polychaete annelids the first three pairs of segmental peripheral neurons arise synchronously, while the metameric commissures of the ventral nervous system form in anterior-posterior progression. Contrary to traditional belief, loss of segmentation may have occurred more often than commonly assumed, as exemplified in the sipunculans, which show remnants of segmentation in larval stages but are unsegmented as adults. The developmental plasticity and potential evolutionary lability of segmentation nourishes the controversy of a segmented bilaterian ancestor versus multiple independent evolution of segmentation in respective metazoan lineages. PMID:19513266

  1. Segmented trapped vortex cavity

    NASA Technical Reports Server (NTRS)

    Grammel, Jr., Leonard Paul (Inventor); Pennekamp, David Lance (Inventor); Winslow, Jr., Ralph Henry (Inventor)

    2010-01-01

    An annular trapped vortex cavity assembly segment includes a cavity forward wall, a cavity aft wall, and a cavity radially outer wall therebetween defining a cavity segment therein. A cavity opening extends between the forward and aft walls at a radially inner end of the assembly segment. Radially spaced apart pluralities of air injection first and second holes extend through the forward and aft walls respectively. The segment may include first and second expansion joint features at distal first and second ends respectively of the segment. The segment may include a forward subcomponent including the cavity forward wall attached to an aft subcomponent including the cavity aft wall. The forward and aft subcomponents include forward and aft portions of the cavity radially outer wall respectively. A ring of the segments may be circumferentially disposed about an axis to form an annular segmented vortex cavity assembly.

  2. [Bilateral segmental neurofibromatosis].

    PubMed

    Rose, I; Vakilzadeh, F

    1991-12-01

    Segmental neurofibromatosis is a rare type of neurofibromatosis. We report a case of bilateral manifestation, review the literature on this extremely uncommon variant, and discuss the possible causative mechanisms and the genetic risk of segmental neurofibromatosis. PMID:1765491

  3. Station Tour: Russian Segment

    NASA Video Gallery

    Expedition 33 Commander Suni Williams concludes her tour of the International Space Station with a visit to the Russian segment, which includes Zarya, the first segment of the station launched in 1...

  4. Tribute to a triad: history of splenic anatomy, physiology, and surgery--part 1.

    PubMed

    McClusky, D A; Skandalakis, L J; Colborn, G L; Skandalakis, J E

    1999-03-01

    The spleen is an enigmatic organ with a peculiar anatomy and physiology. Though our understanding of this organ has improved vastly over the years, the spleen continues to produce problems for the surgeon, the hematologist, and the patient. The history of the spleen is full of fables and myths, but it is also full of realities. In the Talmud, the Midrash, and the writings of Hippocrates, Plato, Aristotle, Galen, and several other giants of the past, one can find a lot of Delphian and Byzantine ambiguities. At that time, splenectomy was the art of surgery for many splenic diseases. From antiquity to the Renaissance, efforts were made to study the structure, functions, and anatomy of the spleen. Vesalius questioned Galen; and Malpighi, the founder of microscopic anatomy, gave a sound account of the histology and the physiologic destiny of the spleen. Surgical inquiry gradually became a focal point, yet it was still not clear what purpose the spleen served. It has been within the past 50 years that the most significant advances in the knowledge of the spleen and splenic surgery have been made. The work of Campos Christo in 1962 about the segmental anatomy of the spleen helped surgeons perform a partial splenectomy, thereby avoiding complications of postsplenectomy infection. With the recent successes of laparoscopic splenectomy in selected cases, the future of splenic surgery will undoubtedly bring many more changes. PMID:9933705

  5. The 2008 anatomy ceremony: essays.

    PubMed

    Elansary, Mei; Goldberg, Ben; Qian, Ting; Rizzolo, Lawrence J

    2009-03-01

    When asked to relate my experience of anatomy to the first-year medical and physician associate students at Yale before the start of their own first dissection, I found no better words to share than those of my classmates. Why speak with only one tongue, I said, when you can draw on 99 others? Anatomical dissection elicits what our course director, Lawrence Rizzolo, has called a "diversity of experience," which, in turn, engenders a diversity of expressions. For Yale medical and physician associate students, this diversity is captured each year in a ceremony dedicated to those who donated their bodies for dissection. The service is an opportunity to offer thanks, but because only students and faculty are in attendance, it is also a place to share and address the complicated tensions that arise while examining, invading, and ultimately disassembling another's body. It is our pleasure to present selected pieces from the ceremony to the Yale Journal of Biology and Medicine readership. PMID:19325944

  6. Brain anatomy in Diplura (Hexapoda)

    PubMed Central

    2012-01-01

    Background In the past decade neuroanatomy has proved to be a valuable source of character systems that provide insights into arthropod relationships. Since the most detailed description of dipluran brain anatomy dates back to Hanström (1940) we re-investigated the brains of Campodea augens and Catajapyx aquilonaris with modern neuroanatomical techniques. The analyses are based on antibody staining and 3D reconstruction of the major neuropils and tracts from semi-thin section series. Results Remarkable features of the investigated dipluran brains are a large central body, which is organized in nine columns and three layers, and well developed mushroom bodies with calyces receiving input from spheroidal olfactory glomeruli in the deutocerebrum. Antibody staining against a catalytic subunit of protein kinase A (DC0) was used to further characterize the mushroom bodies. The japygid Catajapyx aquilonaris possesses mushroom bodies which are connected across the midline, a unique condition within hexapods. Conclusions Mushroom body and central body structure shows a high correspondence between japygids and campodeids. Some unique features indicate that neuroanatomy further supports the monophyly of Diplura. In a broader phylogenetic context, however, the polarization of brain characters becomes ambiguous. The mushroom bodies and the central body of Diplura in several aspects resemble those of Dicondylia, suggesting homology. In contrast, Archaeognatha completely lack mushroom bodies and exhibit a central body organization reminiscent of certain malacostracan crustaceans. Several hypotheses of brain evolution at the base of the hexapod tree are discussed. PMID:23050723

  7. Molecular Anatomy of Palate Development

    PubMed Central

    Potter, Andrew S.; Potter, S. Steven

    2015-01-01

    The NIH FACEBASE consortium was established in part to create a central resource for craniofacial researchers. One purpose is to provide a molecular anatomy of craniofacial development. To this end we have used a combination of laser capture microdissection and RNA-Seq to define the gene expression programs driving development of the murine palate. We focused on the E14.5 palate, soon after medial fusion of the two palatal shelves. The palate was divided into multiple compartments, including both medial and lateral, as well as oral and nasal, for both the anterior and posterior domains. A total of 25 RNA-Seq datasets were generated. The results provide a comprehensive view of the region specific expression of all transcription factors, growth factors and receptors. Paracrine interactions can be inferred from flanking compartment growth factor/receptor expression patterns. The results are validated primarily through very high concordance with extensive previously published gene expression data for the developing palate. In addition selected immunostain validations were carried out. In conclusion, this report provides an RNA-Seq based atlas of gene expression patterns driving palate development at microanatomic resolution. This FACEBASE resource is designed to promote discovery by the craniofacial research community. PMID:26168040

  8. Possible and Impossible Segments.

    ERIC Educational Resources Information Center

    Walker, Rachel; Pullum, Geoffrey K.

    1999-01-01

    Examines the relationship between the phonetic possibility and phonological permissibility of segment types. Specific focus is on whether any phonetically impossible segments are phonologically permissible, and whether any phonetically possible segments are phonologically impermissible. Examines the case of nasality spreading in Sundanese…

  9. Building roof segmentation from aerial images using a lineand region-based watershed segmentation technique.

    PubMed

    El Merabet, Youssef; Meurie, Cyril; Ruichek, Yassine; Sbihi, Abderrahmane; Touahni, Raja

    2015-01-01

    In this paper, we present a novel strategy for roof segmentation from aerial images (orthophotoplans) based on the cooperation of edge- and region-based segmentation methods. The proposed strategy is composed of three major steps. The first one, called the pre-processing step, consists of simplifying the acquired image with an appropriate pair of invariant and gradient operators, optimized for the application, in order to limit the illumination changes (shadows, brightness, etc.) affecting the images. The second step is composed of two main parallel treatments: on the one hand, the simplified image is segmented by watershed regions. Although this first segmentation generally provides good results, the image is often over-segmented. To alleviate this problem, an efficient region-merging strategy adapted to the particularities of orthophotoplans, based on a 2D roof-ridge modeling technique, is applied. On the other hand, the simplified image is segmented by watershed lines. The third step consists of integrating both watershed segmentation strategies into a single cooperative segmentation scheme in order to achieve satisfactory segmentation results. Tests have been performed on orthophotoplans containing 100 roofs of varying complexity, and the results are evaluated with the VINET criterion using ground-truth image segmentation. A comparison with five popular segmentation techniques from the literature demonstrates the effectiveness and the reliability of the proposed approach. Indeed, we obtain a good segmentation rate of 96% with the proposed method compared to 87.5% with statistical region merging (SRM), 84% with mean shift, 82% with color structure code (CSC), 80% with the efficient graph-based segmentation algorithm (EGBIS) and 71% with JSEG. PMID:25648706
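
    For readers unfamiliar with watershed-region over-segmentation, the sketch below produces watershed regions of a gradient-magnitude image using SciPy's image-foresting-transform watershed. The marker-placement scheme (a regular grid) and the smoothing sigma are assumptions for illustration only; the paper's invariant/gradient pre-processing, ridge-based region merging and watershed-line cooperation are not reproduced.

        import numpy as np
        from scipy import ndimage

        def watershed_regions(gray, n_markers=200, sigma=2.0):
            """Over-segment a grayscale image into watershed regions of its
            gradient magnitude, starting from a regular grid of markers."""
            grad = ndimage.gaussian_gradient_magnitude(gray.astype(float), sigma=sigma)
            grad8 = np.uint8(255 * (grad - grad.min()) / (grad.max() - grad.min() + 1e-9))
            markers = np.zeros(gray.shape, dtype=np.int16)
            side = int(np.sqrt(n_markers))
            ys = np.linspace(0, gray.shape[0] - 1, side, dtype=int)
            xs = np.linspace(0, gray.shape[1] - 1, side, dtype=int)
            label = 1
            for y in ys:
                for x in xs:
                    markers[y, x] = label       # one seed label per grid point
                    label += 1
            return ndimage.watershed_ift(grad8, markers)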

  10. Crest lines for surface segmentation and flattening.

    PubMed

    Stylianou, Georgios; Farin, Gerald

    2004-01-01

    We present a method for extracting feature curves called crest lines from a triangulated surface. Then, we calculate the geodesic Voronoi diagram of crest lines to segment the surface into several regions. Afterward, barycentric surface flattening using theory from graph embeddings is implemented and, using the Geodesic Voronoi diagram, we develop a faster surface flattening algorithm. PMID:15794136

  11. Unsupervised tattoo segmentation combining bottom-up and top-down cues

    NASA Astrophysics Data System (ADS)

    Allen, Josef D.; Zhao, Nan; Yuan, Jiangbo; Liu, Xiuwen

    2011-06-01

    Tattoo segmentation is challenging due to the complexity and large variance of tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the remaining skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thereby transformed into a figure-ground segmentation. We have applied our segmentation algorithm to a tattoo dataset, and the results show that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.

  12. Robustness of Spann-Wilson segmentation on SAR imagery

    NASA Technical Reports Server (NTRS)

    Daida, Jason M.; Vesecky, John F.

    1992-01-01

    The performance of the Spann-Wilson algorithm on simulated synthetic aperture radar (SAR) images with varying degrees of speckle (one to four looks and varying amounts of white noise) is described. One hundred forty-eight test images are considered, most of which the algorithm segmented without any adjustment of its parameters. The effect of speckle on fractal boundaries is studied. The effect of varying multiplicative and additive noise distributions for a fixed set of segmentation parameters is examined. The modified Spann-Wilson algorithm is evaluated on four-look imagery.

  13. Multi-segment detector

    NASA Technical Reports Server (NTRS)

    George, Peter K. (Inventor)

    1978-01-01

    A plurality of stretcher detector segments are connected in series whereby detector signals generated when a bubble passes thereby are added together. Each of the stretcher detector segments is disposed an identical propagation distance away from passive replicators wherein bubbles are replicated from a propagation path and applied, simultaneously, to the stretcher detector segments. The stretcher detector segments are arranged to include both dummy and active portions thereof which are arranged to permit the geometry of both the dummy and active portions of the segment to be substantially matched.

  14. Color image segmentation

    NASA Astrophysics Data System (ADS)

    McCrae, Kimberley A.; Ruck, Dennis W.; Rogers, Steven K.; Oxley, Mark E.

    1994-03-01

    The most difficult stage of automated target recognition is segmentation. Current segmentation problems include faces and tactical targets; previous efforts to segment these objects have used intensity and motion cues. This paper develops a color preprocessing scheme to be used with other segmentation techniques. A neural network is trained to identify the color of a desired object, eliminating all but that color from the scene. Gabor correlations and 2D wavelet transformations will be performed on stationary images, and 3D wavelet transforms on multispectral data will incorporate color and motion detection into the machine vision system. The paper will demonstrate that color and motion cues can enhance a computer segmentation system. Results from segmenting faces both from the AFIT database and from videotaped television are presented; results from tactical targets such as tanks and airplanes are also given. Color preprocessing is shown to greatly improve the segmentation in most cases.

  15. Segmentation of moving object in complex environment

    NASA Astrophysics Data System (ADS)

    Yong, Yang; Wang, Jingru; Zhang, Qiheng

    2005-02-01

    This paper presents a new automatic image segmentation method for segmenting moving objects in complex environments by combining motion information with edge information. We propose an adaptive optical flow method based on the Horn-Schunck algorithm to estimate the optical flow field. Our method applies different smoothness constraints in different directions, and the optical flow constraint is applied according to the gradient magnitude. The Canny edge detector obtains most of the edge information but misses some pixels. In order to restore these missing pixels, the edges are grown based on the continuity of the optical flow field. Next, retaining only the block with the longest edge removes noise in the background, and the final segmentation result is then obtained. The experimental results demonstrate that this method can precisely segment moving objects in complex environments.
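
    For reference, the sketch below implements the classical Horn-Schunck iteration that the abstract builds on, with a single global smoothness weight alpha. The direction-dependent smoothness constraints and the gradient-weighted data term described above are not reproduced; the derivative filters and iteration count are conventional choices, not the paper's.

        import numpy as np
        from scipy import ndimage

        def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
            """Classical Horn-Schunck optical flow between two grayscale frames."""
            im1 = im1.astype(float)
            im2 = im2.astype(float)
            # spatial and temporal image derivatives
            Ix = ndimage.sobel(im1, axis=1) / 8.0
            Iy = ndimage.sobel(im1, axis=0) / 8.0
            It = im2 - im1
            u = np.zeros_like(im1)
            v = np.zeros_like(im1)
            avg = np.array([[0.0, 0.25, 0.0], [0.25, 0.0, 0.25], [0.0, 0.25, 0.0]])
            for _ in range(n_iter):
                u_avg = ndimage.convolve(u, avg)
                v_avg = ndimage.convolve(v, avg)
                num = Ix * u_avg + Iy * v_avg + It
                den = alpha ** 2 + Ix ** 2 + Iy ** 2
                u = u_avg - Ix * num / den      # Horn-Schunck update equations
                v = v_avg - Iy * num / den
            return u, v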

  16. Iterative approach to joint segmentation of cellular structures

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter; Scott, Richard; Ramachandran, Janakiramanan; Liu, Qiuhua; Khan, Faisal; Zeineh, Jack; Donovan, Michael; Fernandez, Gerardo

    2012-02-01

    Accurate segmentation of overlapping nuclei is essential in determining nuclei count and evaluating the sub-cellular localization of protein biomarkers in image cytometry and histology. Current cellular segmentation algorithms generally lack fast and reliable methods for disambiguating clumped nuclei. In immuno-fluorescence segmentation, solutions to challenges including nuclei misclassification, irregular boundaries, and under-segmentation require reliable separation of clumped nuclei. This paper presents a fast and accurate algorithm for joint segmentation of cellular cytoplasm and nuclei incorporating procedures for reliably separating overlapping nuclei. The algorithm utilizes a combination of ideas and is a significant improvement on state-of-the-art algorithms for this application. First, an adaptive process that includes top-hat filtering, blob detection and distance transforms estimates the inverse illumination field and corrects for intensity non-uniformity. Minimum-error-thresholding based binarization, augmented by statistical stability estimation, is applied prior to seed detection constrained by a distance-map-based scale selection to identify candidate seeds for nuclei segmentation. The nuclei clustering step also incorporates error estimation based on statistical stability. This enables the algorithm to perform localized error correction. Final steps include artifact removal and reclassification of nuclei objects near the cytoplasm boundary as epithelial or stroma. Evaluation using 48 realistic phantom images with known ground truth shows overall segmentation accuracy exceeding 96%. It significantly outperformed two state-of-the-art algorithms in clumped nuclei separation. Tests on 926 prostate biopsy images (326 patients) show that the improved segmentation increases the predictive power of nuclei architecture features based on the minimum spanning tree algorithm. The algorithm has been deployed in a large scale pathology application.
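
    The sketch below shows one common way to separate clumped nuclei with a distance transform, in the spirit of the seed-detection step described above: local maxima of the Euclidean distance map become seeds, and every foreground pixel is assigned to its nearest seed. The min_distance parameter and the maximum-filter maxima test are assumptions; the paper's statistical-stability estimation and scale selection are not reproduced.

        import numpy as np
        from scipy import ndimage

        def split_clumped_nuclei(binary_mask, min_distance=5):
            """Separate touching nuclei in a binary mask via distance-transform seeds."""
            dist = ndimage.distance_transform_edt(binary_mask)
            # local maxima of the distance map serve as one seed per nucleus
            local_max = ndimage.maximum_filter(dist, size=2 * min_distance + 1)
            maxima = (dist == local_max) & (dist > 0)
            seeds, n_seeds = ndimage.label(maxima)
            # assign every foreground pixel to the label of its nearest seed
            _, indices = ndimage.distance_transform_edt(seeds == 0, return_indices=True)
            separated = seeds[indices[0], indices[1]] * binary_mask
            return separated, n_seeds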

  17. Breast Tissue 3D Segmentation and Visualization on MRI

    PubMed Central

    Cui, Xiangfei; Sun, Feifei

    2013-01-01

    Tissue segmentation and visualization are useful for breast lesion detection and quantitative analysis. In this paper, a 3D segmentation algorithm based on Kernel-based Fuzzy C-Means (KFCM) is proposed to separate breast MR images into different tissues. Then, an improved volume rendering algorithm based on a new transfer function model is applied to implement 3D breast visualization. Experimental results are presented visually and show reasonable consistency. PMID:23983676
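
    For context, the sketch below implements plain fuzzy c-means on a 1-D feature (e.g. flattened MR intensities). The kernel-based variant (KFCM) used in the paper replaces the Euclidean distance with a kernel-induced distance; that substitution, as well as the choice of three clusters and the fuzzifier m = 2, are simplifying assumptions made here.

        import numpy as np

        def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=50, seed=0):
            """Plain fuzzy c-means clustering of a 1-D feature vector."""
            rng = np.random.default_rng(seed)
            x = x.reshape(-1, 1).astype(float)
            centers = x[rng.choice(len(x), n_clusters, replace=False)]   # (K, 1) initial centers
            for _ in range(n_iter):
                d = np.abs(x - centers.T) + 1e-9                # (N, K) distances
                u = 1.0 / (d ** (2.0 / (m - 1)))
                u /= u.sum(axis=1, keepdims=True)               # fuzzy memberships
                um = u ** m
                centers = (um.T @ x) / um.sum(axis=0)[:, None]  # membership-weighted means
            return u, centers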

  18. Segmenting Student Markets with a Student Satisfaction and Priorities Survey.

    ERIC Educational Resources Information Center

    Borden, Victor M. H.

    1995-01-01

    A market segmentation analysis of 872 university students compared 2 hierarchical clustering procedures for deriving market segments: 1 using matching-type measures and an agglomerative clustering algorithm, and 1 using the chi-square based automatic interaction detection. Results and implications for planning, evaluating, and improving academic…

  19. Four-chamber heart modeling and automatic segmentation for 3D cardiac CT volumes

    NASA Astrophysics Data System (ADS)

    Zheng, Yefeng; Georgescu, Bogdan; Barbu, Adrian; Scheuering, Michael; Comaniciu, Dorin

    2008-03-01

    Multi-chamber heart segmentation is a prerequisite for quantification of the cardiac function. In this paper, we propose an automatic heart chamber segmentation system. There are two closely related tasks in developing such a system: heart modeling and automatic model fitting to an unseen volume. The heart is a complicated non-rigid organ with four chambers and several major vessel trunks attached. A flexible and accurate model is necessary to capture the heart chamber shape at an appropriate level of detail. In our four-chamber surface mesh model, the following two factors are considered and traded off: 1) accuracy in anatomy and 2) ease of both annotation and automatic detection. Important landmarks such as valves and cusp points on the interventricular septum are explicitly represented in our model. These landmarks can be detected reliably to guide the automatic model fitting process. We also propose two mechanisms, the rotation-axis based and parallel-slice based resampling methods, to establish mesh point correspondence, which is necessary to build a statistical shape model to enforce prior shape constraints in the model fitting procedure. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3D computed tomography (CT) volumes. Our approach is based on recent advances in learning discriminative object models, and we exploit a large database of annotated CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. A novel algorithm, Marginal Space Learning (MSL), is introduced to solve the 9-dimensional similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3D shape through learning-based boundary delineation. Extensive experiments demonstrate the efficiency and robustness of the proposed approach, comparing favorably to the state-of-the-art.

  20. Fizeau interferometric cophasing of segmented mirrors: experimental validation.

    PubMed

    Cheetham, Anthony; Cvetojevic, Nick; Norris, Barnaby; Sivaramakrishnan, Anand; Tuthill, Peter

    2014-06-01

    We present an optical testbed demonstration of the Fizeau Interferometric Cophasing of Segmented Mirrors (FICSM) algorithm. FICSM allows a segmented mirror to be phased with a science imaging detector and three filters (selected from the normal science complement). It requires no specialised, dedicated wavefront sensing hardware. Applying random piston and tip/tilt aberrations of more than 5 wavelengths to a small segmented mirror array produced an initial unphased point spread function with an estimated Strehl ratio of 9%, which served as the starting point for our phasing algorithm. After using the FICSM algorithm to cophase the pupil, we estimated a Strehl ratio of 94% based on a comparison between our data and simulated encircled energy metrics. Our final image quality is limited by the accuracy of our segment actuation, which yields a root mean square (RMS) wavefront error of 25 nm. This is the first hardware demonstration of coarse and fine phasing of an 18-segment pupil with the James Webb Space Telescope (JWST) geometry using a single algorithm. FICSM can be implemented on JWST using any of its scientific imaging cameras, making it useful as a fall-back in the event that accepted phasing strategies encounter problems. We present an operational sequence that would co-phase such an 18-segment primary in 3 sequential iterations of the FICSM algorithm. Similar sequences can be readily devised for any segmented mirror. PMID:24921490