Sample records for statistical segmentation method

  1. Validation tools for image segmentation

    NASA Astrophysics Data System (ADS)

    Padfield, Dirk; Ross, James

    2009-02-01

    A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics comparing the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiment framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied is able to outperform the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.
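
    The comparison of automatic to manual segmentations in this record rests on overlap metrics. As a minimal sketch, here is one common such metric, the Dice coefficient, assuming binary masks stored as NumPy arrays; the paper's full metric set and its Design of Experiment optimization are not reproduced.

    ```python
    import numpy as np

    def dice_coefficient(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
        """Dice overlap between two binary segmentation masks (1 = object)."""
        a = auto_mask.astype(bool)
        m = manual_mask.astype(bool)
        intersection = np.logical_and(a, m).sum()
        total = a.sum() + m.sum()
        return 2.0 * intersection / total if total > 0 else 1.0

    # Toy example: two overlapping square "segmentations".
    auto = np.zeros((64, 64), dtype=np.uint8); auto[10:40, 10:40] = 1
    manual = np.zeros((64, 64), dtype=np.uint8); manual[15:45, 15:45] = 1
    print(f"Dice = {dice_coefficient(auto, manual):.3f}")
    ```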

  2. Whole vertebral bone segmentation method with a statistical intensity-shape model based approach

    NASA Astrophysics Data System (ADS)

    Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer

    2011-03-01

    An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing 4 different statistical intensity-shape combined models for the cervical, upper thoracic, lower thoracic, and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as pre-processing to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After principal component analysis (PCA) of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral area). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, and 0.939 mm for the cervical, upper thoracic, lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed fair performance for cervical, thoracic, and lumbar vertebrae.
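
    The modeling step described here is standard: after PCA, each vertebra is expressed as the mean shape-intensity vector plus a linear combination of principal component vectors. A minimal sketch under that reading, with synthetic training vectors standing in for the real shape-intensity expressions:

    ```python
    import numpy as np

    def build_pca_shape_model(training_shapes: np.ndarray, n_modes: int):
        """training_shapes: (n_samples, n_features) stacked shape-intensity vectors."""
        mean = training_shapes.mean(axis=0)
        centered = training_shapes - mean
        # SVD of the centered data gives the principal modes of variation.
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        modes = vt[:n_modes]                                    # (n_modes, n_features)
        variances = (s[:n_modes] ** 2) / (len(training_shapes) - 1)
        return mean, modes, variances

    def synthesize(mean, modes, coeffs):
        """Instance = mean + sum_k b_k * mode_k (linear combination of PCs)."""
        return mean + coeffs @ modes

    rng = np.random.default_rng(0)
    shapes = rng.normal(size=(20, 300))         # 20 synthetic training vectors
    mean, modes, var = build_pca_shape_model(shapes, n_modes=5)
    instance = synthesize(mean, modes, rng.normal(scale=np.sqrt(var)))
    ```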

  3. Semi-automatic medical image segmentation with adaptive local statistics in Conditional Random Fields framework.

    PubMed

    Hu, Yu-Chi J; Grossberg, Michael D; Mageras, Gikas S

    2008-01-01

    Planning radiotherapy and surgical procedures usually requires onerous manual segmentation of anatomical structures from medical images. In this paper we present a semi-automatic and accurate segmentation method to dramatically reduce the time and effort required of expert users. This is accomplished by giving a user an intuitive graphical interface to indicate samples of target and non-target tissue by loosely drawing a few brush strokes on the image. We use these brush strokes to provide the statistical input for a Conditional Random Field (CRF) based segmentation. Since we extract purely statistical information from the user input, we eliminate the need for assumptions on boundary contrast previously used by many other methods. A new feature of our method is that the statistics on one image can be reused on related images without registration. To demonstrate this, we show that boundary statistics provided on a few 2D slices of volumetric medical data can be propagated through the entire 3D stack of images without using the geometric correspondence between images. In addition, the image segmentation from the CRF can be formulated as a minimum s-t graph cut problem whose solution is both globally optimal and fast. The combination of fast segmentation and minimal, reusable user input makes this a powerful technique for the segmentation of medical images.

  4. Statistical image segmentation for the detection of skin lesion borders in UV fluorescence excitation

    NASA Astrophysics Data System (ADS)

    Ortega-Martinez, Antonio; Padilla-Martinez, Juan Pablo; Franco, Walfre

    2016-04-01

    The skin contains several fluorescent molecules, or fluorophores, that serve as markers of structure, function and composition. UV fluorescence excitation photography is a simple and effective way to image specific intrinsic fluorophores, such as the one ascribed to tryptophan, which emits at a wavelength of 345 nm upon excitation at 295 nm and is a marker of cellular proliferation. Earlier, we built a clinical UV photography system to image cellular proliferation. In some samples, the naturally low intensity of the fluorescence can make it difficult to separate the fluorescence of cells in higher proliferation states from background fluorescence and other imaging artifacts -- like electronic noise. In this work, we describe a statistical image segmentation method to separate the fluorescence of interest. Statistical image segmentation is based on image averaging, background subtraction and pixel statistics. This method allows better quantification of the intensity and surface distributions of fluorescence, which in turn simplifies the detection of borders. Using this method, we delineated the borders of highly-proliferative skin conditions and diseases, in particular allergic contact dermatitis, psoriatic lesions and basal cell carcinoma. Segmented images clearly define lesion borders. UV fluorescence excitation photography along with statistical image segmentation may serve as a quick and simple diagnostic tool for clinicians.
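
    The abstract names the ingredients (image averaging, background subtraction, pixel statistics) but not the exact decision rule. The sketch below is one plausible reading, with a hypothetical mean + k*std threshold on the background-subtracted average frame; the parameter k and the synthetic data are assumptions.

    ```python
    import numpy as np

    def statistical_segmentation(frames: np.ndarray, background: np.ndarray, k: float = 2.0):
        """frames: (n, H, W) repeated UV-fluorescence exposures of the same field.

        Averaging suppresses electronic noise; subtracting a background frame
        removes imaging artifacts; pixels brighter than mean + k*std of the
        residual image are kept as fluorescence of interest.
        """
        avg = frames.mean(axis=0)
        residual = avg - background
        threshold = residual.mean() + k * residual.std()
        return residual > threshold

    rng = np.random.default_rng(1)
    bg = rng.normal(10, 1, size=(128, 128))
    stack = bg + rng.normal(0, 2, size=(16, 128, 128))
    stack[:, 40:70, 40:70] += 8          # simulated high-proliferation lesion
    mask = statistical_segmentation(stack, bg)
    ```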

  5. A segmentation editing framework based on shape change statistics

    NASA Astrophysics Data System (ADS)

    Mostapha, Mahmoud; Vicory, Jared; Styner, Martin; Pizer, Stephen

    2017-02-01

    Segmentation is a key task in medical image analysis because its accuracy significantly affects successive steps. Automatic segmentation methods often produce inadequate segmentations, which require the user to manually edit the produced segmentation slice by slice. Because editing is time-consuming, an editing tool that enables the user to produce accurate segmentations by drawing only a sparse set of contours is needed. This paper describes such a framework as applied to a single object. Constrained by the additional information provided by the manually segmented contours, the proposed framework utilizes object shape statistics to transform the failed automatic segmentation into a more accurate version. Instead of modeling the object shape, the proposed framework utilizes shape change statistics that were generated to capture the object deformation from the failed automatic segmentation to its corresponding correct segmentation. An optimization procedure was used to minimize an energy function that consists of two terms, an external contour match term and an internal shape change regularity term. The high accuracy of the proposed segmentation editing approach was confirmed by testing it on a simulated data set based on 10 in-vivo infant magnetic resonance brain data sets using four similarity metrics. Segmentation results indicated that our method can provide efficient and adequately accurate segmentations (a Dice segmentation accuracy increase of 10%) with very sparse contours (only 10%), which promises to greatly decrease the work required of the user.

  6. Localized Statistics for DW-MRI Fiber Bundle Segmentation

    PubMed Central

    Lankton, Shawn; Melonakos, John; Malcolm, James; Dambreville, Samuel; Tannenbaum, Allen

    2013-01-01

    We describe a method for segmenting neural fiber bundles in diffusion-weighted magnetic resonance images (DW-MRI). As these bundles traverse the brain to connect regions, their local orientation of diffusion changes drastically, hence a constant global model is inaccurate. We propose a method to compute localized statistics on orientation information and use it to drive a variational active contour segmentation that accurately models the non-homogeneous orientation information present along the bundle. Initialized from a single fiber path, the proposed method proceeds to capture the entire bundle. We demonstrate results using the technique to segment the cingulum bundle and describe several extensions making the technique applicable to a wide range of tissues. PMID:23652079

  7. A variational approach to liver segmentation using statistics from multiple sources

    NASA Astrophysics Data System (ADS)

    Zheng, Shenhai; Fang, Bin; Li, Laquan; Gao, Mingqi; Wang, Yi

    2018-01-01

    Medical image segmentation plays an important role in digital medical research, and therapy planning and delivery. However, the presence of noise and low contrast renders automatic liver segmentation an extremely challenging task. In this study, we focus on a variational approach to liver segmentation in computed tomography scan volumes in a semiautomatic and slice-by-slice manner. In this method, one slice is selected and its connected component liver region is determined manually to initialize the subsequent automatic segmentation process. From this guiding slice, we execute the proposed method downward to the last slice and upward to the first one, respectively. A segmentation energy function is proposed by combining the statistical shape prior, global Gaussian intensity analysis, and an enforced local statistical feature under the level set framework. During segmentation, the shape of the liver is estimated by minimization of this function. The improved Chan-Vese model is used to refine the shape to capture the long and narrow regions of the liver. The proposed method was verified on two independent public databases, 3D-IRCADb and SLIVER07. Among all the tested methods, our method yielded the best volumetric overlap error (VOE) of 6.5 ± 2.8%, the best root mean square symmetric surface distance (RMSD) of 2.1 ± 0.8 mm, and the best maximum symmetric surface distance (MSD) of 18.9 ± 8.3 mm on the 3D-IRCADb dataset, and the best average symmetric surface distance (ASD) of 0.8 ± 0.5 mm and the best RMSD of 1.5 ± 1.1 mm on the SLIVER07 dataset. The results of the quantitative comparison show that the proposed liver segmentation method achieves competitive segmentation performance with state-of-the-art techniques.
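
    Of the volume- and surface-based metrics reported above, VOE is the simplest to state. A minimal sketch, assuming binary NumPy volumes; the surface-distance metrics (RMSD, MSD, ASD) are not reproduced here.

    ```python
    import numpy as np

    def volumetric_overlap_error(seg: np.ndarray, ref: np.ndarray) -> float:
        """VOE (%) = 100 * (1 - |A intersect B| / |A union B|) for binary volumes."""
        a, b = seg.astype(bool), ref.astype(bool)
        union = np.logical_or(a, b).sum()
        inter = np.logical_and(a, b).sum()
        return 100.0 * (1.0 - inter / union) if union > 0 else 0.0

    seg = np.zeros((32, 32, 32), bool); seg[8:24, 8:24, 8:24] = True
    ref = np.zeros((32, 32, 32), bool); ref[10:26, 8:24, 8:24] = True
    print(f"VOE = {volumetric_overlap_error(seg, ref):.1f}%")
    ```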

  8. SEGMENTING CT PROSTATE IMAGES USING POPULATION AND PATIENT-SPECIFIC STATISTICS FOR RADIOTHERAPY.

    PubMed

    Feng, Qianjin; Foskey, Mark; Tang, Songyuan; Chen, Wufan; Shen, Dinggang

    2009-08-07

    This paper presents a new deformable model using both population and patient-specific statistics to segment the prostate from CT images. There are two novelties in the proposed method. First, a modified scale invariant feature transform (SIFT) local descriptor, which is more distinctive than general intensity and gradient features, is used to characterize the image features. Second, an online training approach is used to build the shape statistics for accurately capturing intra-patient variation, which is more important than inter-patient variation for prostate segmentation in clinical radiotherapy. Experimental results show that the proposed method is robust and accurate, suitable for clinical application.

  9. SEGMENTING CT PROSTATE IMAGES USING POPULATION AND PATIENT-SPECIFIC STATISTICS FOR RADIOTHERAPY

    PubMed Central

    Feng, Qianjin; Foskey, Mark; Tang, Songyuan; Chen, Wufan; Shen, Dinggang

    2010-01-01

    This paper presents a new deformable model using both population and patient-specific statistics to segment the prostate from CT images. There are two novelties in the proposed method. First, a modified scale invariant feature transform (SIFT) local descriptor, which is more distinctive than general intensity and gradient features, is used to characterize the image features. Second, an online training approach is used to build the shape statistics for accurately capturing intra-patient variation, which is more important than inter-patient variation for prostate segmentation in clinical radiotherapy. Experimental results show that the proposed method is robust and accurate, suitable for clinical application. PMID:21197416

  10. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    PubMed

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of the volumetric overlap metric, by comparing with the ground-truth segmentation performed by a radiologist.

  11. Prostate segmentation in MRI using a convolutional neural network architecture and training strategy based on statistical shape models.

    PubMed

    Karimi, Davood; Samei, Golnoosh; Kesch, Claudia; Nir, Guy; Salcudean, Septimiu E

    2018-05-15

    Most of the existing convolutional neural network (CNN)-based medical image segmentation methods are based on methods that were originally developed for segmentation of natural images. Therefore, they largely ignore the differences between the two domains, such as the smaller degree of variability in the shape and appearance of the target volume and the smaller amounts of training data in medical applications. We propose a CNN-based method for prostate segmentation in MRI that employs statistical shape models to address these issues. Our CNN predicts the location of the prostate center and the parameters of the shape model, which determine the position of prostate surface keypoints. To train such a large model for segmentation of 3D images using small data: (1) we adopt a stage-wise training strategy by first training the network to predict the prostate center and subsequently adding modules for predicting the parameters of the shape model and prostate rotation, (2) we propose a data augmentation method whereby the training images and their prostate surface keypoints are deformed according to the displacements computed based on the shape model, and (3) we employ various regularization techniques. Our proposed method achieves a Dice score of 0.88, which is obtained by using both elastic-net and spectral dropout for regularization. Compared with a standard CNN-based method, our method shows significantly better segmentation performance on the prostate base and apex. Our experiments also show that data augmentation using the shape model significantly improves the segmentation results. Prior knowledge about the shape of the target organ can improve the performance of CNN-based segmentation methods, especially where image features are not sufficient for a precise segmentation. Statistical shape models can also be employed to synthesize additional training data that can ease the training of large CNNs.
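
    Augmentation step (2) deforms training images and keypoints according to shape-model displacements. The sketch below illustrates only the sampling half of that idea, drawing new plausible keypoint configurations by perturbing PCA coefficients within their training variances; the image-warping half is omitted, and all names and data here are illustrative, not the authors' code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Toy training set: 30 configurations of 50 surface keypoints in 3-D, flattened.
    train = rng.normal(size=(30, 150))

    mean = train.mean(axis=0)
    _, s, vt = np.linalg.svd(train - mean, full_matrices=False)
    modes, lam = vt[:10], (s[:10] ** 2) / (len(train) - 1)

    def augment(n_aug: int) -> np.ndarray:
        """Sample coefficients b_k ~ N(0, lambda_k) and synthesize new keypoint
        sets; the corresponding image would be warped by the induced displacement."""
        b = rng.normal(scale=np.sqrt(lam), size=(n_aug, len(lam)))
        return mean + b @ modes

    augmented = augment(100)   # 100 synthetic keypoint configurations
    ```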

  12. Statistical Validation of Automatic Methods for Hippocampus Segmentation in MR Images of Epileptic Patients

    PubMed Central

    Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad R.; Pompili, Dario; Soltanian-Zadeh, Hamid

    2015-01-01

    Hippocampus segmentation is a key step in the evaluation of mesial Temporal Lobe Epilepsy (mTLE) by MR images. Several automated segmentation methods have been introduced for medical image segmentation. Because of multiple edges, missing boundaries, and shape changes along its longitudinal axis, manual outlining still remains the benchmark for hippocampus segmentation, which, however, is impractical for large datasets due to time constraints. In this study, four automatic methods, namely FreeSurfer, Hammer, Automatic Brain Structure Segmentation (ABSS), and LocalInfo segmentation, are evaluated to find the most accurate and applicable method that best approximates the manual benchmark for the hippocampus. Results from these four methods are compared against those obtained using manual segmentation for T1-weighted images of 157 symptomatic mTLE patients. For performance evaluation of automatic segmentation, the Dice coefficient, Hausdorff distance, precision, and root mean square (RMS) distance are extracted and compared. Among these four automated methods, ABSS generates the most accurate results, and its reproducibility is closest to that of expert manual outlining according to statistical validation. At p < 0.05, the performance measurements reveal that, for ABSS, Dice is 4%, 13%, and 17% higher; Hausdorff is 23%, 87%, and 70% lower; precision is 5%, -5%, and 12% higher; and RMS is 19%, 62%, and 65% lower compared to LocalInfo, FreeSurfer, and Hammer, respectively. PMID:25571043
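
    Among the four reported metrics, the Hausdorff distance is the least standard to implement by hand; SciPy provides the directed version. A minimal sketch, assuming boundary voxel coordinates as point sets:

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def hausdorff(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
        """Symmetric Hausdorff distance between two point sets, e.g. boundary
        voxels of an automatic and a manual hippocampus segmentation."""
        return max(directed_hausdorff(a_pts, b_pts)[0],
                   directed_hausdorff(b_pts, a_pts)[0])

    a = np.argwhere(np.pad(np.ones((5, 5)), 2) > 0)   # toy boundary point set
    b = a + np.array([1, 0])                          # shifted copy
    print(hausdorff(a, b))                            # 1.0
    ```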

  13. A new 2D segmentation method based on dynamic programming applied to computer aided detection in mammography.

    PubMed

    Timp, Sheila; Karssemeijer, Nico

    2004-05-01

    Mass segmentation plays a crucial role in computer-aided diagnosis (CAD) systems for classification of suspicious regions as normal, benign, or malignant. In this article, we present a robust and automated segmentation technique--based on dynamic programming--to segment mass lesions from surrounding tissue. In addition, we propose an efficient algorithm to guarantee that the resulting contours are closed. The segmentation method based on dynamic programming was quantitatively compared with two other automated segmentation methods (region growing and the discrete contour model) on a dataset of 1210 masses. For each mass an overlap criterion was calculated to determine the similarity with manual segmentation. The mean overlap percentage for dynamic programming was 0.69, for the other two methods 0.60 and 0.59, respectively. The difference in overlap percentage was statistically significant. To study the influence of the segmentation method on the performance of a CAD system two additional experiments were carried out. The first experiment studied the detection performance of the CAD system for the different segmentation methods. Free-response receiver operating characteristics analysis showed that the detection performance was nearly identical for the three segmentation methods. In the second experiment the ability of the classifier to discriminate between malignant and benign lesions was studied. For region-based evaluation the area Az under the receiver operating characteristics curve was 0.74 for dynamic programming, 0.72 for the discrete contour model, and 0.67 for region growing. The difference in Az values obtained by the dynamic programming method and region growing was statistically significant. The differences between other methods were not significant.
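
    As a toy illustration of the dynamic-programming idea, the sketch below finds a minimum-cost left-to-right path through a cost image; for mass segmentation such an image would typically be built in polar coordinates around the lesion (rows indexing radius, columns indexing angle). The paper's actual cost function and its closed-contour guarantee are more involved.

    ```python
    import numpy as np

    def min_cost_path(cost: np.ndarray) -> np.ndarray:
        """Cheapest left-to-right path through a cost image, moving at most one
        row per column. Returns the row chosen for each column."""
        n_rows, n_cols = cost.shape
        acc = cost.copy()
        step = np.zeros((n_rows, n_cols), dtype=int)
        for j in range(1, n_cols):
            for i in range(n_rows):
                lo, hi = max(i - 1, 0), min(i + 2, n_rows)
                k = lo + int(np.argmin(acc[lo:hi, j - 1]))
                acc[i, j] = cost[i, j] + acc[k, j - 1]
                step[i, j] = k
        path = np.empty(n_cols, dtype=int)
        path[-1] = int(np.argmin(acc[:, -1]))
        for j in range(n_cols - 1, 0, -1):   # backtrack the stored predecessors
            path[j - 1] = step[path[j], j]
        return path

    rng = np.random.default_rng(2)
    cost = rng.random((40, 90))
    cost[20, :] = 0.0                        # an obvious low-cost contour at row 20
    print(min_cost_path(cost)[:10])
    ```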

  14. Interactive semiautomatic contour delineation using statistical conditional random fields framework.

    PubMed

    Hu, Yu-Chi; Grossberg, Michael D; Wu, Abraham; Riaz, Nadeem; Perez, Carmen; Mageras, Gig S

    2012-07-01

    Contouring a normal anatomical structure during radiation treatment planning requires significant time and effort. The authors present a fast and accurate semiautomatic contour delineation method to reduce the time and effort required of expert users. Following an initial segmentation on one CT slice, the user marks the target organ and nontarget pixels with a few simple brush strokes. The algorithm calculates statistics from this information that, in turn, determine the parameters of an energy function containing both boundary and regional components. The method uses a conditional random field graphical model to define the energy function to be minimized for obtaining an estimated optimal segmentation, and a graph partition algorithm to efficiently solve the energy function minimization. Organ boundary statistics are estimated from the segmentation and propagated to subsequent images; regional statistics are estimated from the simple brush strokes that are either propagated or redrawn as needed on subsequent images. This greatly reduces the user input needed and speeds up segmentations. The proposed method can be further accelerated with graph-based interpolation of alternating slices in place of user-guided segmentation. CT images from phantoms and patients were used to evaluate this method. The authors determined the sensitivity and specificity of organ segmentations using physician-drawn contours as ground truth, as well as the predicted-to-ground truth surface distances. Finally, three physicians evaluated the contours for subjective acceptability. Interobserver and intraobserver analysis was also performed and Bland-Altman plots were used to evaluate agreement. Liver and kidney segmentations in patient volumetric CT images show that boundary samples provided on a single CT slice can be reused through the entire 3D stack of images to obtain accurate segmentation. In liver, our method has better sensitivity and specificity (0.925 and 0.995) than region growing (0.897 and 0.995) and level set methods (0.912 and 0.985) as well as shorter mean predicted-to-ground truth distance (2.13 mm) compared to region growing (4.58 mm) and level set methods (8.55 mm and 4.74 mm). Similar results are observed in kidney segmentation. Physician evaluation of ten liver cases showed that 83% of contours did not need any modification, while 6% of contours needed modifications as assessed by two or more evaluators. In interobserver and intraobserver analysis, Bland-Altman plots showed our method to have better repeatability than the manual method while the delineation time was 15% faster on average. Our method achieves high accuracy in liver and kidney segmentation and considerably reduces the time and labor required for contour delineation. Since it extracts purely statistical information from the samples interactively specified by expert users, the method avoids heuristic assumptions commonly used by other methods. In addition, the method can be expanded to 3D directly without modification because the underlying graphical framework and graph partition optimization method fit naturally with the image grid structure.
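
    The energy minimization named here (a CRF energy solved by graph partition) can be illustrated with a minimum s-t cut on a tiny grid. A minimal sketch using networkx, with quadratic unary costs standing in for the brush-stroke statistics and a Potts pairwise term; this illustrates the standard construction, not the authors' implementation.

    ```python
    import numpy as np
    import networkx as nx

    def graph_cut_segment(fg_cost: np.ndarray, bg_cost: np.ndarray, lam: float = 0.05):
        """Minimize E(x) = sum_p U_p(x_p) + lam * sum_{p~q} [x_p != x_q] over
        binary labels via a minimum s-t cut on a 4-connected grid."""
        h, w = fg_cost.shape
        G = nx.DiGraph()
        for i in range(h):
            for j in range(w):
                p = (i, j)
                # t-links: cap(s->p) is paid if p lands on the sink (background)
                # side; cap(p->t) is paid if p lands on the source (object) side.
                G.add_edge('s', p, capacity=float(bg_cost[i, j]))
                G.add_edge(p, 't', capacity=float(fg_cost[i, j]))
                # n-links: Potts smoothness between 4-neighbors.
                for q in ((i + 1, j), (i, j + 1)):
                    if q[0] < h and q[1] < w:
                        G.add_edge(p, q, capacity=lam)
                        G.add_edge(q, p, capacity=lam)
        _, (source_side, _) = nx.minimum_cut(G, 's', 't')
        label = np.zeros((h, w), dtype=bool)
        for node in source_side - {'s'}:
            label[node] = True
        return label

    rng = np.random.default_rng(3)
    img = rng.normal(0.2, 0.1, (12, 12)); img[3:9, 3:9] += 0.6
    seg = graph_cut_segment((img - 0.8) ** 2, (img - 0.2) ** 2)
    ```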

  15. Segmenting lung fields in serial chest radiographs using both population-based and patient-specific shape statistics.

    PubMed

    Shi, Y; Qi, F; Xue, Z; Chen, L; Ito, K; Matsuo, H; Shen, D

    2008-04-01

    This paper presents a new deformable model using both population-based and patient-specific shape statistics to segment lung fields from serial chest radiographs. There are two novelties in the proposed deformable model. First, a modified scale invariant feature transform (SIFT) local descriptor, which is more distinctive than general intensity and gradient features, is used to characterize the image features in the vicinity of each pixel. Second, the deformable contour is constrained by both population-based and patient-specific shape statistics, which yields more robust and accurate segmentation of lung fields for serial chest radiographs. In particular, for segmenting the initial time-point images, the population-based shape statistics is used to constrain the deformable contour; as more subsequent images of the same patient are acquired, the patient-specific shape statistics collected online from the previous segmentation results gradually takes on a larger role. Thus, the patient-specific shape statistics is updated each time a new segmentation result is obtained, and it is further used to refine the segmentation results of all the available time-point images. Experimental results show that the proposed method is more robust and accurate than other active shape models in segmenting the lung fields from serial chest radiographs.
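
    The abstract does not give the rule by which the patient-specific statistics "gradually takes on a larger role". The sketch below assumes a simple linear blend of the population and patient-specific shape distributions controlled by the number of available time points; the blending rule and the n_full parameter are assumptions for illustration only.

    ```python
    import numpy as np

    def blended_shape_prior(pop_mean, pop_cov, patient_shapes, n_full: int = 5):
        """Shift the shape prior from population statistics toward
        patient-specific statistics as segmented time points accumulate."""
        n = len(patient_shapes)
        if n == 0:
            return pop_mean, pop_cov
        w = min(n / n_full, 1.0)             # weight of the patient-specific model
        arr = np.asarray(patient_shapes)
        pat_mean = arr.mean(axis=0)
        pat_cov = np.cov(arr.T) if n > 1 else pop_cov
        return (1 - w) * pop_mean + w * pat_mean, (1 - w) * pop_cov + w * pat_cov

    pop_mean, pop_cov = np.zeros(8), np.eye(8)
    rng = np.random.default_rng(4)
    shapes = [0.5 + 0.1 * rng.normal(size=8) for _ in range(3)]
    mean, cov = blended_shape_prior(pop_mean, pop_cov, shapes)
    ```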

  16. Interrupted Time Series Versus Statistical Process Control in Quality Improvement Projects.

    PubMed

    Andersson Hagiwara, Magnus; Andersson Gäre, Boel; Elg, Mattias

    2016-01-01

    To measure the effect of quality improvement interventions, it is appropriate to use analysis methods that measure data over time. Examples of such methods include statistical process control analysis and interrupted time series with segmented regression analysis. This article compares the use of statistical process control analysis and interrupted time series with segmented regression analysis for evaluating the longitudinal effects of quality improvement interventions, using an example study on an evaluation of a computerized decision support system.
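
    The segmented-regression half of this comparison has a compact closed form: a level-and-slope-change model fit by least squares. A minimal sketch with simulated data and an intervention at t0:

    ```python
    import numpy as np

    def segmented_regression(y: np.ndarray, t0: int):
        """Interrupted time series model
        y_t = b0 + b1*t + b2*post_t + b3*(t - t0)*post_t, fit by least squares;
        b2 is the level change and b3 the slope change at the intervention."""
        t = np.arange(len(y), dtype=float)
        post = (t >= t0).astype(float)
        X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta

    rng = np.random.default_rng(5)
    t = np.arange(48)
    y = 5 + 0.1 * t + (t >= 24) * (2 + 0.2 * (t - 24)) + rng.normal(0, 0.5, 48)
    print(segmented_regression(y, t0=24))    # approximately [5, 0.1, 2, 0.2]
    ```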

  17. The Spiral Arm Segments of the Galaxy within 3 kpc from the Sun: A Statistical Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griv, Evgeny; Jiang, Ing-Guey; Hou, Li-Gang

    As can be reasonably expected, upcoming large-scale APOGEE, GAIA, GALAH, LAMOST, and WEAVE stellar spectroscopic surveys will yield rather noisy Galactic distributions of stars. In view of the possibility of employing these surveys, our aim is to present a statistical method to extract information about the spiral structure of the Galaxy from currently available data, and to demonstrate the effectiveness of this method. The model differs from previous works studying how objects are distributed in space in its calculation of the statistical significance of the hypothesis that some of the objects are actually concentrated in a spiral. A statistical analysis of the distribution of cold dust clumps within molecular clouds, H II regions, Cepheid stars, and open clusters in the nearby Galactic disk within 3 kpc from the Sun is carried out. As an application of the method, we obtain distances between the Sun and the centers of the neighboring Sagittarius arm segment, the Orion arm segment in which the Sun is located, and the Perseus arm segment. Pitch angles of the logarithmic spiral segments and their widths are also estimated. The hypothesis that the collected objects accidentally form spirals is refuted with almost 100% statistical confidence. We show that these four independent distributions of young objects lead to essentially the same results. We also demonstrate that our newly deduced values of the mean distances and pitch angles for the segments are not too far from those found recently by Reid et al. using VLBI-based trigonometric parallaxes of massive star-forming regions.
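
    The pitch-angle estimate for a logarithmic spiral segment reduces to a linear fit in (theta, ln r) coordinates, since r(theta) = r0 * exp(theta * tan p). A minimal sketch of that reduction on synthetic positions; the paper's significance calculation for the spiral-concentration hypothesis is not reproduced.

    ```python
    import numpy as np

    def fit_log_spiral(r_kpc: np.ndarray, theta_rad: np.ndarray):
        """Fit ln r = ln r0 + theta * tan(p); p is the pitch angle."""
        A = np.column_stack([np.ones_like(theta_rad), theta_rad])
        (ln_r0, slope), *_ = np.linalg.lstsq(A, np.log(r_kpc), rcond=None)
        return np.exp(ln_r0), np.degrees(np.arctan(slope))

    rng = np.random.default_rng(6)
    theta = np.linspace(0.0, 1.2, 80)
    r = 8.0 * np.exp(theta * np.tan(np.radians(12))) * rng.lognormal(0, 0.02, 80)
    r0, pitch = fit_log_spiral(r, theta)
    print(f"r0 = {r0:.2f} kpc, pitch = {pitch:.1f} deg")   # near 8.00 kpc, 12.0 deg
    ```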

  18. Multi-scales region segmentation for ROI separation in digital mammograms

    NASA Astrophysics Data System (ADS)

    Zhang, Dapeng; Zhang, Di; Li, Yue; Wang, Wei

    2017-02-01

    Mammography is currently the most effective imaging modality used by radiologists for the screening of breast cancer. Segmentation is one of the key steps in the process of developing anatomical models for calculation of safe medical radiation doses. This paper explores the potential of the statistical region merging (SRM) segmentation technique for breast segmentation in digital mammograms. First, the mammograms are pre-processed for region enhancement; then the enhanced images are segmented using SRM at multiple scales; finally, these segmentations are combined for region of interest (ROI) separation and edge detection. The proposed algorithm uses multi-scale region segmentation to separate the breast region from the background, detect region edges, and separate ROIs. The experiments are performed using a data set of mammograms from different patients, demonstrating the validity of the proposed criterion. Results show that the SRM algorithm works well for medical image segmentation and is more accurate than other methods. The outcome shows that the technique has great potential to become a method of choice for the segmentation of mammograms.

  19. A statistical method for lung tumor segmentation uncertainty in PET images based on user inference.

    PubMed

    Zheng, Chaojie; Wang, Xiuying; Feng, Dagan

    2015-01-01

    PET has been widely accepted as an effective imaging modality for lung tumor diagnosis and treatment. However, standard criteria for delineating tumor boundaries from PET have yet to be developed, largely due to the relatively low quality of PET images, uncertain tumor boundary definition, and the variety of tumor characteristics. In this paper, we propose a statistical solution to segmentation uncertainty on the basis of user inference. We first define the uncertainty segmentation band on the basis of a segmentation probability map constructed with the Random Walks (RW) algorithm; then, based on the extracted features of the user inference, we use Principal Component Analysis (PCA) to formulate the statistical model for labeling the uncertainty band. We validated our method on 10 lung PET-CT phantom studies from the public RIDER collections [1] and 16 clinical PET studies where tumors were manually delineated by two experienced radiologists. The methods were validated using the Dice similarity coefficient (DSC) to measure the spatial volume overlap. Our method achieved an average DSC of 0.878 ± 0.078 on phantom studies and 0.835 ± 0.039 on clinical studies.

  20. Level set method with automatic selective local statistics for brain tumor segmentation in MR images.

    PubMed

    Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak

    2013-01-01

    The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumors in MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated and used to identify the tumor objects among the different objects. In level set methods, the calculation of the parameters is a challenging task; here, the parameters for different types of images are calculated automatically. The basic thresholding value is updated and adjusted automatically for different MR images and is used to calculate the different parameters of the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlighted the efficiency and robustness of this method.
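
    For context, the sketch below shows a generic SPF-driven level-set evolution with global region means, in the spirit of earlier SPF models; the paper's contribution, replacing the global statistics with automatically selected local statistics and auto-tuned thresholds, is not reproduced here.

    ```python
    import numpy as np

    def spf_level_set(img: np.ndarray, phi: np.ndarray, n_iter: int = 300, dt: float = 1.0):
        """Level-set evolution driven by a signed pressure function (SPF): the
        SPF is positive where intensities look object-like and negative elsewhere,
        inflating/deflating the contour until it stalls at weak or blurred edges."""
        for _ in range(n_iter):
            inside, outside = phi > 0, phi <= 0
            if not inside.any() or not outside.any():
                break
            c1, c2 = img[inside].mean(), img[outside].mean()   # global region means
            spf = img - (c1 + c2) / 2.0
            spf /= np.abs(spf).max() + 1e-12
            gy, gx = np.gradient(phi)
            phi = phi + dt * spf * np.sqrt(gx ** 2 + gy ** 2)
            phi = np.clip(phi, -3.0, 3.0)                      # crude regularization
        return phi > 0

    yy, xx = np.mgrid[0:64, 0:64]
    img = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)  # bright "tumor"
    phi0 = 5.0 - np.sqrt((xx - 30.0) ** 2 + (yy - 30.0) ** 2)        # initial circle
    mask = spf_level_set(img, phi0)
    ```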

  21. Flexible methods for segmentation evaluation: results from CT-based luggage screening.

    PubMed

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2014-01-01

    Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. Our objective was to develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors; the methods must also measure feature recovery and allow us to prioritize segments. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.

  22. A multi-object statistical atlas adaptive for deformable registration errors in anomalous medical image segmentation

    NASA Astrophysics Data System (ADS)

    Botter Martins, Samuel; Vallin Spina, Thiago; Yasuda, Clarissa; Falcão, Alexandre X.

    2017-02-01

    Statistical atlases have played an important role toward automated medical image segmentation. However, a challenge has been to make the atlas more adaptable to possible errors in deformable registration of anomalous images, given that the body structures of interest for segmentation might present significant differences in shape and texture. Recently, deformable registration errors have been accounted for by a method that locally translates the statistical atlas over the test image, after registration, and evaluates candidate objects from a delineation algorithm in order to choose the best one as the final segmentation. In this paper, we improve its delineation algorithm and extend the model to be a multi-object statistical atlas, built from control images and adaptable to anomalous images, by incorporating a texture classifier. In order to provide a first proof of concept, we instantiate the new method for segmenting, object-by-object and all objects simultaneously, the left and right brain hemispheres and the cerebellum, without the brainstem, and evaluate it on MR T1-images of epilepsy patients before and after brain surgery, which removed portions of the temporal lobe. The results show an efficiency gain and statistically significantly higher accuracy, measured by the mean average symmetric surface distance, with respect to the original approach.

  23. Semiautomatic tumor segmentation with multimodal images in a conditional random field framework.

    PubMed

    Hu, Yu-Chi; Grossberg, Michael; Mageras, Gikas

    2016-04-01

    Volumetric medical images of a single subject can be acquired using different imaging modalities, such as computed tomography, magnetic resonance imaging (MRI), and positron emission tomography. In this work, we present a semiautomatic segmentation algorithm that can leverage the synergies between different image modalities while integrating interactive human guidance. The algorithm provides a statistical segmentation framework partly automating the segmentation task while still maintaining critical human oversight. The statistical models presented are trained interactively using simple brush strokes to indicate tumor and nontumor tissues and using intermediate results within a patient's image study. To accomplish the segmentation, we construct the energy function in the conditional random field (CRF) framework. For each slice, the energy function is set using the estimated probabilities from both user brush stroke data and prior approved segmented slices within a patient study. The progressive segmentation is obtained using a graph-cut-based minimization. Although no similar semiautomated algorithm is currently available, we evaluated our method with an MRI data set from the Medical Image Computing and Computer Assisted Intervention Society multimodal brain segmentation challenge (BRATS 2012 and 2013) against a similar fully automatic method based on CRFs and a semiautomatic method based on grow-cut; our method shows superior performance.

  24. Paroxysmal atrial fibrillation prediction method with shorter HRV sequences.

    PubMed

    Boon, K H; Khalil-Hani, M; Malarvili, M B; Sia, C W

    2016-10-01

    This paper proposes a method that predicts the onset of paroxysmal atrial fibrillation (PAF) using heart rate variability (HRV) segments that are shorter than those applied in existing methods, while maintaining good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing the heart and preventing the onset of atrial arrhythmias with different pacing techniques. We investigate the effect of HRV features extracted from different lengths of HRV segments prior to PAF onset with the proposed PAF prediction method. The pre-processing stage of the predictor includes QRS detection, HRV quantification and ectopic beat correction. Time-domain, frequency-domain, non-linear and bispectrum features are then extracted from the quantified HRV. In the feature selection, the HRV feature set and classifier parameters are optimized simultaneously using an optimization procedure based on a genetic algorithm (GA). Both the full feature set and a statistically significant feature subset are optimized by the GA. For the statistically significant feature subset, the Mann-Whitney U test is used to filter out features that do not pass the statistical test at the 20% significance level. The final stage of the predictor is a classifier based on a support vector machine (SVM). A 10-fold cross-validation is applied in the performance evaluation, and the proposed method achieves 79.3% prediction accuracy using 15-minute HRV segments. This accuracy is comparable to that achieved by existing methods that use 30-minute HRV segments, most of which achieve accuracies of around 80%. More importantly, our method significantly outperforms those that applied segments shorter than 30 minutes.
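
    The statistical filtering step has a direct SciPy translation. A minimal sketch of screening HRV features with the Mann-Whitney U test at the 20% level, assuming feature matrices with one row per segment; the GA/SVM stages are not reproduced.

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu

    def significant_features(X_pre_paf: np.ndarray, X_normal: np.ndarray,
                             alpha: float = 0.20):
        """Indices of features whose pre-PAF and normal distributions differ
        at the given significance level (two-sided Mann-Whitney U test)."""
        keep = []
        for k in range(X_pre_paf.shape[1]):
            _, p = mannwhitneyu(X_pre_paf[:, k], X_normal[:, k],
                                alternative='two-sided')
            if p < alpha:
                keep.append(k)
        return keep

    rng = np.random.default_rng(7)
    X_pre = rng.normal(0.0, 1.0, (40, 12)); X_pre[:, [2, 7]] += 1.0
    X_nrm = rng.normal(0.0, 1.0, (40, 12))
    print(significant_features(X_pre, X_nrm))   # typically includes 2 and 7
    ```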

  25. Automated segmentation of ultrasonic breast lesions using statistical texture classification and active contour based on probability distance.

    PubMed

    Liu, Bo; Cheng, H D; Huang, Jianhua; Tian, Jiawei; Liu, Jiafeng; Tang, Xianglong

    2009-08-01

    Because of its complicated structure, low signal-to-noise ratio, low contrast and blurry boundaries, fully automated segmentation of a breast ultrasound (BUS) image is a difficult task. In this paper, a novel segmentation method for BUS images without human intervention is proposed. Unlike most published approaches, the proposed method handles the segmentation problem by using a two-step strategy: ROI generation and ROI segmentation. First, a well-trained texture classifier categorizes the tissues into different classes, and the background knowledge rules are used for selecting the regions of interest (ROIs) from them. Second, a novel probability distance-based active contour model is applied for segmenting the ROIs and finding the accurate positions of the breast tumors. The active contour model combines both global statistical information and local edge information, using a level set approach. The proposed segmentation method was performed on 103 BUS images (48 benign and 55 malignant). To validate the performance, the results were compared with the corresponding tumor regions marked by an experienced radiologist. Three error metrics, true-positive ratio (TP), false-negative ratio (FN) and false-positive ratio (FP), were used for measuring the performance of the proposed method. The final results (TP = 91.31%, FN = 8.69% and FP = 7.26%) demonstrate that the proposed method can segment BUS images efficiently, quickly and automatically.

  26. Multiresolution multiscale active mask segmentation of fluorescence microscope images

    NASA Astrophysics Data System (ADS)

    Srinivasa, Gowri; Fickus, Matthew; Kovačević, Jelena

    2009-08-01

    We propose an active mask segmentation framework that combines the advantages of statistical modeling, smoothing, speed and flexibility offered by the traditional methods of region-growing, multiscale, multiresolution and active contours respectively. At the crux of this framework is a paradigm shift from evolving contours in the continuous domain to evolving multiple masks in the discrete domain. Thus, the active mask framework is particularly suited to segment digital images. We demonstrate the use of the framework in practice through the segmentation of punctate patterns in fluorescence microscope images. Experiments reveal that statistical modeling helps the multiple masks converge from a random initial configuration to a meaningful one. This obviates the need for an involved initialization procedure germane to most of the traditional methods used to segment fluorescence microscope images. While we provide the mathematical details of the functions used to segment fluorescence microscope images, this is only an instantiation of the active mask framework. We suggest some other instantiations of the framework to segment different types of images.

  27. Flexible methods for segmentation evaluation: Results from CT-based luggage screening

    PubMed Central

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2017-01-01

    BACKGROUND Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms’ behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. OBJECTIVE To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. METHODS We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. RESULTS Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. CONCLUSIONS Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms. PMID:24699346

  28. A New Method for Automated Identification and Morphometry of Myelinated Fibers Through Light Microscopy Image Analysis.

    PubMed

    Novas, Romulo Bourget; Fazan, Valeria Paula Sassoli; Felipe, Joaquim Cezar

    2016-02-01

    Nerve morphometry is known to produce relevant information for the evaluation of several phenomena, such as nerve repair, regeneration, implant, transplant, aging, and different human neuropathies. Manual morphometry is laborious, tedious, time consuming, and subject to many sources of error. Therefore, in this paper, we propose a new method for the automated morphometry of myelinated fibers in cross-section light microscopy images. Images from the recurrent laryngeal nerve of adult rats and the vestibulocochlear nerve of adult guinea pigs were used herein. The proposed pipeline for fiber segmentation is based on the techniques of competitive clustering and concavity analysis. The proposed segmentation method was evaluated by comparing the automatic segmentation with manual segmentation. To further evaluate the proposed method with respect to morphometric features extracted from the segmented images, the distributions of these features were tested for statistically significant differences. The method achieved high overall sensitivity and very low false-positive rates per image. We detected no statistically significant difference between the distributions of the features extracted from the manual and the pipeline segmentations. The method presented good overall performance, showing potential in experimental and clinical settings by allowing large-scale image analysis and, thus, leading to more reliable results.

  29. Image segmentation by hierarchical agglomeration of polygons using ecological statistics

    DOEpatents

    Prasad, Lakshman; Swaminarayan, Sriram

    2013-04-23

    A method for rapid hierarchical image segmentation based on perceptually driven contour completion and scene statistics is disclosed. The method begins with an initial fine-scale segmentation of an image, such as obtained by perceptual completion of partial contours into polygonal regions using region-contour correspondences established by Delaunay triangulation of edge pixels as implemented in VISTA. The resulting polygons are analyzed with respect to their size and color/intensity distributions and the structural properties of their boundaries. Statistical estimates of granularity of size, similarity of color, texture, and saliency of intervening boundaries are computed and formulated into logical (Boolean) predicates. The combined satisfiability of these Boolean predicates by a pair of adjacent polygons at a given segmentation level qualifies them for merging into a larger polygon representing a coarser, larger-scale feature of the pixel image and collectively obtains the next level of polygonal segments in a hierarchy of fine-to-coarse segmentations. The iterative application of this process precipitates textured regions as polygons with highly convolved boundaries and helps distinguish them from objects which typically have more regular boundaries. The method yields a multiscale decomposition of an image into constituent features that enjoy a hierarchical relationship with features at finer and coarser scales. This provides a traversable graph structure from which feature content and context in terms of other features can be derived, aiding in automated image understanding tasks. The method disclosed is highly efficient and can be used to decompose and analyze large images.

  30. Robust tissue-air volume segmentation of MR images based on the statistics of phase and magnitude: Its applications in the display of susceptibility-weighted imaging of the brain.

    PubMed

    Du, Yiping P; Jin, Zhaoyang

    2009-10-01

    The aim was to develop a robust algorithm for tissue-air segmentation in magnetic resonance imaging (MRI) using the statistics of the phase and magnitude of the images. A multivariate measure based on the statistics of phase and magnitude was constructed for tissue-air volume segmentation. The standard deviation of the first-order phase difference and the standard deviation of the magnitude were calculated in a 3 × 3 × 3 kernel in the image domain. To improve differentiation accuracy, the uniformity of the phase distribution in the kernel was also calculated, and the linear background phase introduced by field inhomogeneity was corrected. The effectiveness of the proposed volume segmentation technique was compared to a conventional approach that uses the magnitude data alone. The proposed algorithm was shown to be more effective and robust in volume segmentation for both a synthetic phantom and susceptibility-weighted images of the human brain. Using the proposed volume segmentation method, veins in the peripheral regions of the brain were well depicted in the minimum-intensity projection of the susceptibility-weighted images. Using the additional statistics of phase, tissue-air volume segmentation can be substantially improved compared to using the statistics of magnitude data alone.
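
    The kernel statistics described above can be computed efficiently with separable uniform filters (std from E[x^2] - E[x]^2). A minimal sketch; the phase-difference handling, background-phase correction, and thresholds of the actual algorithm are simplified here to fixed, hypothetical cutoffs.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_std(vol: np.ndarray, size: int = 3) -> np.ndarray:
        """Standard deviation within a size^3 neighborhood via E[x^2] - E[x]^2."""
        v = vol.astype(float)
        mean = uniform_filter(v, size)
        mean_sq = uniform_filter(v ** 2, size)
        return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

    def tissue_air_mask(magnitude, phase, mag_thr=0.5, phase_thr=0.5):
        """Hypothetical combination: tissue voxels show locally consistent
        magnitude and a smoothly varying first-order phase difference, while
        air voxels have noisy magnitude and essentially random phase."""
        u = np.unwrap(phase, axis=0)
        dphase = np.diff(u, axis=0, prepend=u[:1])
        return (local_std(magnitude) < mag_thr) & (local_std(dphase) < phase_thr)
    ```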

  31. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging.

    PubMed

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
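
    A simplified, binary flavor of the STAPLE fusion used here can be written as a short EM loop that alternates between a soft consensus and per-method sensitivity/specificity estimates. This sketch assumes a fixed global prior and omits the spatial modeling and convergence checks of the published algorithm.

    ```python
    import numpy as np

    def staple(D: np.ndarray, n_iter: int = 50):
        """Simplified binary STAPLE. D is (n_voxels, n_raters) of 0/1 decisions;
        returns the consensus probability W and each rater's sensitivity p
        and specificity q, estimated by expectation-maximization."""
        n, r = D.shape
        p = np.full(r, 0.9)                   # initial sensitivities
        q = np.full(r, 0.9)                   # initial specificities
        prior = D.mean()                      # fixed prior P(voxel is object)
        for _ in range(n_iter):
            # E-step: posterior that the true label is 1 at each voxel.
            a = prior * np.prod(np.where(D == 1, p, 1 - p), axis=1)
            b = (1 - prior) * np.prod(np.where(D == 0, q, 1 - q), axis=1)
            W = a / (a + b + 1e-12)
            # M-step: update rater performance given the soft consensus.
            p = (W[:, None] * D).sum(axis=0) / (W.sum() + 1e-12)
            q = ((1 - W)[:, None] * (1 - D)).sum(axis=0) / ((1 - W).sum() + 1e-12)
        return W, p, q

    rng = np.random.default_rng(8)
    truth = (rng.random(5000) < 0.3).astype(int)
    raters = np.array([np.where(rng.random(5000) < acc, truth, 1 - truth)
                       for acc in (0.95, 0.9, 0.7)]).T
    W, p, q = staple(raters)
    fused = (W > 0.5).astype(int)             # consensus segmentation
    ```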

  32. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging

    PubMed Central

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert. PMID:26287691

  33. New auto-segment method of cerebral hemorrhage

    NASA Astrophysics Data System (ADS)

    Wang, Weijiang; Shen, Tingzhi; Dang, Hua

    2007-12-01

    A novel method for the automatic segmentation of cerebral hemorrhage (CH) in computerized tomography (CT) images is presented, which uses an expert system that models human knowledge about the CH segmentation problem. The algorithm adopts a series of special steps and extracts some easily overlooked CH features that can be found from statistics over a large set of real CH images, such as region area, region CT number, region smoothness, and statistical relationships between CH regions. A seven-step extraction mechanism ensures that these CH features are obtained correctly and efficiently. Using these CH features, a decision tree that models human knowledge about the CH segmentation problem has been built, which ensures the rationality and accuracy of the algorithm. Finally, experiments were conducted to verify the correctness and reasonableness of the automatic segmentation; the high accuracy and fast speed make it practical for wide application.

  34. Self-correcting multi-atlas segmentation

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Wilford, Andrew; Guo, Liang

    2016-03-01

    In multi-atlas segmentation, one typically registers several atlases to the new image, and their respective segmented label images are transformed and fused to form the final segmentation. After each registration, the quality of the registration is reflected by a single global value: the final registration cost. Ideally, if the quality of the registration could be evaluated at each point, independently of the registration process, and in a way that also provides a direction in which the deformation can be further improved, the overall segmentation performance could be improved. We propose such a self-correcting multi-atlas segmentation method. The method is applied to hippocampus segmentation from brain images, and a statistically significant improvement is observed.

  35. Fully automatic segmentation of the femur from 3D-CT images using primitive shape recognition and statistical shape models.

    PubMed

    Ben Younes, Lassad; Nakajima, Yoshikazu; Saito, Toki

    2014-03-01

    Femur segmentation is well established and widely used in computer-assisted orthopedic surgery. However, most of the robust segmentation methods, such as statistical shape models (SSMs), require human intervention to provide an initial position for the SSM. In this paper, we propose to overcome this problem and provide a fully automatic femur segmentation method for CT images based on primitive shape recognition and SSMs. Femur segmentation in CT scans was performed using primitive shape recognition based on robust algorithms such as the Hough transform and RANdom SAmple Consensus (RANSAC). The proposed method is divided into 3 steps: (1) detection of the femoral head as a sphere and the femoral shaft as a cylinder in the SSM and the CT images, (2) rigid registration between the primitives of the SSM and the CT image to initialize the SSM in the CT image, and (3) fitting of the SSM to the CT image edges using an affine transformation followed by a nonlinear fitting. The automated method provided good results even with a high number of outliers. The difference in segmentation error between the proposed automatic initialization method and a manual initialization method is less than 1 mm. The proposed method detects primitive shape positions to initialize the SSM in the target image. Based on primitive shapes, this method overcomes the problem of inter-patient variability. Moreover, the results demonstrate that our method of primitive shape recognition can be used for 3D SSM initialization to achieve fully automatic segmentation of the femur.
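
    Step (1), detecting the femoral head as a sphere, can be illustrated with a generic RANSAC sphere fit (the paper also uses the Hough transform, not shown here). A minimal sketch on synthetic surface points; the tolerance and iteration counts are illustrative assumptions.

    ```python
    import numpy as np

    def fit_sphere(pts: np.ndarray):
        """Least-squares sphere through >= 4 points, using the linearization
        |p|^2 = 2 p.c + (r^2 - |c|^2)."""
        A = np.column_stack([2 * pts, np.ones(len(pts))])
        b = (pts ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        c, k = sol[:3], sol[3]
        return c, np.sqrt(max(k + c @ c, 0.0))

    def ransac_sphere(pts, n_iter=500, tol=1.0, rng=None):
        """RANSAC: fit spheres to random 4-point samples and keep the one
        explaining the most surface points (e.g., femoral-head detection)."""
        if rng is None:
            rng = np.random.default_rng()
        best, best_inliers = None, 0
        for _ in range(n_iter):
            c, r = fit_sphere(pts[rng.choice(len(pts), 4, replace=False)])
            inliers = int((np.abs(np.linalg.norm(pts - c, axis=1) - r) < tol).sum())
            if inliers > best_inliers:
                best, best_inliers = (c, r), inliers
        return best

    rng = np.random.default_rng(9)
    u = rng.normal(size=(400, 3)); u /= np.linalg.norm(u, axis=1, keepdims=True)
    pts = 20 * u + np.array([10., 5., 0.]) + rng.normal(0, 0.2, (400, 3))
    print(ransac_sphere(pts, tol=0.8))   # near center (10, 5, 0), radius 20
    ```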

  36. Sparse intervertebral fence composition for 3D cervical vertebra segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yang, Jian; Song, Shuang; Cong, Weijian; Jiao, Peifeng; Song, Hong; Ai, Danni; Jiang, Yurong; Wang, Yongtian

    2018-06-01

    Statistical shape models are capable of extracting shape prior information and are usually utilized to assist the segmentation of medical images. However, such models require large training datasets in the case of multi-object structures, and it is also difficult to achieve satisfactory results for complex shapes. This study proposes a novel statistical model for cervical vertebra segmentation, called sparse intervertebral fence composition (SiFC), which can reconstruct the boundary between adjacent vertebrae by modeling intervertebral fences. The complex shape of the cervical spine is replaced by a simple intervertebral fence, which considerably reduces the difficulty of cervical segmentation. The final segmentation results are obtained by using a 3D active contour deformation model without shape constraint, which substantially enhances the recognition capability of the proposed method for objects with complex shapes. The proposed segmentation framework is tested on a dataset of CT images from 20 patients. A quantitative comparison against the corresponding reference vertebral segmentations yields an overall mean absolute surface distance of 0.70 mm and a Dice similarity index of 95.47% for cervical vertebral segmentation. The experimental results show that the SiFC method achieves competitive cervical vertebral segmentation performance and completely eliminates inter-process overlap.

  17. TU-AB-202-11: Tumor Segmentation by Fusion of Multi-Tracer PET Images Using Copula Based Statistical Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapuyade-Lahorgue, J; Ruan, S; Li, H

    Purpose: Multi-tracer PET imaging is receiving increased attention in radiotherapy because it provides additional tumor volume information, such as glucose metabolism and oxygenation. However, automatic PET-based tumor segmentation is still a very challenging problem. We propose a statistical fusion approach to jointly segment the sub-areas of tumors from the two tracers, FDG and FMISO PET images. Methods: Non-standardized Gamma distributions are convenient for modeling intensity distributions in PET. As a strong correlation exists between multi-tracer PET images, we propose a new fusion method based on copulas, which are capable of representing the dependency between different tracers. The Hidden Markov Field (HMF) model is used to represent the spatial relationship between PET image voxels and the statistical dynamics of intensities for each modality. Real PET images of five patients with FDG and FMISO are used to evaluate our method quantitatively and qualitatively. A comparison between individual and multi-tracer segmentations was conducted to show the advantages of the proposed fusion method. Results: The segmentation results show that fusion with a Gaussian copula achieves a high Dice coefficient of 0.84, compared with 0.54 and 0.3 for the monomodal segmentations based on the individual FDG and FMISO PET images, respectively. In addition, the high correlation coefficients (0.75 to 0.91) of the Gaussian copula for all five test patients indicate the dependency between tumor regions in the multi-tracer PET images. Conclusion: This study shows that multi-tracer PET imaging can efficiently improve the segmentation of tumor regions where hypoxia and glucose consumption are present at the same time. Introducing copulas to model the dependency between two tracers can simultaneously take into account information from both tracers and deal with the two pathological phenomena. Future work will consider other families of copulas, such as spherical and Archimedean copulas, and eliminate partial volume effects by considering dependency between neighboring voxels.
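
    As a concrete illustration of the modeling ingredient, the sketch below evaluates the joint log-density of two tracer intensities with Gamma marginals coupled by a Gaussian copula; the marginal parameters and the correlation value are hypothetical, and the HMF spatial model is omitted.

```python
import numpy as np
from scipy import stats

def gaussian_copula_logpdf(x, y, gx, gy, rho):
    """Joint log-density of (x, y) with Gamma marginals gx, gy,
    coupled by a Gaussian copula with correlation rho."""
    # Map through the marginal CDFs, then to standard normal scores.
    a = stats.norm.ppf(gx.cdf(x))
    b = stats.norm.ppf(gy.cdf(y))
    # Log-density of the bivariate Gaussian copula c(u, v).
    log_c = (-0.5 * np.log(1 - rho ** 2)
             - (rho ** 2 * (a ** 2 + b ** 2) - 2 * rho * a * b)
             / (2 * (1 - rho ** 2)))
    return log_c + gx.logpdf(x) + gy.logpdf(y)

fdg = stats.gamma(a=3.0, scale=1.5)    # hypothetical FDG intensity model
fmiso = stats.gamma(a=2.0, scale=1.0)  # hypothetical FMISO intensity model
print(gaussian_copula_logpdf(4.0, 2.0, fdg, fmiso, rho=0.8))
```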

  18. Real-Time Ultrasound Segmentation, Analysis and Visualisation of Deep Cervical Muscle Structure.

    PubMed

    Cunningham, Ryan J; Harding, Peter J; Loram, Ian D

    2017-02-01

    Despite the widespread availability of ultrasound and a need for personalised muscle diagnosis (neck/back pain-injury, work related disorder, myopathies, neuropathies), robust, online segmentation of muscles within complex groups remains unsolved by existing methods. For example, Cervical Dystonia (CD) is a prevalent neurological condition causing painful spasticity in one or multiple muscles in the cervical muscle system. Clinicians currently have no method for targeting/monitoring treatment of deep muscles. Automated methods of muscle segmentation would enable clinicians to study, target, and monitor the deep cervical muscles via ultrasound. We have developed a method for segmenting five bilateral cervical muscles and the spine via ultrasound alone, in real-time. Magnetic Resonance Imaging (MRI) and ultrasound data were collected from 22 participants (age: 29.0±6.6, male: 12). To acquire ultrasound muscle segment labels, a novel multimodal registration method was developed, involving MRI image annotation and shape registration to MRI-matched ultrasound images via approximation of the tissue deformation. We then applied polynomial regression to transform our annotations and textures into a mean space, before using shape statistics to generate a texture-to-shape dictionary. For segmentation, test images were compared to dictionary textures giving an initial segmentation, and then we used a customized Active Shape Model to refine the fit. Using ultrasound alone, on unseen participants, our technique currently segments a single image in [Formula: see text] to over 86% accuracy (Jaccard index). We propose that this approach is generally applicable for segmenting, extrapolating, and visualising deep muscle structure, and for analysing statistical features online.

  19. University and Student Segmentation: Multilevel Latent-Class Analysis of Students' Attitudes towards Research Methods and Statistics

    ERIC Educational Resources Information Center

    Mutz, Rudiger; Daniel, Hans-Dieter

    2013-01-01

    Background: It is often claimed that psychology students' attitudes towards research methods and statistics affect course enrolment, persistence, achievement, and course climate. However, the inter-institutional variability has been widely neglected in the research on students' attitudes towards research methods and statistics, but it is important…

  20. Color edges extraction using statistical features and automatic threshold technique: application to the breast cancer cells.

    PubMed

    Ben Chaabane, Salim; Fnaiech, Farhat

    2014-01-23

    Color image segmentation has so far been applied in many areas; hence, many different techniques have recently been developed and proposed. In the medical imaging area, image segmentation may help provide assistance to doctors in following up the disease of a given patient from processed breast cancer images. The main objective of this work is to rebuild and enhance each cell from the three component images provided by an input image. Indeed, starting from an initial segmentation obtained using statistical features and histogram thresholding techniques, the resulting segmentation may accurately represent incomplete and touching cells and enhance them. This offers real help to doctors, as the cells become clear and easy to count. A novel method for color edge extraction based on statistical features and automatic thresholding is presented. The traditional edge detector, based on the first- and second-order neighborhoods describing the relationship between the current pixel and its neighbors, is extended to the statistical domain. Hence, color edges in an image are obtained by combining the statistical features with automatic thresholding techniques. Finally, on the obtained color edges of each primitive color, a combination rule is used to integrate the edge results over the three color components. Breast cancer cell images were used to evaluate the performance of the proposed method both quantitatively and qualitatively. Hence, a visual and a numerical assessment based on the probability of correct classification (PC), the probability of false classification (Pf), and the classification accuracy (Sens(%)) are presented and compared with existing techniques. The proposed method shows its superiority in detecting points that really belong to the cells and facilitates counting the number of processed cells. Computer simulations highlight that the proposed method substantially enhances the segmented image, with smaller error rates than other existing algorithms under the same settings (patterns and parameters). Moreover, it provides high classification accuracy, reaching 97.94%. The segmentation method may also be extended to other medical imaging types with similar properties.

  1. Semiautomated segmentation of head and neck cancers in 18F-FDG PET scans: A just-enough-interaction approach

    PubMed Central

    Beichel, Reinhard R.; Van Tol, Markus; Ulrich, Ethan J.; Bauer, Christian; Chang, Tangel; Plichta, Kristin A.; Smith, Brian J.; Sunderland, John J.; Graham, Michael M.; Sonka, Milan; Buatti, John M.

    2016-01-01

    Purpose: The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. Methods: A semiautomated segmentation method was developed, which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure around a user-provided approximate lesion centerpoint is constructed and a suitable cost function is derived based on local image statistics. To handle frequently occurring situations that are ambiguous (e.g., lesions adjacent to each other versus a lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations that are based on the “just-enough-interaction” principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Results: Segmentation accuracy measured by the Dice coefficient of the proposed semiautomated and standard manual segmentation approach was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement when compared to the manual segmentation approach. Conclusions: Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in clinical oncology decision-making. The properties of the authors' approach make it well suited for applications in image-guided radiation oncology, response assessment, or treatment outcome prediction. PMID:27277044

  2. Semiautomated segmentation of head and neck cancers in 18F-FDG PET scans: A just-enough-interaction approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beichel, Reinhard R., E-mail: reinhard-beichel@uiowa.edu; Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242; Department of Internal Medicine, University of Iowa, Iowa City, Iowa 52242

    Purpose: The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. Methods: A semiautomated segmentation method was developed, which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure around a user-provided approximate lesion centerpoint is constructed and a suitable cost function is derived based on local image statistics. To handle frequently occurring situations that are ambiguous (e.g., lesions adjacent to each other versus a lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations that are based on the “just-enough-interaction” principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Results: Segmentation accuracy measured by the Dice coefficient of the proposed semiautomated and standard manual segmentation approach was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement when compared to the manual segmentation approach. Conclusions: Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in clinical oncology decision-making. The properties of the authors' approach make it well suited for applications in image-guided radiation oncology, response assessment, or treatment outcome prediction.

  3. Primal/dual linear programming and statistical atlases for cartilage segmentation.

    PubMed

    Glocker, Ben; Komodakis, Nikos; Paragios, Nikos; Glaser, Christian; Tziritas, Georgios; Navab, Nassir

    2007-01-01

    In this paper we propose a novel approach for automatic segmentation of cartilage using a statistical atlas and efficient primal/dual linear programming. To this end, a novel statistical atlas construction is considered from registered training examples. Segmentation is then solved through registration which aims at deforming the atlas such that the conditional posterior of the learned (atlas) density is maximized with respect to the image. Such a task is reformulated using a discrete set of deformations and segmentation becomes equivalent to finding the set of local deformations which optimally match the model to the image. We evaluate our method on 56 MRI data sets (28 used for the model and 28 used for evaluation) and obtain a fully automatic segmentation of patella cartilage volume with an overlap ratio of 0.84 with a sensitivity and specificity of 94.06% and 99.92%, respectively.

  4. A novel measure and significance testing in data analysis of cell image segmentation.

    PubMed

    Wu, Jin Chu; Halter, Michael; Kacker, Raghu N; Elliott, John T; Plant, Anne L

    2017-03-14

    Cell image segmentation (CIS) is an essential part of quantitative imaging of biological cells. Designing a performance measure and conducting significance testing are critical for evaluating and comparing CIS algorithms for image-based cell assays in cytometry. Many measures and methods have been proposed and implemented to evaluate segmentation methods; however, computing the standard errors (SE) of the measures and their correlation coefficient has not been described, and thus the statistical significance of performance differences between CIS algorithms cannot be assessed. We propose the total error rate (TER), a novel performance measure for segmenting all cells in a supervised evaluation. The TER statistically aggregates all misclassification error rates (MER), one per segmented cell in the population, taking cell sizes as weights. The TER is fully supported by pairwise comparisons of MERs using 106 manually segmented ground-truth cells of different sizes and seven CIS algorithms taken from ImageJ. Further, the SE and 95% confidence interval (CI) of the TER are computed based on the SE of the MER, which is calculated using the bootstrap method. An algorithm for computing the correlation coefficient of TERs between two CIS algorithms is also provided. Hence, the 95% CI error bars can be used to classify CIS algorithms, and when the CIs overlap, the SEs of the TERs and their correlation coefficient can be employed to conduct hypothesis testing to determine the statistical significance of the performance differences between CIS algorithms. A novel measure, the TER of CIS, is proposed, and its SEs and correlation coefficient are computed. Thereafter, CIS algorithms can be evaluated and compared statistically by conducting significance testing.
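
    A minimal sketch of the size-weighted aggregation and bootstrap standard error described above; the per-cell MERs and cell areas are made-up numbers, while the weighting-by-cell-size rule follows the abstract.

```python
import numpy as np

def total_error_rate(mers, sizes):
    """TER: cell-size-weighted aggregate of per-cell misclassification
    error rates (MER)."""
    w = np.asarray(sizes) / np.sum(sizes)
    return float(np.sum(w * np.asarray(mers)))

def bootstrap_se(mers, sizes, n_boot=2000, seed=0):
    """Bootstrap SE of the TER by resampling cells with replacement."""
    rng = np.random.default_rng(seed)
    mers, sizes = np.asarray(mers), np.asarray(sizes)
    samples = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(mers), len(mers))   # resample cells
        samples.append(total_error_rate(mers[idx], sizes[idx]))
    return float(np.std(samples, ddof=1))

mers = [0.05, 0.12, 0.02, 0.30, 0.08]    # hypothetical per-cell MERs
sizes = [900, 450, 1200, 150, 700]       # cell areas in pixels
ter, se = total_error_rate(mers, sizes), bootstrap_se(mers, sizes)
print(f"TER = {ter:.3f} +/- {1.96 * se:.3f} (95% CI half-width)")
```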

  5. Evaluation of a segment-based LANDSAT full-frame approach to crop area estimation

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Davis, S. M.

    1981-01-01

    As the registration of LANDSAT full frames enters the realm of current technology, sampling methods should be examined which utilize data other than the segment data used for LACIE. The effect of separating the functions of sampling for training and sampling for area estimation was examined. The frame selected for analysis was acquired over north central Iowa on August 9, 1978. A stratification of the full frame was defined. Training data came from segments within the frame. Two classification and estimation procedures were compared: statistics developed on one segment were used to classify that segment, and pooled statistics from the segments were used to classify a systematic sample of pixels. Comparisons to USDA/ESCS estimates illustrate that the full-frame sampling approach can provide accurate and precise area estimates.

  6. Robust nuclei segmentation in cyto-histopathological images using statistical level set approach with topology preserving constraint

    NASA Astrophysics Data System (ADS)

    Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.

    2017-02-01

    Computerized assessments for diagnosis or malignancy grading of cyto-histopathological specimens have drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation is still a challenging task due to noise, nonuniform illumination, and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching aggregated nuclei or merging over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process itself. The proposed framework is a region-based segmentation method consisting of three major modules: i) the image is passed through a color deconvolution step to extract the desired stains; ii) the generalized fast radial symmetry (GFRS) transform is applied to the image, followed by non-maxima suppression, to specify the initial seed points for nuclei and their corresponding GFRS ellipses, which are interpreted as the initial nuclei borders for segmentation; iii) finally, these initial border curves are evolved using a statistical level-set approach with topology-preserving criteria, performing segmentation and separation of nuclei at the same time. The proposed method is evaluated using Hematoxylin and Eosin and fluorescent stained images, with qualitative and quantitative analysis showing that the method outperforms thresholding and watershed segmentation approaches.

  7. An adaptive multi-feature segmentation model for infrared image

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa

    2016-04-01

    Active contour models (ACMs) have been extensively applied to image segmentation; however, conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional that drives the contour evolution. Considering the limitations of the original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, we use an adaptive weight coefficient to modify the level set formulation, which is formed by integrating the MFSPF based on local statistical features with a signed pressure function based on global information. Experimental results demonstrate that the proposed method makes up for the inadequacy of the original methods and obtains desirable results in segmenting infrared images.

  8. Semiautomated segmentation of head and neck cancers in 18F-FDG PET scans: A just-enough-interaction approach.

    PubMed

    Beichel, Reinhard R; Van Tol, Markus; Ulrich, Ethan J; Bauer, Christian; Chang, Tangel; Plichta, Kristin A; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M

    2016-06-01

    The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. A semiautomated segmentation method was developed, which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure around a user-provided approximate lesion centerpoint is constructed and a suitable cost function is derived based on local image statistics. To handle frequently occurring situations that are ambiguous (e.g., lesions adjacent to each other versus a lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations that are based on the "just-enough-interaction" principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Segmentation accuracy measured by the Dice coefficient of the proposed semiautomated and standard manual segmentation approach was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement when compared to the manual segmentation approach. Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in clinical oncology decision-making. The properties of the authors' approach make it well suited for applications in image-guided radiation oncology, response assessment, or treatment outcome prediction.

  9. Supervised variational model with statistical inference and its application in medical image segmentation.

    PubMed

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
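
    To make the region density term concrete, the sketch below fits a small Gaussian mixture to foreground and background seed intensities and compares per-pixel log-likelihoods; the simulated seeds and the hard argmax decision are stand-ins for the paper's variational level-set formulation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit one multi-component GMM per region from labelled seed pixels,
# then score every pixel under each model (hypothetical data).
rng = np.random.default_rng(0)
fg_seeds = np.concatenate([rng.normal(180, 10, 300), rng.normal(140, 8, 200)])
bg_seeds = np.concatenate([rng.normal(60, 15, 400), rng.normal(100, 5, 100)])

fg_model = GaussianMixture(n_components=2).fit(fg_seeds.reshape(-1, 1))
bg_model = GaussianMixture(n_components=2).fit(bg_seeds.reshape(-1, 1))

pixels = rng.uniform(0, 255, (64, 64))
ll_fg = fg_model.score_samples(pixels.reshape(-1, 1)).reshape(64, 64)
ll_bg = bg_model.score_samples(pixels.reshape(-1, 1)).reshape(64, 64)
# Crude per-pixel decision; the paper embeds such region terms in a
# level-set energy with spatial constraints instead.
labels = ll_fg > ll_bg
```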

  10. MIMoSA: An Automated Method for Intermodal Segmentation Analysis of Multiple Sclerosis Brain Lesions.

    PubMed

    Valcarcel, Alessandra M; Linn, Kristin A; Vandekar, Simon N; Satterthwaite, Theodore D; Muschelli, John; Calabresi, Peter A; Pham, Dzung L; Martin, Melissa Lynne; Shinohara, Russell T

    2018-03-08

    Magnetic resonance imaging (MRI) is crucial for in vivo detection and characterization of white matter lesions (WMLs) in multiple sclerosis. While WMLs have been studied for over two decades using MRI, automated segmentation remains challenging. Although the majority of statistical techniques for the automated segmentation of WMLs are based on single imaging modalities, recent advances have used multimodal techniques for identifying WMLs. Complementary modalities emphasize different tissue properties, which help identify interrelated features of lesions. We propose the Method for Inter-Modal Segmentation Analysis (MIMoSA), a fully automatic lesion segmentation algorithm that utilizes novel covariance features from intermodal coupling regression, in addition to mean structure, to model the probability that a lesion is contained in each voxel. MIMoSA was validated by comparison with both expert manual and other automated segmentation methods in two datasets. The first included 98 subjects imaged at Johns Hopkins Hospital, in which bootstrap cross-validation was used to compare the performance of MIMoSA against OASIS and LesionTOADS, two popular automatic segmentation approaches. For a secondary validation, publicly available data from a segmentation challenge were used for performance benchmarking. In the Johns Hopkins study, MIMoSA yielded an average Sørensen-Dice coefficient (DSC) of .57 and a partial AUC of .68 calculated with false positive rates up to 1%. This was superior to the performance of OASIS and LesionTOADS. The proposed method also performed competitively in the segmentation challenge dataset. MIMoSA resulted in statistically significant improvements in lesion segmentation performance compared with LesionTOADS and OASIS, and performed competitively in an additional validation study. Copyright © 2018 by the American Society of Neuroimaging.

  11. Comparative performance evaluation of automated segmentation methods of hippocampus from magnetic resonance images of temporal lobe epilepsy patients

    PubMed Central

    Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad-Reza; Pompili, Dario; Jafari-Khouzani, Kourosh; Elisevich, Kost; Soltanian-Zadeh, Hamid

    2016-01-01

    Purpose: Segmentation of the hippocampus from magnetic resonance (MR) images is a key task in the evaluation of mesial temporal lobe epilepsy (mTLE) patients. Several automated algorithms have been proposed although manual segmentation remains the benchmark. Choosing a reliable algorithm is problematic since structural definition pertaining to multiple edges, missing and fuzzy boundaries, and shape changes varies among mTLE subjects. Lack of statistical references and guidance for quantifying the reliability and reproducibility of automated techniques has further detracted from automated approaches. The purpose of this study was to develop a systematic and statistical approach using a large dataset for the evaluation of automated methods and establish a method that would achieve results better approximating those attained by manual tracing in the epileptogenic hippocampus. Methods: A template database of 195 (81 males, 114 females; age range 32–67 yr, mean 49.16 yr) MR images of mTLE patients was used in this study. Hippocampal segmentation was accomplished manually and by two well-known tools (FreeSurfer and hammer) and two previously published methods developed at their institution [Automatic brain structure segmentation (ABSS) and LocalInfo]. To establish which method was better performing for mTLE cases, several voxel-based, distance-based, and volume-based performance metrics were considered. Statistical validations of the results using automated techniques were compared with the results of benchmark manual segmentation. Extracted metrics were analyzed to find the method that provided a more similar result relative to the benchmark. Results: Among the four automated methods, ABSS generated the most accurate results. For this method, the Dice coefficient was 5.13%, 14.10%, and 16.67% higher, Hausdorff was 22.65%, 86.73%, and 69.58% lower, precision was 4.94%, −4.94%, and 12.35% higher, and the root mean square (RMS) was 19.05%, 61.90%, and 65.08% lower than LocalInfo, FreeSurfer, and hammer, respectively. The Bland–Altman similarity analysis revealed a low bias for the ABSS and LocalInfo techniques compared to the others. Conclusions: The ABSS method for automated hippocampal segmentation outperformed other methods, best approximating what could be achieved by manual tracing. This study also shows that four categories of input data can cause automated segmentation methods to fail. They include incomplete studies, artifact, low signal-to-noise ratio, and inhomogeneity. Different scanner platforms and pulse sequences were considered as means by which to improve reliability of the automated methods. Other modifications were specially devised to enhance a particular method assessed in this study. PMID:26745947
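
    The headline metrics used in such comparisons can be computed as in the generic sketch below (toy binary masks, SciPy's directed Hausdorff distance); this is not the study's evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between mask boundaries (in voxels)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

auto = np.zeros((32, 32), bool); auto[8:20, 8:20] = True
manual = np.zeros((32, 32), bool); manual[10:22, 9:21] = True
print(f"Dice = {dice(auto, manual):.3f}, Hausdorff = {hausdorff(auto, manual):.2f}")
```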

  12. Comparative performance evaluation of automated segmentation methods of hippocampus from magnetic resonance images of temporal lobe epilepsy patients.

    PubMed

    Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad-Reza; Pompili, Dario; Jafari-Khouzani, Kourosh; Elisevich, Kost; Soltanian-Zadeh, Hamid

    2016-01-01

    Segmentation of the hippocampus from magnetic resonance (MR) images is a key task in the evaluation of mesial temporal lobe epilepsy (mTLE) patients. Several automated algorithms have been proposed although manual segmentation remains the benchmark. Choosing a reliable algorithm is problematic since structural definition pertaining to multiple edges, missing and fuzzy boundaries, and shape changes varies among mTLE subjects. Lack of statistical references and guidance for quantifying the reliability and reproducibility of automated techniques has further detracted from automated approaches. The purpose of this study was to develop a systematic and statistical approach using a large dataset for the evaluation of automated methods and establish a method that would achieve results better approximating those attained by manual tracing in the epileptogenic hippocampus. A template database of 195 (81 males, 114 females; age range 32-67 yr, mean 49.16 yr) MR images of mTLE patients was used in this study. Hippocampal segmentation was accomplished manually and by two well-known tools (FreeSurfer and hammer) and two previously published methods developed at their institution [Automatic brain structure segmentation (ABSS) and LocalInfo]. To establish which method was better performing for mTLE cases, several voxel-based, distance-based, and volume-based performance metrics were considered. Statistical validations of the results using automated techniques were compared with the results of benchmark manual segmentation. Extracted metrics were analyzed to find the method that provided a more similar result relative to the benchmark. Among the four automated methods, ABSS generated the most accurate results. For this method, the Dice coefficient was 5.13%, 14.10%, and 16.67% higher, Hausdorff was 22.65%, 86.73%, and 69.58% lower, precision was 4.94%, -4.94%, and 12.35% higher, and the root mean square (RMS) was 19.05%, 61.90%, and 65.08% lower than LocalInfo, FreeSurfer, and hammer, respectively. The Bland-Altman similarity analysis revealed a low bias for the ABSS and LocalInfo techniques compared to the others. The ABSS method for automated hippocampal segmentation outperformed other methods, best approximating what could be achieved by manual tracing. This study also shows that four categories of input data can cause automated segmentation methods to fail. They include incomplete studies, artifact, low signal-to-noise ratio, and inhomogeneity. Different scanner platforms and pulse sequences were considered as means by which to improve reliability of the automated methods. Other modifications were specially devised to enhance a particular method assessed in this study.

  13. The discrimination of sea ice types using SAR backscatter statistics

    NASA Technical Reports Server (NTRS)

    Shuchman, Robert A.; Wackerman, Christopher C.; Maffett, Andrew L.; Onstott, Robert G.; Sutherland, Laura L.

    1989-01-01

    X-band (HH) synthetic aperture radar (SAR) data of sea ice collected during the Marginal Ice Zone Experiment in March and April of 1987 were statistically analyzed with respect to discriminating open water, first-year ice, multiyear ice, and Odden. Odden are large expanses of nilas ice that rapidly form in the Greenland Sea and transform into pancake ice. A first-order statistical analysis indicated that mean versus variance can segment out the open water and first-year ice categories, and skewness versus modified skewness can segment the Odden and multiyear categories. In addition to first-order statistics, a model has been generated for the distribution function of the SAR ice data. Segmentation of ice types was also attempted using textural measurements; in this case, the general co-occurrence matrix was evaluated. The textural method did not generate better results than the first-order statistical approach.
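
    A small sketch of the first-order feature extraction such decision rules operate on, computing per-segment mean, variance, and skewness; the Rayleigh-distributed patches are simulated stand-ins for SAR backscatter amplitudes.

```python
import numpy as np
from scipy.stats import skew

def first_order_stats(patch):
    """Mean, variance, and skewness of amplitudes in one image segment,
    the kind of first-order features used by ice-type rules."""
    x = np.ravel(patch)
    return {"mean": x.mean(), "var": x.var(ddof=1), "skew": skew(x)}

rng = np.random.default_rng(2)
open_water = rng.rayleigh(0.4, (64, 64))   # hypothetical amplitude scales
multiyear = rng.rayleigh(1.2, (64, 64))
for name, patch in [("open water", open_water), ("multiyear", multiyear)]:
    s = first_order_stats(patch)
    print(name, {k: round(v, 3) for k, v in s.items()})
```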

  14. Automatic liver segmentation in computed tomography using general-purpose shape modeling methods.

    PubMed

    Spinczyk, Dominik; Krasoń, Agata

    2018-05-29

    Liver segmentation in computed tomography is required in many clinical applications. The segmentation methods used can be classified according to a number of criteria; one important criterion for method selection is the shape representation of the segmented organ. The aim of this work is automatic liver segmentation using general-purpose shape modeling methods. As part of the research, methods based on shape information at various levels of sophistication were used. The single-atlas segmentation method was used as the simplest shape-based method; it derives the segmentation from a single atlas using deformable free-form deformation of control-point curves. Subsequently, the classic and a modified Active Shape Model (ASM) were used, based on mean shape models. As the most advanced and principal method, generalized statistical shape models (Gaussian Process Morphable Models) were used, which are based on multi-dimensional Gaussian distributions of the shape deformation field. Mutual information and the sum of squared distances were used as similarity measures. The poorest results were obtained for the single-atlas method. For the ASM method, in the 10 analyzed cases the Dice coefficient was above 55% for seven test images, and for three of them the coefficient was over 70%, which placed the method in second place. The best results were obtained for the method based on the generalized statistical distribution of the deformation field, for which the Dice coefficient was 88.5%. This value of the Dice coefficient can be explained by the use of general-purpose shape modeling methods with a large variance in the shape of the modeled object (the liver) and by the limited size of our training data set, which comprised 10 cases. The results obtained with the presented fully automatic method are comparable with those of dedicated liver segmentation methods. In addition, the deformation features of the model can be modeled mathematically by using various kernel functions, which allows the liver to be segmented at a comparable level using a smaller training set.
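
    The sketch below draws a smooth deformation field from a zero-mean Gaussian process with a squared-exponential kernel, the kind of shape-deformation prior that Gaussian Process Morphable Models build on; the kernel parameters and the toy surface points are assumptions.

```python
import numpy as np

def sample_deformation(points, scale=10.0, sigma2=4.0, seed=0):
    """Draw one smooth deformation field from a zero-mean Gaussian
    process with a squared-exponential kernel over surface points."""
    rng = np.random.default_rng(seed)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    K = sigma2 * np.exp(-d2 / (2 * scale ** 2))
    # Small jitter keeps the Cholesky factorization stable.
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(points)))
    # One independent GP sample per coordinate axis.
    return L @ rng.standard_normal((len(points), points.shape[1]))

surface = np.random.default_rng(3).uniform(0, 50, (200, 3))  # toy surface
deformed = surface + sample_deformation(surface)
```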

  15. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-06

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
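
    Below is a loose sketch in the spirit of SFT under stated assumptions (fixed tile size, a median-based background rule, a k-sigma threshold); the published method fits trends between segment statistics rather than using the simple median split shown here.

```python
import numpy as np

def segment_and_threshold(img, seg=16, k=3.0):
    """Tile the image, take per-tile statistics, pick low-signal tiles
    as background, and derive a global threshold from them."""
    h, w = (s - s % seg for s in img.shape)
    tiles = img[:h, :w].reshape(h // seg, seg, w // seg, seg).swapaxes(1, 2)
    means = tiles.mean(axis=(2, 3)).ravel()
    stds = tiles.std(axis=(2, 3)).ravel()
    # Background tiles: low mean and low spread relative to the median tile.
    bg = (means < np.median(means)) & (stds < np.median(stds))
    thresh = means[bg].mean() + k * stds[bg].mean()
    return img > thresh

rng = np.random.default_rng(4)
img = rng.normal(10, 2, (128, 128))
img[40:60, 40:60] += 30            # synthetic bright signal spot
mask = segment_and_threshold(img)
```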

  16. Segment and Fit Thresholding: A New Method for Image Analysis Applied to Microarray and Immunofluorescence Data

    PubMed Central

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.

    2016-01-01

    Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978

  17. Degraded Chinese rubbing images thresholding based on local first-order statistics

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Hou, Ling-Ying; Huang, Han

    2017-06-01

    Segmenting Chinese characters from degraded document images is a necessary step for Optical Character Recognition (OCR); however, it is challenging due to the various kinds of noise in such images. In this paper, we present three local first-order statistics methods for adaptive thresholding to segment the text and non-text regions of Chinese rubbing images. The segmentation results were evaluated both by visual inspection and numerically. In experiments, the methods obtained better results than classical techniques in the binarization of real Chinese rubbing images and the PHIBD 2012 dataset.
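
    The abstract does not fully specify the three variants, so the sketch below shows a generic local first-order-statistics threshold in the Niblack style, where the per-pixel threshold is the local mean plus k times the local standard deviation; the window size and k are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats_threshold(img, win=25, k=0.2):
    """Adaptive threshold from local first-order statistics:
    T(x, y) = local mean + k * local standard deviation."""
    img = img.astype(float)
    mean = uniform_filter(img, win)
    sq_mean = uniform_filter(img * img, win)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0))
    # Assumes bright characters on a dark rubbing background; invert
    # the comparison for dark-on-light text.
    return img > mean + k * std

rng = np.random.default_rng(5)
page = rng.normal(50, 10, (200, 200))
page[80:120, 60:140] += 90         # synthetic bright stroke region
mask = local_stats_threshold(page)
```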

  18. Mammographic enhancement with combining local statistical measures and sliding band filter for improved mass segmentation in mammograms

    NASA Astrophysics Data System (ADS)

    Kim, Dae Hoe; Choi, Jae Young; Choi, Seon Hyeong; Ro, Yong Man

    2012-03-01

    In this study, a novel mammogram enhancement solution is proposed, aiming to improve the quality of subsequent mass segmentation in mammograms. It is widely accepted that masses are usually hyper-dense or of uniform density with respect to their background; also, their core parts are likely to have high intensity values, while intensity tends to decrease as the distance from the core increases. Based on these observations, we develop a new and effective mammogram enhancement method combining local statistical measurements and Sliding Band Filtering (SBF). By effectively combining local statistical measurements and SBF, we are able to improve the contrast of bright and smooth regions (which represent potential mass regions) as well as regions whose surrounding gradients converge to the centers of the regions of interest. In this study, 89 mammograms were collected from the public MAIS database (DB) to demonstrate the effectiveness of the proposed enhancement solution in terms of improving mass segmentation. As the segmentation method, a widely used contour-based segmentation approach was employed. The contour-based method in conjunction with the proposed enhancement solution achieved an overall detection accuracy of 92.4%, with a total of 85 correct cases. On the other hand, without our enhancement solution, the overall detection accuracy of the contour-based method was only 78.3%. In addition, experimental results demonstrated the feasibility of our enhancement solution for improving detection accuracy on mammograms containing dense parenchymal patterns.

  19. Reproducible segmentation of white matter hyperintensities using a new statistical definition.

    PubMed

    Damangir, Soheil; Westman, Eric; Simmons, Andrew; Vrenken, Hugo; Wahlund, Lars-Olof; Spulber, Gabriela

    2017-06-01

    We present a method based on a proposed statistical definition of white matter hyperintensities (WMH), which can work with any combination of conventional magnetic resonance (MR) sequences without depending on manually delineated samples. T1-weighted, T2-weighted, FLAIR, and PD sequences acquired at 1.5 Tesla from 119 subjects from the Kings Health Partners-Dementia Case Register (healthy controls, mild cognitive impairment, Alzheimer's disease) were used. The segmentation was performed using a proposed definition for WMH based on the one-tailed Kolmogorov-Smirnov test. The presented method was verified, given all possible combinations of input sequences, against manual segmentations and a high similarity (Dice 0.85-0.91) was observed. Comparing segmentations with different input sequences to one another also yielded a high similarity (Dice 0.83-0.94) that exceeded intra-rater similarity (Dice 0.75-0.91). We compared the results with those of other available methods and showed that the segmentation based on the proposed definition has better accuracy and reproducibility in the test dataset used. Overall, the presented definition is shown to produce accurate results with higher reproducibility than manual delineation. This approach can be an alternative to other manual or automatic methods not only because of its accuracy, but also due to its good reproducibility.
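
    A minimal sketch in the spirit of a statistical WMH definition: test whether a candidate bright region is consistent with the normal-appearing white matter (NAWM) intensity distribution using a one-tailed Kolmogorov-Smirnov test. The intensities are simulated, and the paper's exact formulation may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
nawm = rng.normal(100, 8, 5000)        # NAWM FLAIR intensities (simulated)
candidate = rng.normal(125, 10, 200)   # hyperintense candidate region

ref = stats.norm(nawm.mean(), nawm.std())   # parametric NAWM model
# alternative='less': the empirical CDF lies below the NAWM CDF,
# i.e. the candidate voxels are stochastically brighter than NAWM.
d, p = stats.kstest(candidate, ref.cdf, alternative='less')
print(f"KS statistic = {d:.3f}, p = {p:.2e}; flag as WMH if p is small")
```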

  20. University and student segmentation: multilevel latent-class analysis of students' attitudes towards research methods and statistics.

    PubMed

    Mutz, Rüdiger; Daniel, Hans-Dieter

    2013-06-01

    It is often claimed that psychology students' attitudes towards research methods and statistics affect course enrollment, persistence, achievement, and course climate. However, inter-institutional variability has been widely neglected in the research on students' attitudes towards research methods and statistics, although it is important for didactic purposes (heterogeneity of the student population). The paper presents a scale based on findings of the social psychology of attitudes (a polar and emotion-based concept), in conjunction with a method for capturing beginning university students' attitudes towards research methods and statistics and identifying the proportion of students having positive attitudes at the institutional level. The study was based on a re-analysis of a nationwide survey in Germany in August 2000 of all psychology students who enrolled in fall 1999/2000 (N = 1,490) at N = 44 universities. Using multilevel latent-class analysis (MLLCA), the aim was to group students into different student attitude types and, at the same time, to obtain university segments based on the incidences of the different student attitude types. Four student latent clusters were found that can be ranked on a bipolar attitude dimension. Membership in a cluster was predicted by age, grade point average (GPA) on the school-leaving exam, and personality traits. In addition, two university segments were found: universities with an average proportion of students with positive attitudes and universities with a high proportion of students with positive attitudes (excellent segment). As psychology students make up a very heterogeneous group, the use of multiple learning activities, as opposed to the classical lecture course, is required. © 2011 The British Psychological Society.

  1. 3D variational brain tumor segmentation using Dirichlet priors on a clustered feature set.

    PubMed

    Popuri, Karteek; Cobzas, Dana; Murtha, Albert; Jägersand, Martin

    2012-07-01

    Brain tumor segmentation is a required step before any radiation treatment or surgery. When performed manually, segmentation is time consuming and prone to human errors. Therefore, there have been significant efforts to automate the process. But, automatic tumor segmentation from MRI data is a particularly challenging task. Tumors have a large diversity in shape and appearance with intensities overlapping the normal brain tissues. In addition, an expanding tumor can also deflect and deform nearby tissue. In our work, we propose an automatic brain tumor segmentation method that addresses these last two difficult problems. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multidimensional feature set. Then, we extract clusters which provide a compact representation of the essential information in these features. The main idea in this work is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned region statistics in the cluster space. We incorporate prior knowledge about the normal brain tissue appearance during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages the clusters from the normal brain region to be in the tumor region. This leads to a better disambiguation of the tumor from brain tissue. We evaluated the performance of our automatic segmentation method on 15 real MRI scans of brain tumor patients, with tumors that are inhomogeneous in appearance, small in size and in proximity to the major structures in the brain. Validation with the expert segmentation labels yielded encouraging results: Jaccard (58%), Precision (81%), Recall (67%), Hausdorff distance (24 mm). Using priors on the brain/tumor appearance, our proposed automatic 3D variational segmentation method was able to better disambiguate the tumor from the surrounding tissue.
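
    As a small illustration of the prior, the snippet below scores the cluster proportions observed inside a candidate tumor region under a Dirichlet density whose concentration parameters discourage clusters that dominate normal brain; the proportions and parameters are hypothetical.

```python
import numpy as np
from scipy.stats import dirichlet

# Cluster proportions inside the evolving tumor region (hypothetical).
props = np.array([0.70, 0.20, 0.10])
# Low concentrations for clusters 1 and 2, assumed to dominate normal
# brain tissue, penalize their presence inside the tumor region.
alpha = np.array([5.0, 0.5, 0.5])
log_prior = dirichlet.logpdf(props, alpha)
print(f"Dirichlet log-prior of the region's cluster mix: {log_prior:.2f}")
```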

  2. Vehicle track segmentation using higher order random fields

    DOE PAGES

    Quach, Tu -Thach

    2017-01-09

    Here, we present an approach to segment vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times. The approach uses multiscale higher order random field models to capture track statistics, such as curvatures and their parallel nature, that are not currently utilized in existing methods. These statistics are encoded as 3-by-3 patterns at different scales. The model can complete disconnected tracks often caused by sensor noise and various environmental effects. Coupling the model with a simple classifier, our approach is effective at segmenting salient tracks. We improve the F-measure on a standard vehicle track data set to 0.963, up from 0.897 obtained by the current state-of-the-art method.

  3. Vehicle track segmentation using higher order random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quach, Tu -Thach

    Here, we present an approach to segment vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times. The approach uses multiscale higher order random field models to capture track statistics, such as curvatures and their parallel nature, that are not currently utilized in existing methods. These statistics are encoded as 3-by-3 patterns at different scales. The model can complete disconnected tracks often caused by sensor noise and various environmental effects. Coupling the model with a simple classifier, our approach is effective at segmenting salient tracks. We improve the F-measure on a standard vehicle track data set to 0.963, up from 0.897 obtained by the current state-of-the-art method.

  4. Region growing using superpixels with learned shape prior

    NASA Astrophysics Data System (ADS)

    Borovec, Jiří; Kybic, Jan; Sugimoto, Akihiro

    2017-11-01

    Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our proposed method differs from classical region growing in three important aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speed-up. Second, our method uses learned statistical shape properties that encourage plausible shapes. In particular, we use ray features to describe the object boundary. Third, our method can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as an energy minimization and is solved either greedily or iteratively using graph cuts. We demonstrate the performance of the proposed method and compare it with alternative approaches on the task of segmenting individual eggs in microscopy images of Drosophila ovaries.
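
    A minimal sketch of superpixel-level region growing by intensity similarity only (the learned ray-feature shape prior and graph-cut solver are omitted); the SLIC parameters, seed location, and stopping rule are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation
from skimage.segmentation import slic

# Start from the superpixel under a seed pixel and repeatedly absorb
# the touching superpixel with the closest mean intensity.
rng = np.random.default_rng(7)
img = rng.normal(0.3, 0.05, (128, 128))
img[40:90, 30:80] = rng.normal(0.7, 0.05, (50, 50))   # bright object

sp = slic(img, n_segments=200, compactness=0.1, channel_axis=None)
means = {k: img[sp == k].mean() for k in np.unique(sp)}

region = {sp[60, 50]}                    # seed inside the object
while True:
    mask = np.isin(sp, list(region))
    ring = binary_dilation(mask) & ~mask
    neighbours = set(np.unique(sp[ring])) - region
    if not neighbours:
        break
    target = img[mask].mean()
    best = min(neighbours, key=lambda k: abs(means[k] - target))
    if abs(means[best] - target) > 0.1:  # similarity stopping criterion
        break
    region.add(best)
segmentation = np.isin(sp, list(region))
```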

  5. Statistical Validation of Image Segmentation Quality Based on a Spatial Overlap Index

    PubMed Central

    Zou, Kelly H.; Warfield, Simon K.; Bharatha, Aditya; Tempany, Clare M.C.; Kaus, Michael R.; Haker, Steven J.; Wells, William M.; Jolesz, Ferenc A.; Kikinis, Ron

    2005-01-01

    Rationale and Objectives To examine a statistical validation method based on the spatial overlap between two sets of segmentations of the same anatomy. Materials and Methods The Dice similarity coefficient (DSC) was used as a statistical validation metric to evaluate the performance of both the reproducibility of manual segmentations and the spatial overlap accuracy of automated probabilistic fractional segmentation of MR images, illustrated on two clinical examples. Example 1: 10 consecutive cases of prostate brachytherapy patients underwent both preoperative 1.5T and intraoperative 0.5T MR imaging. For each case, 5 repeated manual segmentations of the prostate peripheral zone were performed separately on preoperative and on intraoperative images. Example 2: A semi-automated probabilistic fractional segmentation algorithm was applied to MR imaging of 9 cases with 3 types of brain tumors. DSC values were computed and logit-transformed values were compared in the mean with the analysis of variance (ANOVA). Results Example 1: The mean DSCs of 0.883 (range, 0.876–0.893) with 1.5T preoperative MRI and 0.838 (range, 0.819–0.852) with 0.5T intraoperative MRI (P < .001) were within and at the margin of the range of good reproducibility, respectively. Example 2: Wide ranges of DSC were observed in brain tumor segmentations: Meningiomas (0.519–0.893), astrocytomas (0.487–0.972), and other mixed gliomas (0.490–0.899). Conclusion The DSC value is a simple and useful summary measure of spatial overlap, which can be applied to studies of reproducibility and accuracy in image segmentation. We observed generally satisfactory but variable validation results in two clinical applications. This metric may be adapted for similar validation tasks. PMID:14974593
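
    A small sketch of the validation arithmetic described above: logit-transform DSC values and compare group means with a one-way ANOVA. The values echo the reported ranges but are illustrative, and the study's exact ANOVA design may differ.

```python
import numpy as np
from scipy import stats

def logit(d):
    return np.log(d / (1 - d))

# Illustrative DSC values near the reported 1.5T and 0.5T ranges.
dsc_15t = np.array([0.883, 0.876, 0.893, 0.881, 0.885])
dsc_05t = np.array([0.838, 0.819, 0.852, 0.840, 0.844])
f, p = stats.f_oneway(logit(dsc_15t), logit(dsc_05t))
print(f"F = {f:.2f}, p = {p:.5f}")
```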

  6. Multi-object segmentation using coupled nonparametric shape and relative pose priors

    NASA Astrophysics Data System (ADS)

    Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep

    2009-02-01

    We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.

  7. Spatial Statistics for Segmenting Histological Structures in H&E Stained Tissue Images.

    PubMed

    Nguyen, Luong; Tosun, Akif Burak; Fine, Jeffrey L; Lee, Adrian V; Taylor, D Lansing; Chennubhotla, S Chakra

    2017-07-01

    Segmenting a broad class of histological structures in transmitted light and/or fluorescence-based images is a prerequisite for determining the pathological basis of cancer, elucidating spatial interactions between histological structures in tumor microenvironments (e.g., tumor infiltrating lymphocytes), facilitating precision medicine studies with deep molecular profiling, and providing an exploratory tool for pathologists. This paper focuses on segmenting histological structures in hematoxylin- and eosin-stained images of breast tissues, e.g., invasive carcinoma, carcinoma in situ, atypical and normal ducts, adipose tissue, and lymphocytes. We propose two graph-theoretic segmentation methods based on local spatial color and nuclei neighborhood statistics. For benchmarking, we curated a data set of 232 high-power field breast tissue images together with expertly annotated ground truth. To accurately model the preference for histological structures (ducts, vessels, tumor nets, adipose, etc.) over the remaining connective tissue and non-tissue areas in ground truth annotations, we propose a new region-based score for evaluating segmentation algorithms. We demonstrate the improvement of our proposed methods over the state-of-the-art algorithms in both region- and boundary-based performance measures.

  8. EEG Sleep Stages Classification Based on Time Domain Features and Structural Graph Similarity.

    PubMed

    Diykh, Mohammed; Li, Yan; Wen, Peng

    2016-11-01

    Electroencephalogram (EEG) signals are commonly used in diagnosing and treating sleep disorders. Many existing methods for sleep stage classification mainly depend on the analysis of EEG signals in the time or frequency domain to obtain a high classification accuracy. In this paper, statistical features in the time domain, structural graph similarity, and K-means (SGSKM) are combined to identify six sleep stages using single-channel EEG signals. First, each EEG segment is partitioned into sub-segments; the size of a sub-segment is determined empirically. Second, statistical features are extracted, sorted into different feature sets, and forwarded to the SGSKM to classify EEG sleep stages. We have also investigated the relationships between sleep stages and the time-domain features of the EEG data used in this paper. The experimental results show that the proposed method yields better classification results than four other existing methods and the support vector machine (SVM) classifier. An average classification accuracy of 95.93% is achieved with the proposed method.
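
    The feature-extraction step is easy to sketch. The snippet below partitions an epoch into sub-segments and computes common time-domain statistics; the paper's exact feature set and sub-segment size are determined empirically, so the choices here are illustrative assumptions:

        import numpy as np
        from scipy.stats import skew, kurtosis

        def time_domain_features(segment, n_sub=10):
            """Partition one EEG segment into sub-segments and extract simple
            time-domain statistics from each (illustrative feature set)."""
            subs = np.array_split(np.asarray(segment, dtype=float), n_sub)
            feats = []
            for s in subs:
                feats.extend([s.mean(), s.std(), s.min(), s.max(),
                              skew(s), kurtosis(s)])
            return np.array(feats)

        # 30-second epoch sampled at 100 Hz (synthetic stand-in data)
        rng = np.random.default_rng(0)
        epoch = rng.standard_normal(3000)
        print(time_domain_features(epoch).shape)  # (60,) = 10 sub-segments x 6 stats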

  9. Breast mass segmentation in mammography using plane fitting and dynamic programming.

    PubMed

    Song, Enmin; Jiang, Luan; Jin, Renchao; Zhang, Lin; Yuan, Yuan; Li, Qiang

    2009-07-01

    Segmentation is an important and challenging task in a computer-aided diagnosis (CAD) system. Accurate segmentation could improve the accuracy in lesion detection and characterization. The objective of this study is to develop and test a new segmentation method that aims at improving the performance level of breast mass segmentation in mammography, which could be used to provide accurate features for classification. This automated segmentation method consists of two main steps and combines the edge gradient, the pixel intensity, and the shape characteristics of the lesions to achieve good segmentation results. First, a plane fitting method was applied to a background-trend corrected region-of-interest (ROI) of a mass to obtain the edge candidate points. Second, a dynamic programming technique was used to find the "optimal" contour of the mass from the edge candidate points. Area-based similarity measures based on the radiologist's manually marked annotation and the segmented region were employed as criteria to evaluate the performance level of the segmentation method. With these evaluation criteria, the new method was compared with 1) the dynamic programming method developed by Timp and Karssemeijer, and 2) the normalized cut segmentation method, based on 337 ROIs extracted from a publicly available image database. The experimental results indicate that our segmentation method can achieve a higher performance level than the other two methods, and the improvements in segmentation performance were statistically significant. For instance, the mean overlap percentage for the new algorithm was 0.71, whereas those for Timp's dynamic programming method and the normalized cut segmentation method were 0.63 (P < .001) and 0.61 (P < .001), respectively. We developed a new segmentation method by use of plane fitting and dynamic programming, which achieved a relatively high performance level. The new segmentation method would be useful for improving the accuracy of computerized detection and classification of breast cancer in mammography.
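
    To make the second step concrete, here is a hedged sketch of dynamic programming over edge candidates: the boundary is sampled in polar coordinates, each candidate radius gets a cost (e.g., negative edge gradient), and a smoothness constraint limits radial jumps between neighboring angles. The cost design is assumed for illustration; the paper's actual cost combines gradient, intensity, and shape terms:

        import numpy as np

        def dp_optimal_contour(cost, max_jump=2):
            """Minimum-cost contour through a polar edge-candidate map.
            cost[i, j]: cost of placing the boundary at radius j for angle i.
            The radial jump between neighboring angles is limited to
            +/- max_jump (the smoothness constraint)."""
            n_angles, n_radii = cost.shape
            acc = np.full((n_angles, n_radii), np.inf)
            back = np.zeros((n_angles, n_radii), dtype=int)
            acc[0] = cost[0]
            for i in range(1, n_angles):
                for j in range(n_radii):
                    lo, hi = max(0, j - max_jump), min(n_radii, j + max_jump + 1)
                    k = lo + int(np.argmin(acc[i - 1, lo:hi]))
                    acc[i, j] = cost[i, j] + acc[i - 1, k]
                    back[i, j] = k
            # Backtrack from the cheapest endpoint
            path = [int(np.argmin(acc[-1]))]
            for i in range(n_angles - 1, 0, -1):
                path.append(int(back[i, path[-1]]))
            return path[::-1]  # one radius index per angle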

  10. A Unified Framework for Brain Segmentation in MR Images

    PubMed Central

    Yazdani, S.; Yusof, R.; Karimian, A.; Riazi, A. H.; Bennamoun, M.

    2015-01-01

    Brain MRI segmentation is important for studying brain structure and for diagnosing subtle anatomical changes in different brain diseases. However, due to several imaging artifacts, brain tissue segmentation remains a challenging task. The aim of this paper is to improve the automatic segmentation of the brain into gray matter, white matter, and cerebrospinal fluid in magnetic resonance images (MRI). We propose an automatic hybrid image segmentation method that integrates a modified statistical expectation-maximization (EM) method and spatial information combined with a support vector machine (SVM). The combined method yields more accurate results than either of its constituent techniques alone, as demonstrated through experiments on both synthetic and real MRI. The results of the proposed technique are evaluated against manual segmentations and other methods, based on real T1-weighted scans from the Internet Brain Segmentation Repository (IBSR) and simulated images from BrainWeb. The Kappa index is calculated to assess the performance of the proposed framework relative to the ground truth and expert segmentations. The results demonstrate that the proposed combined method performs satisfactorily on both simulated MRI and real brain datasets. PMID:26089978
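
    The Kappa index used for evaluation is chance-corrected agreement between two label maps. A minimal sketch with the formula written out (equivalent in spirit to scikit-learn's cohen_kappa_score; the function name here is just illustrative):

        import numpy as np

        def cohen_kappa(labels_a, labels_b):
            """Cohen's kappa between two label maps (flattened):
            kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
            and p_e is the agreement expected by chance."""
            a = np.asarray(labels_a).ravel()
            b = np.asarray(labels_b).ravel()
            classes = np.union1d(a, b)
            p_o = np.mean(a == b)  # observed agreement
            p_e = sum(np.mean(a == c) * np.mean(b == c) for c in classes)
            return (p_o - p_e) / (1.0 - p_e)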

  11. An Approach for Reducing the Error Rate in Automated Lung Segmentation

    PubMed Central

    Gill, Gurman; Beichel, Reinhard R.

    2016-01-01

    Robust lung segmentation is challenging, especially when tens of thousands of lung CT scans need to be processed, as required by large multi-center studies. The goal of this work was to develop and assess a method for the fusion of segmentation results from two different methods to generate lung segmentations that have a lower failure rate than the individual input segmentations. As the basis for the fusion approach, lung segmentations generated with a region-growing and a model-based approach were utilized. The fusion result was generated by comparing input segmentations and selectively combining them using a trained classification system. The method was evaluated on a diverse set of 204 CT scans of normal and diseased lungs. The fusion approach resulted in a Dice coefficient of 0.9855 ± 0.0106 and showed a statistically significant improvement compared to both input segmentation methods. In addition, the failure rate at different segmentation accuracy levels was assessed. For example, when requiring that lung segmentations must have a Dice coefficient better than 0.97, the fusion approach had a failure rate of 6.13%. In contrast, the failure rates for the region-growing and model-based methods were 18.14% and 15.69%, respectively. Therefore, the proposed method improves the quality of the lung segmentations, which is important for subsequent quantitative analysis of lungs. Also, to enable a comparison with other methods, results on the LOLA11 challenge test set are reported. PMID:27447897

  12. CT image segmentation methods for bone used in medical additive manufacturing.

    PubMed

    van Eijnatten, Maureen; van Dijk, Roelof; Dobbe, Johannes; Streekstra, Geert; Koivisto, Juha; Wolff, Jan

    2018-01-01

    The accuracy of additive manufactured medical constructs is limited by errors introduced during image segmentation. The aim of this study was to review the existing literature on different image segmentation methods used in medical additive manufacturing. Thirty-two publications that reported on the accuracy of bone segmentation based on computed tomography images were identified using PubMed, ScienceDirect, Scopus, and Google Scholar. The advantages and disadvantages of the different segmentation methods used in these studies were evaluated and reported accuracies were compared. The spread between the reported accuracies was large (0.04–1.9 mm). Global thresholding was the most commonly used segmentation method, with accuracies under 0.6 mm. The disadvantage of this method is the extensive manual post-processing required. Advanced thresholding methods could improve the accuracy to under 0.38 mm. However, such methods are currently not included in commercial software packages. Statistical shape model methods resulted in accuracies from 0.25 mm to 1.9 mm but are only suitable for anatomical structures with moderate anatomical variations. Thresholding remains the most widely used segmentation method in medical additive manufacturing. To improve the accuracy and reduce the costs of patient-specific additive manufactured constructs, more advanced segmentation methods are required. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
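
    Global thresholding, the most common method in this review, is a one-liner over the CT volume, which explains both its popularity and the manual clean-up it requires. A minimal sketch (the 300 HU value is a common but case-dependent choice, not one taken from the review):

        import numpy as np

        def segment_bone_global(ct_hu, threshold=300):
            """Global thresholding of a CT volume in Hounsfield units:
            every voxel at or above the threshold is labeled as bone.
            Simple and fast, but typically needs extensive manual
            post-processing, as noted in the review above."""
            return ct_hu >= threshold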

  13. The Statistical Segment Length of DNA: Opportunities for Biomechanical Modeling in Polymer Physics and Next-Generation Genomics.

    PubMed

    Dorfman, Kevin D

    2018-02-01

    The development of bright bisintercalating dyes for deoxyribonucleic acid (DNA) in the 1990s, most notably YOYO-1, revolutionized the field of polymer physics in the ensuing years. These dyes, in conjunction with modern molecular biology techniques, permit the facile observation of polymer dynamics via fluorescence microscopy and thus direct tests of different theories of polymer dynamics. At the same time, they have played a key role in advancing an emerging next-generation method known as genome mapping in nanochannels. The effect of intercalation on the bending energy of DNA as embodied by a change in its statistical segment length (or, alternatively, its persistence length) has been the subject of significant controversy. The precise value of the statistical segment length is critical for the proper interpretation of polymer physics experiments and controls the phenomena underlying the aforementioned genomics technology. In this perspective, we briefly review the model of DNA as a wormlike chain and a trio of methods (light scattering, optical or magnetic tweezers, and atomic force microscopy (AFM)) that have been used to determine the statistical segment length of DNA. We then outline the disagreement in the literature over the role of bisintercalation on the bending energy of DNA, and how a multiscale biomechanical approach could provide an important model for this scientifically and technologically relevant problem.
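
    For reference, the quantities discussed here connect through standard wormlike-chain relations (textbook results, not findings of this perspective): the statistical segment (Kuhn) length b is twice the persistence length, and the commonly quoted value for bare B-DNA, ℓp ≈ 50 nm, gives b ≈ 100 nm. In LaTeX form, for a chain of contour length L:

        % Standard wormlike-chain (Kratky-Porod) relations:
        b = 2\,\ell_p                                   % Kuhn / statistical segment length
        \langle R^2 \rangle = 2\,\ell_p L - 2\,\ell_p^2\left(1 - e^{-L/\ell_p}\right)
        \langle R^2 \rangle \approx b\,L \quad \text{for } L \gg \ell_p  % ideal-chain limit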

  14. Atlas-based liver segmentation and hepatic fat-fraction assessment for clinical trials.

    PubMed

    Yan, Zhennan; Zhang, Shaoting; Tan, Chaowei; Qin, Hongxing; Belaroussi, Boubakeur; Yu, Hui Jing; Miller, Colin; Metaxas, Dimitris N

    2015-04-01

    Automated assessment of hepatic fat-fraction is clinically important. A robust and precise segmentation would enable accurate, objective and consistent measurement of hepatic fat-fraction for disease quantification, therapy monitoring and drug development. However, segmenting the liver in clinical trials is a challenging task due to the variability of liver anatomy as well as the diverse sources from which the images were acquired. In this paper, we propose an automated and robust framework for liver segmentation and assessment. It uses single statistical atlas registration to initialize a robust deformable model to obtain a fine segmentation. A fat-fraction map is computed using a chemical-shift-based method in the delineated liver region. The proposed method is validated on 14 abdominal magnetic resonance (MR) volumetric scans. The qualitative and quantitative comparisons show that our proposed method can achieve better segmentation accuracy with less variance than two other atlas-based methods. Experimental results demonstrate the promise of our assessment framework. Copyright © 2014 Elsevier Ltd. All rights reserved.
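
    Given chemical-shift separated fat and water images, the fat-fraction within the segmented liver reduces to FF = F / (F + W). A sketch under that assumption (array names illustrative; the paper's exact chemical-shift processing may differ):

        import numpy as np

        def hepatic_fat_fraction(fat, water, liver_mask):
            """Voxel-wise fat-fraction FF = F / (F + W) from chemical-shift
            separated fat and water images, averaged over the segmented
            liver region."""
            f = fat[liver_mask].astype(float)
            w = water[liver_mask].astype(float)
            ff = f / np.maximum(f + w, 1e-9)  # guard against empty voxels
            return ff.mean()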

  15. Lung lobe segmentation based on statistical atlas and graph cuts

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

    This paper presents a novel method that extracts lung lobes by utilizing a probability atlas and multilabel graph cuts. Information about pulmonary structures plays a very important role in deciding the treatment strategy and in surgical planning. The human lungs are divided into five anatomical regions, the lung lobes, and precise segmentation and recognition of the lobes are indispensable tasks in computer-aided diagnosis and computer-aided surgery systems. Many lung lobe segmentation methods have been proposed; however, they target only normal cases and therefore cannot extract the lung lobes in abnormal cases, such as COPD cases. To handle such cases, this paper proposes a lung lobe segmentation method based on a probability atlas of lobe location and multilabel graph cuts. The process consists of three components: normalization based on the patient's physique, probability atlas generation, and segmentation based on graph cuts. We applied this method to six chest CT scans, including COPD cases; the Jaccard index was 79.1%.

  16. Segmentation of knee cartilage by using a hierarchical active shape model based on multi-resolution transforms in magnetic resonance images

    NASA Astrophysics Data System (ADS)

    León, Madeleine; Escalante-Ramirez, Boris

    2013-11-01

    Knee osteoarthritis (OA) is characterized by the morphological degeneration of cartilage. Efficient segmentation of cartilage is important for diagnosing cartilage damage and for assessing therapeutic response. We present a method for knee cartilage segmentation in magnetic resonance images (MRI). Our method incorporates the Hermite transform to obtain a hierarchical decomposition of the contours that describe knee cartilage shapes. Then, we compute a statistical model of the contour of interest from a set of training images. Thereby, our Hierarchical Active Shape Model (HASM) captures a large range of shape variability even from a small group of training samples, improving segmentation accuracy. The method was trained on a set of 16 knee MRI scans and tested using the leave-one-out method.

  17. A comparative study on preprocessing techniques in diabetic retinopathy retinal images: illumination correction and contrast enhancement.

    PubMed

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

    To investigate the effect of preprocessing techniques including contrast enhancement and illumination correction on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of the color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method using the median filter to estimate background illumination showed the lowest coefficients of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy, and it has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation.

  18. Automated measurement of uptake in cerebellum, liver, and aortic arch in full-body FDG PET/CT scans.

    PubMed

    Bauer, Christian; Sun, Shanhui; Sun, Wenqing; Otis, Justin; Wallace, Audrey; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M; Beichel, Reinhard R

    2012-06-01

    The purpose of this work was to develop and validate fully automated methods for uptake measurement of cerebellum, liver, and aortic arch in full-body PET/CT scans. Such measurements are of interest in the context of uptake normalization for quantitative assessment of metabolic activity and/or automated image quality control. Cerebellum, liver, and aortic arch regions were segmented with different automated approaches. Cerebella were segmented in PET volumes by means of a robust active shape model (ASM) based method. For liver segmentation, a largest possible hyperellipsoid was fitted to the liver in PET scans. The aortic arch was first segmented in CT images of a PET/CT scan by a tubular structure analysis approach, and the segmented result was then mapped to the corresponding PET scan. For each of the segmented structures, the average standardized uptake value (SUV) was calculated. To generate an independent reference standard for method validation, expert image analysts were asked to segment several cross sections of each of the three structures in 134 F-18 fluorodeoxyglucose (FDG) PET/CT scans. For each case, the true average SUV was estimated by utilizing statistical models and served as the independent reference standard. For automated aorta and liver SUV measurements, no statistically significant scale or shift differences were observed between automated results and the independent standard. In the case of the cerebellum, the scale and shift were not significantly different, if measured in the same cross sections that were utilized for generating the reference. In contrast, automated results were scaled 5% lower on average although not shifted, if FDG uptake was calculated from the whole segmented cerebellum volume. The estimated reduction in total SUV measurement error ranged between 54.7% and 99.2%, and the reduction was found to be statistically significant for cerebellum and aortic arch. With the proposed methods, the authors have demonstrated that automated SUV uptake measurements in cerebellum, liver, and aortic arch agree with expert-defined independent standards. The proposed methods were found to be accurate and showed less intra- and interobserver variability, compared to manual analysis. The approach provides an alternative to manual uptake quantification, which is time-consuming. Such an approach will be important for application of quantitative PET imaging to large scale clinical trials. © 2012 American Association of Physicists in Medicine.
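
    The SUV averaged over each segmented structure follows the standard body-weight normalization. A minimal sketch (function and argument names are illustrative; decay correction of the activity is assumed to have been applied upstream):

        import numpy as np

        def mean_suv(pet_activity_bq_ml, mask, injected_dose_bq, body_weight_g):
            """Average standardized uptake value over a segmented region:
            SUV = tissue activity concentration / (injected dose / body weight).
            With activity in Bq/mL and weight in grams, SUV is unitless
            (1 mL of tissue is taken as ~1 g)."""
            conc = pet_activity_bq_ml[mask].mean()
            return conc / (injected_dose_bq / body_weight_g)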

  19. Self-assessed performance improves statistical fusion of image labels

    PubMed Central

    Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.

    2014-01-01

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance. Statistical fusion resulted in statistically indistinguishable performance from self-assessed weighted voting. The authors developed a new theoretical basis for using self-assessed performance in the framework of statistical fusion and demonstrated that the combined sources of information (both statistical assessment and self-assessment) yielded statistically significant improvement over the methods considered separately. Conclusions: The authors present the first systematic characterization of self-assessed performance in manual labeling. The authors demonstrate that self-assessment and statistical fusion yield similar, but complementary, benefits for label fusion. Finally, the authors present a new theoretical basis for combining self-assessments with statistical label fusion. PMID:24593721
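
    A toy version of the voting comparison at the core of this study: confidence-weighted voting reduces to simple majority voting when all confidences are equal. The paper's full statistical fusion is a more elaborate performance-estimating algorithm; this sketch covers only the voting baselines:

        import numpy as np

        def weighted_vote(labels, confidences):
            """Fuse binary rater labels for one slice.
            labels: (n_raters, n_pixels) array of 0/1 labels.
            confidences: (n_raters,) self-assessed confidences in [0, 1].
            Equal confidences recover simple majority voting."""
            w = np.asarray(confidences, dtype=float)
            w = w / w.sum()
            score = (w[:, None] * labels).sum(axis=0)  # weighted fraction voting 1
            return (score >= 0.5).astype(np.uint8)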

  20. Evolution of semilocal string networks. II. Velocity estimators

    NASA Astrophysics Data System (ADS)

    Lopez-Eiguren, A.; Urrestilla, J.; Achúcarro, A.; Avgoustidis, A.; Martins, C. J. A. P.

    2017-07-01

    We continue a comprehensive numerical study of semilocal string networks and their cosmological evolution. These can be thought of as hybrid networks comprised of (nontopological) string segments, whose core structure is similar to that of Abelian Higgs vortices, and whose ends have long-range interactions and behavior similar to that of global monopoles. Our study provides further evidence of a linear scaling regime, already reported in previous studies, for the typical length scale and velocity of the network. We introduce a new algorithm to identify the position of the segment cores. This allows us to determine the length and velocity of each individual segment and follow their evolution in time. We study the statistical distribution of segment lengths and velocities for radiation- and matter-dominated evolution in the regime where the strings are stable. Our segment detection algorithm gives higher length values than previous studies based on indirect detection methods. The statistical distribution shows no evidence of (anti)correlation between the speed and the length of the segments.

  1. Lung segmentation from HRCT using united geometric active contours

    NASA Astrophysics Data System (ADS)

    Liu, Junwei; Li, Chuanfu; Xiong, Jin; Feng, Huanqing

    2007-12-01

    Accurate lung segmentation from high-resolution CT images is a challenging task due to fine tracheal structures, missing boundary segments, and complex lung anatomy. One popular method is based on gray-level thresholding, but its results are usually rough. A united geometric active contours model based on level sets is proposed for lung segmentation in this paper. In particular, this method combines local boundary information and a region statistics-based model simultaneously: 1) the boundary term ensures the integrity of the lung tissue; 2) the region term makes the level set function evolve according to global characteristics, independent of the initial settings. A penalizing energy term is introduced into the model, which allows the level set function to evolve without re-initialization. The method is found to be much more efficient for lung segmentation than methods based only on boundary or region information. Results are shown as 3D lung surface reconstructions, indicating that the method can play an important role in the design of computer-aided diagnosis (CAD) systems.

  2. Semi-automatic segmentation of brain tumors using population and individual information.

    PubMed

    Wu, Yao; Yang, Wei; Jiang, Jun; Li, Shuanqian; Feng, Qianjin; Chen, Wufan

    2013-08-01

    Efficient segmentation of tumors in medical images is of great practical importance in early diagnosis and radiation treatment planning. This paper proposes a novel semi-automatic segmentation method based on population and individual statistical information to segment brain tumors in magnetic resonance (MR) images. First, high-dimensional image features are extracted. Neighborhood components analysis is proposed to learn two optimal distance metrics, which contain population and patient-specific information, respectively. The probability of each pixel belonging to the foreground (tumor) and the background is estimated by the k-nearest neighborhood classifier under the learned optimal distance metrics. A cost function for segmentation is constructed through these probabilities and is optimized using graph cuts. Finally, some morphological operations are performed to improve the achieved segmentation results. Our dataset consists of 137 brain MR images, including 68 for training and 69 for testing. The proposed method overcomes segmentation difficulties caused by the uneven gray-level distribution of the tumors and can even achieve satisfactory results when the tumors have fuzzy edges. Experimental results demonstrate that the proposed method is robust for brain tumor segmentation.
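
    The pixel-probability step can be sketched as follows. Plain Euclidean distance stands in for the two NCA-learned metrics, and the features and labels are synthetic; only the k-NN probability estimate that feeds the graph-cut cost is shown:

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        X_train = rng.standard_normal((500, 8))    # pixel feature vectors
        y_train = (X_train[:, 0] > 0).astype(int)  # toy tumor/background labels

        knn = KNeighborsClassifier(n_neighbors=15)
        knn.fit(X_train, y_train)

        X_pixels = rng.standard_normal((1000, 8))  # features of unseen pixels
        p_tumor = knn.predict_proba(X_pixels)[:, 1]  # per-pixel foreground probability
        # These probabilities would enter the graph-cut cost function.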

  3. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.

    PubMed

    Guo, Shengwen; Fei, Baowei

    2009-03-27

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  4. A minimal path searching approach for active shape model (ASM)-based segmentation of the lung

    NASA Astrophysics Data System (ADS)

    Guo, Shengwen; Fei, Baowei

    2009-02-01

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  5. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung

    PubMed Central

    Guo, Shengwen; Fei, Baowei

    2013-01-01

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs. PMID:24386531

  6. Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.

    ERIC Educational Resources Information Center

    Wang, James Z.; Du, Yanping

    Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…

  7. A vessel segmentation method for multi-modality angiographic images based on multi-scale filtering and statistical models.

    PubMed

    Lu, Pei; Xia, Jun; Li, Zhicheng; Xiong, Jing; Yang, Jian; Zhou, Shoujun; Wang, Lei; Chen, Mingyang; Wang, Cheng

    2016-11-08

    Accurate segmentation of blood vessels plays an important role in the computer-aided diagnosis and interventional treatment of vascular diseases. Statistical methods are an important component of effective vessel segmentation; however, several limitations hamper their performance, namely dependence on the image modality, uneven contrast media, bias fields, and overlapping intensity distributions of object and background. In addition, the mixture models of these statistical methods are constructed by relying on the characteristics of the image histograms, so it is challenging for traditional methods to handle vessel segmentation across multi-modality angiographic images. To overcome these limitations, a flexible segmentation method with a fixed mixture model has been proposed for various angiography modalities. Our method consists of three main parts. First, a multi-scale filtering algorithm is applied to the original images to enhance vessels and suppress noise; the filtered data thereby acquire a new statistical characteristic. Second, a mixture model formed by three probability distributions (two exponential distributions and one Gaussian distribution) is fitted to the histogram of the filtered data, with the expectation-maximization (EM) algorithm used for parameter estimation. Third, a three-dimensional (3D) Markov random field (MRF) is employed to improve the accuracy of pixel-wise classification and posterior probability estimation. To quantitatively evaluate the performance of the proposed method, two phantoms simulating blood vessels with different tubular structures and noise were devised. Meanwhile, four clinical angiographic data sets from different human organs were used to qualitatively validate the method. To further test the performance, comparisons between the proposed method and traditional ones were conducted on two different brain magnetic resonance angiography (MRA) data sets. The phantom results were satisfying: noise was greatly suppressed, the percentages of misclassified voxels (segmentation error ratios) were no more than 0.3%, and the Dice similarity coefficients (DSCs) were above 94%. According to clinical vascular specialists, the vessels in the various data sets were extracted with high accuracy, since complete vessel trees were extracted while few non-vessel and background structures were falsely classified as vessel. In the comparison experiments, the proposed method showed superior accuracy and robustness in extracting vascular structures from multi-modality angiographic images with complicated background noise. The experimental results demonstrate that the method is applicable to various angiographic data, mainly because the constructed mixture probability model can uniformly classify the vessel object from the multi-scale-filtered data of various angiography images. The advantages of the proposed method are threefold: first, it can extract vessels from angiography of poor quality, since multi-scale filtering improves vessel intensity under conditions such as uneven contrast media and bias fields; second, it performs well on multi-modality angiographic images despite various signal noises; and third, it achieves better accuracy and robustness than traditional methods. Overall, these traits suggest that the proposed method has significant potential for clinical application.
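
    The histogram-fitting step can be sketched with a small EM loop for the stated three-component mixture (two exponentials for background and noise, one Gaussian for vessels). The initialization and parameterization here are assumptions for illustration, and the MRF refinement step is omitted:

        import numpy as np

        def fit_two_exp_one_gauss(x, n_iter=200):
            """EM for a mixture of two exponentials plus one Gaussian,
            fitted to non-negative filtered intensities x. Illustrative
            sketch only; initialization is heuristic."""
            x = np.asarray(x, dtype=float)
            w = np.array([0.45, 0.45, 0.10])                    # mixing weights
            scale = np.array([x.mean() * 0.5, x.mean() * 2.0])  # exponential means
            mu, sigma = np.percentile(x, 95), x.std()           # Gaussian near bright tail
            for _ in range(n_iter):
                # E-step: component densities and responsibilities
                pdf = np.empty((3, x.size))
                pdf[0] = np.exp(-x / scale[0]) / scale[0]
                pdf[1] = np.exp(-x / scale[1]) / scale[1]
                pdf[2] = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
                r = w[:, None] * pdf
                r /= np.maximum(r.sum(axis=0), 1e-300)
                # M-step: weighted maximum-likelihood updates
                n_k = np.maximum(r.sum(axis=1), 1e-12)
                w = n_k / x.size
                scale = (r[:2] @ x) / n_k[:2]
                mu = (r[2] @ x) / n_k[2]
                sigma = np.sqrt((r[2] @ (x - mu) ** 2) / n_k[2])
            return w, scale, mu, sigma  # posteriors r would feed the 3D MRF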

  8. Active mask segmentation of fluorescence microscope images.

    PubMed

    Srinivasa, Gowri; Fickus, Matthew C; Guo, Yusong; Linstedt, Adam D; Kovacević, Jelena

    2009-08-01

    We propose a new active mask algorithm for the segmentation of fluorescence microscope images of punctate patterns. It combines the (a) flexibility offered by active-contour methods, (b) speed offered by multiresolution methods, (c) smoothing offered by multiscale methods, and (d) statistical modeling offered by region-growing methods into a fast and accurate segmentation tool. The framework moves from the idea of the "contour" to that of "inside and outside," or masks, allowing for easy multidimensional segmentation. It adapts to the topology of the image through the use of multiple masks. The algorithm is almost invariant under initialization, allowing for random initialization, and uses a few easily tunable parameters. Experiments show that the active mask algorithm matches the ground truth well and outperforms seeded watershed, the algorithm widely used in fluorescence microscopy, both qualitatively and quantitatively.

  9. Dendritic tree extraction from noisy maximum intensity projection images in C. elegans.

    PubMed

    Greenblum, Ayala; Sznitman, Raphael; Fua, Pascal; Arratia, Paulo E; Oren, Meital; Podbilewicz, Benjamin; Sznitman, Josué

    2014-06-12

    Maximum Intensity Projections (MIP) of neuronal dendritic trees obtained from confocal microscopy are frequently used to study the relationship between tree morphology and mechanosensory function in the model organism C. elegans. However, extracting dendritic trees from noisy images remains a strenuous process that has traditionally relied on manual approaches. Here, we focus on automated and reliable 2D segmentations of dendritic trees following a statistical learning framework. Our dendritic tree extraction (DTE) method uses small amounts of labelled training data on MIPs to learn noise models of texture-based features from the responses of tree structures and image background. Our strategy lies in evaluating statistical models of noise that account for both the variability generated by the imaging process and the aggregation of information in the MIP images. These noise models are then used within a probabilistic, or Bayesian, framework to provide a coarse 2D dendritic tree segmentation. Finally, some post-processing is applied to refine the segmentations and provide skeletonized trees using a morphological thinning process. Following a Leave-One-Out Cross Validation (LOOCV) method on an MIP database with available "ground truth" images, we demonstrate that our approach provides significant improvements in tree-structure segmentations over traditional intensity-based methods. Improvements for MIPs under various imaging conditions are both qualitative and quantitative, as measured from Receiver Operator Characteristic (ROC) curves and the yield and error rates in the final segmentations. In a final step, we demonstrate our DTE approach on previously unseen MIP samples, including the extraction of skeletonized structures, and compare our method to a state-of-the-art dendritic tree tracing software. Overall, our DTE method allows for robust dendritic tree segmentations in noisy MIPs, outperforming traditional intensity-based methods. This approach provides a usable segmentation framework, ultimately delivering a speed-up for dendritic tree identification on the user end and a reliable first step towards further morphological characterizations of tree arborization.

  10. Physics-Based Image Segmentation Using First Order Statistical Properties and Genetic Algorithm for Inductive Thermography Imaging.

    PubMed

    Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun

    2018-05-01

    Thermographic inspection has been widely applied to non-destructive testing and evaluation, with the capability of rapid, contactless, large-surface-area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high-level performance, specific physics-based models that describe defect generation and enable the precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns via an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold and render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography is implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of different segmentation algorithms.

  11. OASIS is Automated Statistical Inference for Segmentation, with applications to multiple sclerosis lesion segmentation in MRI.

    PubMed

    Sweeney, Elizabeth M; Shinohara, Russell T; Shiee, Navid; Mateen, Farrah J; Chudgar, Avni A; Cuzzocreo, Jennifer L; Calabresi, Peter A; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M

    2013-01-01

    Magnetic resonance imaging (MRI) can be used to detect lesions in the brains of multiple sclerosis (MS) patients and is essential for diagnosing the disease and monitoring its progression. In practice, lesion load is often quantified by either manual or semi-automated segmentation of MRI, which is time-consuming, costly, and associated with large inter- and intra-observer variability. We propose OASIS is Automated Statistical Inference for Segmentation (OASIS), an automated statistical method for segmenting MS lesions in MRI studies. We use logistic regression models incorporating multiple MRI modalities to estimate voxel-level probabilities of lesion presence. Intensity-normalized T1-weighted, T2-weighted, fluid-attenuated inversion recovery and proton density volumes from 131 MRI studies (98 MS subjects, 33 healthy subjects) with manual lesion segmentations were used to train and validate our model. Within this set, OASIS detected lesions with a partial area under the receiver operating characteristic curve, for clinically relevant false positive rates of 1% and below, of 0.59 (95% CI: [0.50, 0.67]) at the voxel level. An experienced MS neuroradiologist compared these segmentations to those produced by LesionTOADS, an image segmentation software that provides segmentation of both lesions and normal brain structures. For lesions, OASIS outperformed LesionTOADS in 74% (95% CI: [65%, 82%]) of cases for the 98 MS subjects. To further validate the method, we applied OASIS to 169 MRI studies acquired at a separate center. The neuroradiologist again compared the OASIS segmentations to those from LesionTOADS. For lesions, OASIS ranked higher than LesionTOADS in 77% (95% CI: [71%, 83%]) of cases. For a randomly selected subset of 50 of these studies, one additional radiologist and one neurologist also scored the images. Within this set, the neuroradiologist ranked OASIS higher than LesionTOADS in 76% (95% CI: [64%, 88%]) of cases, the neurologist in 66% (95% CI: [52%, 78%]), and the radiologist in 52% (95% CI: [38%, 66%]). OASIS obtains the estimated probability for each voxel to be part of a lesion by weighting each imaging modality with coefficient weights. These coefficients are explicit, obtained using standard model fitting techniques, and can be reused in other imaging studies. This fully automated method allows sensitive and specific detection of lesion presence and may be rapidly applied to large collections of images.
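
    The core idea, logistic regression on intensity-normalized multimodal voxels yielding explicit, reusable coefficients, can be sketched as follows. The feature choice (raw modality intensities) and labels are a simplification of the paper's model:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        X = rng.standard_normal((20000, 4))  # columns: T1, T2, FLAIR, PD per voxel
        y = (X[:, 2] > 1.5).astype(int)      # toy labels: "bright on FLAIR" = lesion

        model = LogisticRegression(max_iter=1000).fit(X, y)
        p_lesion = model.predict_proba(X)[:, 1]  # voxel-level lesion probabilities
        lesion_mask = p_lesion > 0.5             # threshold would be tuned on validation data
        print(model.coef_)  # explicit per-modality weights, reusable in other studies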

  12. Statistical segmentation of multidimensional brain datasets

    NASA Astrophysics Data System (ADS)

    Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro

    2001-07-01

    This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes some of the problems of multidimensional clustering techniques, such as partial volume effects (PVE), processing speed, and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region-growing techniques with fully automated seed selection. 2) Expectation-maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of Gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter, and grey matter. With this procedure, our method takes advantage of the full covariance matrix (instead of the diagonal) for the joint PDF estimation; moreover, logistic discrimination techniques are more robust against violations of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov random field techniques. The algorithm has been tested on a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.

  13. Disjunctive Normal Shape and Appearance Priors with Applications to Image Segmentation.

    PubMed

    Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga

    2015-10-01

    The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. Active shape and appearance models require landmark points and assume unimodal shape and appearance distributions. Level set based shape priors are limited to global shape similarity. In this paper, we present novel shape and appearance priors for image segmentation based on an implicit parametric shape representation called the disjunctive normal shape model (DNSM). The DNSM is formed by a disjunction of conjunctions of half-spaces defined by discriminants. We learn shape and appearance statistics at varying spatial scales using nonparametric density estimation. Our method can generate a rich set of shape variations by locally combining training shapes. Additionally, by studying the intensity and texture statistics around each discriminant of our shape model, we construct a local appearance probability map. Experiments carried out on both medical and natural image datasets show the potential of the proposed method.

  14. Spatially adapted augmentation of age-specific atlas-based segmentation using patch-based priors

    NASA Astrophysics Data System (ADS)

    Liu, Mengyuan; Seshamani, Sharmishtaa; Harrylock, Lisa; Kitsch, Averi; Miller, Steven; Chau, Van; Poskitt, Kenneth; Rousseau, Francois; Studholme, Colin

    2014-03-01

    One of the most common approaches to MRI brain tissue segmentation is to employ an atlas prior to initialize an Expectation-Maximization (EM) image labeling scheme using a statistical model of MRI intensities. This prior is commonly derived from a set of manually segmented training data from the population of interest. However, in cases where subject anatomy varies significantly from the prior anatomical average model (for example in the case where extreme developmental abnormalities or brain injuries occur), the prior tissue map does not provide adequate information about the observed MRI intensities to ensure the EM algorithm converges to an anatomically accurate labeling of the MRI. In this paper, we present a novel approach for automatic segmentation of such cases. This approach augments the atlas-based EM segmentation by exploring methods to build a hybrid tissue segmentation scheme that seeks to learn where an atlas prior fails (due to inadequate representation of anatomical variation in the statistical atlas) and utilize an alternative prior derived from a patch driven search of the atlas data. We describe a framework for incorporating this patch-based augmentation of EM (PBAEM) into a 4D age-specific atlas-based segmentation of developing brain anatomy. The proposed approach was evaluated on a set of MRI brain scans of premature neonates with ages ranging from 27.29 to 46.43 gestational weeks (GWs). Results indicated superior performance compared to the conventional atlas-based segmentation method, providing improved segmentation accuracy for gray matter, white matter, ventricles and sulcal CSF regions.

  15. A flexible and robust approach for segmenting cell nuclei from 2D microscopy images using supervised learning and template matching

    PubMed Central

    Chen, Cheng; Wang, Wei; Ozolek, John A.; Rohde, Gustavo K.

    2013-01-01

    We describe a new supervised learning-based template matching approach for segmenting cell nuclei from microscopy images. The method uses examples selected by a user to build a statistical model that captures the texture and shape variations of the nuclear structures in a given dataset to be segmented. Segmentation of subsequent, unlabeled images is then performed by finding the model instance that best matches (in the normalized cross correlation sense) the local neighborhood in the input image. We demonstrate the application of our method to segmenting nuclei from a variety of imaging modalities, and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software is able to obtain high accuracy across the several imaging modalities studied. Results also demonstrate that, relative to several existing methods, our template-based method is more robust: it better handles variations in illumination and in texture across imaging modalities, provides smoother and more accurate segmentation borders, and better handles cluttered nuclei. PMID:23568787
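
    The matching score referred to above is standard normalized cross correlation. A minimal sketch of the score for one patch/template pair (the search over positions and model instances is omitted):

        import numpy as np

        def normalized_cross_correlation(patch, template):
            """Normalized cross correlation between an image neighborhood
            and a model instance: the mean is removed from each signal and
            the dot product is normalized by both norms, giving a score in
            [-1, 1]. Segmentation picks the instance maximizing this score."""
            p = patch.astype(float).ravel()
            t = template.astype(float).ravel()
            p = p - p.mean()
            t = t - t.mean()
            denom = np.linalg.norm(p) * np.linalg.norm(t)
            return float(p @ t / denom) if denom > 0 else 0.0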

  16. A Comparative Study on Preprocessing Techniques in Diabetic Retinopathy Retinal Images: Illumination Correction and Contrast Enhancement

    PubMed Central

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

    To investigate the effect of preprocessing techniques including contrast enhancement and illumination correction on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of the color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method using the median filter to estimate background illumination showed the lowest coefficients of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy, and it has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940

  17. Automated segmentation of the parotid gland based on atlas registration and machine learning: a longitudinal MRI study in head-and-neck radiation therapy.

    PubMed

    Yang, Xiaofeng; Wu, Ning; Cheng, Guanghui; Zhou, Zhengyang; Yu, David S; Beitler, Jonathan J; Curran, Walter J; Liu, Tian

    2014-12-01

    To develop an automated magnetic resonance imaging (MRI) parotid segmentation method to monitor radiation-induced parotid gland changes in patients after head and neck radiation therapy (RT). The proposed method combines the atlas registration method, which captures the global variation of anatomy, with a machine learning technology, which captures the local statistical features, to automatically segment the parotid glands from the MRIs. The segmentation method consists of 3 major steps. First, an atlas (pre-RT MRI and manually contoured parotid gland mask) is built for each patient. A hybrid deformable image registration is used to map the pre-RT MRI to the post-RT MRI, and the transformation is applied to the pre-RT parotid volume. Second, the kernel support vector machine (SVM) is trained with the subject-specific atlas pair consisting of multiple features (intensity, gradient, and others) from the aligned pre-RT MRI and the transformed parotid volume. Third, the well-trained kernel SVM is used to differentiate the parotid from surrounding tissues in the post-RT MRIs by statistically matching multiple texture features. A longitudinal study of 15 patients undergoing head and neck RT was conducted: baseline MRI was acquired prior to RT, and the post-RT MRIs were acquired at 3-, 6-, and 12-month follow-up examinations. The resulting segmentations were compared with the physicians' manual contours. Successful parotid segmentation was achieved for all 15 patients (42 post-RT MRIs). The average percentage of volume differences between the automated segmentations and those of the physicians' manual contours were 7.98% for the left parotid and 8.12% for the right parotid. The average volume overlap was 91.1% ± 1.6% for the left parotid and 90.5% ± 2.4% for the right parotid. The parotid gland volume reduction at follow-up was 25% at 3 months, 27% at 6 months, and 16% at 12 months. We have validated our automated parotid segmentation algorithm in a longitudinal study. This segmentation method may be useful in future studies to address radiation-induced xerostomia in head and neck radiation therapy. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Automatic cell detection and segmentation from H and E stained pathology slides using colorspace decorrelation stretching

    NASA Astrophysics Data System (ADS)

    Peikari, Mohammad; Martel, Anne L.

    2016-03-01

    Purpose: Automatic cell segmentation plays an important role in reliable diagnosis and prognosis for patients. Most state-of-the-art cell detection and segmentation techniques focus on complicated methods to subtract foreground cells from the background. In this study, we introduce a preprocessing method which leads to better detection and segmentation results compared to a well-known state-of-the-art work. Method: We transform the original red-green-blue (RGB) space into a new space defined by the top eigenvectors of the RGB space. Stretching is done by manipulating the contrast of each pixel value to equalize the color variances. The new pixel values are then inverse-transformed to the original RGB space, and this altered RGB image is used to segment cells. Result: Validation against a well-known state-of-the-art technique revealed a statistically significant improvement on an identical validation set; we achieved a mean F1-score of 0.901. Conclusion: Preprocessing steps that decorrelate colorspaces may improve cell segmentation performance.
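
    The three transform steps described above map directly to code: project pixels onto the eigenvectors of the RGB covariance, rescale to equalize component variances, and transform back. A NumPy sketch (the stretch target and the 8-bit clipping are illustrative assumptions):

        import numpy as np

        def decorrelation_stretch(rgb):
            """Decorrelation stretching of an (H, W, 3) uint8 image."""
            pixels = rgb.reshape(-1, 3).astype(float)
            mean = pixels.mean(axis=0)
            centered = pixels - mean
            cov = np.cov(centered, rowvar=False)
            eigvals, eigvecs = np.linalg.eigh(cov)   # eigen-decomposition of RGB space
            scores = centered @ eigvecs              # rotate into decorrelated space
            target = np.sqrt(eigvals.mean())         # common std for all components
            scores *= target / np.sqrt(np.maximum(eigvals, 1e-12))  # equalize variances
            stretched = scores @ eigvecs.T + mean    # inverse transform back to RGB
            return np.clip(stretched, 0, 255).reshape(rgb.shape).astype(np.uint8)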

  19. Interactive surface correction for 3D shape based segmentation

    NASA Astrophysics Data System (ADS)

    Schwarz, Tobias; Heimann, Tobias; Tetzlaff, Ralf; Rau, Anne-Mareike; Wolf, Ivo; Meinzer, Hans-Peter

    2008-03-01

    Statistical shape models have become a fast and robust method for segmentation of anatomical structures in medical image volumes. In clinical practice, however, pathological cases and image artifacts can lead to local deviations of the detected contour from the true object boundary. These deviations have to be corrected manually. We present an intuitively applicable solution for surface interaction based on Gaussian deformation kernels. The method is evaluated by two radiological experts on segmentations of the liver in contrast-enhanced CT images and of the left heart ventricle (LV) in MRI data. For both applications, five datasets are segmented automatically using deformable shape models, and the resulting surfaces are corrected manually. The interactive correction step improves the average surface distance against ground truth from 2.43 mm to 2.17 mm for the liver, and from 2.71 mm to 1.34 mm for the LV. We expect this method to raise the acceptance of automatic segmentation methods in clinical application.

  20. Incorporating User Input in Template-Based Segmentation

    PubMed Central

    Vidal, Camille; Beggs, Dale; Younes, Laurent; Jain, Sanjay K.; Jedynak, Bruno

    2015-01-01

    We present a simple and elegant method to incorporate user input in a template-based segmentation method for diseased organs. The user provides a partial segmentation of the organ of interest, which is used to guide the template towards its target. The user also highlights some elements of the background that should be excluded from the final segmentation. We derive by likelihood maximization a registration algorithm from a simple statistical image model in which the user labels are modeled as Bernoulli random variables. The resulting registration algorithm minimizes the sum of square differences between the binary template and the user labels, while preventing the template from shrinking, and penalizing for the inclusion of background elements into the final segmentation. We assess the performance of the proposed algorithm on synthetic images in which the amount of user annotation is controlled. We demonstrate our algorithm on the segmentation of the lungs of Mycobacterium tuberculosis infected mice from μCT images. PMID:26146532

  1. Level set method for image segmentation based on moment competition

    NASA Astrophysics Data System (ADS)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust to initial contour placement and parameter setting than traditional methods. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  2. Infants' statistical learning: 2- and 5-month-olds' segmentation of continuous visual sequences.

    PubMed

    Slone, Lauren Krogh; Johnson, Scott P

    2015-05-01

    Past research suggests that infants have powerful statistical learning abilities; however, studies of infants' visual statistical learning offer differing accounts of the developmental trajectory of and constraints on this learning. To elucidate this issue, the current study tested the hypothesis that young infants' segmentation of visual sequences depends on redundant statistical cues to segmentation. A sample of 20 2-month-olds and 20 5-month-olds observed a continuous sequence of looming shapes in which unit boundaries were defined by both transitional probability and co-occurrence frequency. Following habituation, only 5-month-olds showed evidence of statistically segmenting the sequence, looking longer to a statistically improbable shape pair than to a probable pair. These results reaffirm the power of statistical learning in infants as young as 5 months but also suggest considerable development of statistical segmentation ability between 2 and 5 months of age. Moreover, the results do not support the idea that infants' ability to segment visual sequences based on transitional probabilities and/or co-occurrence frequencies is functional at the onset of visual experience, as has been suggested previously. Rather, this type of statistical segmentation appears to be constrained by the developmental state of the learner. Factors contributing to the development of statistical segmentation ability during early infancy, including memory and attention, are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
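
    The statistic at issue can be made concrete with a small sketch (ours, not the study's stimulus code): transitional probability TP(a→b) = P(b | a) over adjacent items, with unit boundaries posited where TP dips:

```python
from collections import Counter

def transitional_probabilities(seq):
    """TP(a -> b) = count(a, b) / count(a), over adjacent items in seq."""
    pairs = Counter(zip(seq, seq[1:]))
    firsts = Counter(seq[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

def segment(seq, threshold=0.75):
    """Split seq at low-transitional-probability boundaries."""
    tp = transitional_probabilities(seq)
    units, unit = [], [seq[0]]
    for a, b in zip(seq, seq[1:]):
        if tp[(a, b)] < threshold:     # dip in TP -> posited unit boundary
            units.append(unit)
            unit = []
        unit.append(b)
    units.append(unit)
    return units

# Within-triplet TPs are 1.0; across unit boundaries they drop to 0.5,
# so the sequence splits into its ABC / XYZ units.
print(segment(list("ABCABCXYZXYZABC")))
```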

  3. Patellar segmentation from 3D magnetic resonance images using guided recursive ray-tracing for edge pattern detection

    NASA Astrophysics Data System (ADS)

    Cheng, Ruida; Jackson, Jennifer N.; McCreedy, Evan S.; Gandler, William; Eijkenboom, J. J. F. A.; van Middelkoop, M.; McAuliffe, Matthew J.; Sheehan, Frances T.

    2016-03-01

    The paper presents an automatic segmentation methodology for the patellar bone, based on 3D gradient recalled echo and gradient recalled echo with fat suppression magnetic resonance images. Constricted search space outlines are incorporated into recursive ray-tracing to segment the outer cortical bone. A statistical analysis based on the dependence of information in adjacent slices is used to limit the search in each image to between an outer and inner search region. A section based recursive ray-tracing mechanism is used to skip inner noise regions and detect the edge boundary. The proposed method achieves higher segmentation accuracy (0.23mm) than the current state-of-the-art methods with the average dice similarity coefficient of 96.0% (SD 1.3%) agreement between the auto-segmentation and ground truth surfaces.

  4. Statistical Inference in Hidden Markov Models Using k-Segment Constraints

    PubMed Central

    Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher

    2016-01-01

    Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we can obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint on the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674
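
    A hedged sketch of the core idea (ours, not the authors' recursions): a Viterbi-style dynamic program whose state is augmented with a counter over segments, i.e., runs of identical hidden states, so the decode can be constrained to at most k segments:

```python
import numpy as np

def viterbi_k_segments(log_pi, log_A, log_B, k):
    """Best log-probability of a state path with at most k segments.

    log_pi: (S,) initial log-probs; log_A: (S, S) transition log-probs;
    log_B: (T, S) emission log-likelihoods per time step.
    """
    T, S = log_B.shape
    V = np.full((T, S, k + 1), -np.inf)   # V[t, s, j]: best path ending in state s with j segments
    V[0, :, 1] = log_pi + log_B[0]        # the first observation opens segment 1
    for t in range(1, T):
        for s in range(S):
            for j in range(1, k + 1):
                best = V[t - 1, s, j] + log_A[s, s]         # stay within the current segment
                if j > 1:
                    for r in range(S):                      # switch state -> open a new segment
                        if r != s:
                            best = max(best, V[t - 1, r, j - 1] + log_A[r, s])
                V[t, s, j] = best + log_B[t, s]
    return V[-1].max()    # a backtrace over V would recover the MAP k-segment path
```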

  5. Machine learning in a graph framework for subcortical segmentation

    NASA Astrophysics Data System (ADS)

    Guo, Zhihui; Kashyap, Satyananda; Sonka, Milan; Oguz, Ipek

    2017-02-01

    Automated and reliable segmentation of subcortical structures from human brain magnetic resonance images is of great importance for volumetric and shape analyses in quantitative neuroimaging studies. However, poor boundary contrast and variable shape of these structures make automated segmentation a challenging task. We propose a 3D graph-based machine learning method, called LOGISMOS-RF, to segment the caudate and the putamen from brain MRI scans in a robust and accurate way. An atlas-based tissue classification and bias-field correction method is applied to the images to generate an initial segmentation for each structure. Then a 3D graph framework is utilized to construct a geometric graph for each initial segmentation. A locally trained random forest classifier is used to assign a cost to each graph node. The max-flow algorithm is applied to solve the segmentation problem. Evaluation was performed on a dataset of T1-weighted MRIs of 62 subjects, with 42 images used for training and 20 images for testing. For comparison, FreeSurfer, FSL and BRAINSCut approaches were also evaluated using the same dataset. Dice overlap coefficients and surface-to-surface distances between the automated segmentation and expert manual segmentations indicate that the results of our method are statistically significantly more accurate than the three other methods, for both the caudate (Dice: 0.89 +/- 0.03) and the putamen (0.89 +/- 0.03).

  6. Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.

    PubMed

    Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L

    2008-04-01

    The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdf). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where window size was related to speckle size (5 × 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean squares filtering was then introduced using the moment-based scale parameter (σ²/μ) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved sharpness of the edges. In conclusion, a trade-off between preservation of sharpness of edges and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.
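
    The moment-based estimators mentioned here have closed forms for the Gamma-pdf: shape k = μ²/σ² and scale θ = σ²/μ. A minimal sketch of the resulting parametric images, with an illustrative window size rather than the paper's speckle-matched 5 × 2 window:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gamma_parameter_images(env, win=(15, 7)):
    """Local moment-based Gamma shape/scale maps from an envelope image."""
    mu = uniform_filter(env, size=win)          # local mean
    mu2 = uniform_filter(env ** 2, size=win)    # local second moment
    var = np.maximum(mu2 - mu ** 2, 1e-12)      # local variance
    shape = mu ** 2 / var                       # Gamma shape, mu^2 / sigma^2
    scale = var / np.maximum(mu, 1e-12)         # Gamma scale, sigma^2 / mu
    return shape, scale
```

    The scale map is exactly the σ²/μ quantity the authors feed into the adaptive mean-squares filtering step.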

  7. Automated Segmentation of the Parotid Gland Based on Atlas Registration and Machine Learning: A Longitudinal MRI Study in Head-and-Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofeng; Wu, Ning; Cheng, Guanghui

    Purpose: To develop an automated magnetic resonance imaging (MRI) parotid segmentation method to monitor radiation-induced parotid gland changes in patients after head and neck radiation therapy (RT). Methods and Materials: The proposed method combines the atlas registration method, which captures the global variation of anatomy, with a machine learning technology, which captures the local statistical features, to automatically segment the parotid glands from the MRIs. The segmentation method consists of 3 major steps. First, an atlas (pre-RT MRI and manually contoured parotid gland mask) is built for each patient. A hybrid deformable image registration is used to map the pre-RT MRI to the post-RT MRI, and the transformation is applied to the pre-RT parotid volume. Second, the kernel support vector machine (SVM) is trained with the subject-specific atlas pair consisting of multiple features (intensity, gradient, and others) from the aligned pre-RT MRI and the transformed parotid volume. Third, the well-trained kernel SVM is used to differentiate the parotid from surrounding tissues in the post-RT MRIs by statistically matching multiple texture features. A longitudinal study of 15 patients undergoing head and neck RT was conducted: baseline MRI was acquired prior to RT, and the post-RT MRIs were acquired at 3-, 6-, and 12-month follow-up examinations. The resulting segmentations were compared with the physicians' manual contours. Results: Successful parotid segmentation was achieved for all 15 patients (42 post-RT MRIs). The average percentage volume difference between the automated segmentations and the physicians' manual contours was 7.98% for the left parotid and 8.12% for the right parotid. The average volume overlap was 91.1% ± 1.6% for the left parotid and 90.5% ± 2.4% for the right parotid. The parotid gland volume reduction at follow-up was 25% at 3 months, 27% at 6 months, and 16% at 12 months. Conclusions: We have validated our automated parotid segmentation algorithm in a longitudinal study. This segmentation method may be useful in future studies to address radiation-induced xerostomia in head and neck radiation therapy.

  8. Scale-based fuzzy connectivity: a novel image segmentation methodology and its validation

    NASA Astrophysics Data System (ADS)

    Saha, Punam K.; Udupa, Jayaram K.

    1999-05-01

    This paper extends a previously reported theory and algorithms for fuzzy connected object definition. It introduces 'object scale' for determining the neighborhood size for defining affinity, the degree of local hanging togetherness between image elements. Object scale allows us to use a varying neighborhood size in different parts of the image. This paper argues that scale-based fuzzy connectivity is natural in object definition and demonstrates that it leads to more effective object segmentation than fuzzy connectedness without scale. Affinity is described as consisting of a homogeneity-based and an object-feature-based component. Families of non-scale-based and scale-based affinity relations are constructed. An effective method for giving a rough estimate of scale at different locations in the image is presented. The original theoretical and algorithmic framework remains more-or-less the same, but considerably improved segmentations result. A quantitative statistical comparison between the non-scale-based and the scale-based methods was made on phantom images generated from patient MR brain studies by first segmenting the objects and then adding noise, blurring, and a background component. Both the statistical and the subjective tests clearly indicate the superiority of the scale-based method in capturing details and in robustness to noise.

  9. Estimation of Total Length of Femur from its Proximal and Distal Segmental Measurements of Disarticulated Femur Bones of Nepalese Population using Regression Equation Method.

    PubMed

    Khanal, Laxman; Shah, Sandip; Koirala, Sarun

    2017-03-01

    Length of long bones is an important contributor for estimating one of the four elements of forensic anthropology, i.e., the stature of the individual. Since physical characteristics differ among population groups, population-specific studies are needed for estimating the total length of the femur from measurements of its segments. Since the femur is not always recovered intact in forensic cases, the aim of this study was to derive regression equations from measurements of proximal and distal fragments in a Nepalese population. A cross-sectional study was done among 60 dry femora (30 from each side), without sex determination, in an anthropometry laboratory. Along with the maximum femoral length, four proximal and four distal segmental measurements were taken following the standard method with the help of an osteometric board, measuring tape and digital Vernier caliper. Bones with gross defects were excluded from the study. Measured values were recorded separately for the right and left sides. The Statistical Package for the Social Sciences (SPSS version 11.5) was used for statistical analysis. The values of the segmental measurements differed between the right and left sides, but the differences were not statistically significant except for the depth of the medial condyle (p=0.02). All the measurements were positively correlated and found to have a linear relationship with the femoral length. With the help of the regression equations, femoral length can be calculated from the segmental measurements, and femoral length can then be used to calculate the stature of the individual. The data collected may contribute to the analysis of forensic bone remains in the study population.
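
    As an illustration of the regression-equation approach (with synthetic numbers, since the paper's population-specific coefficients are not reproduced here), a least-squares line predicting femoral length from one hypothetical segmental measurement:

```python
import numpy as np

rng = np.random.default_rng(0)
seg = rng.uniform(80, 100, size=30)               # hypothetical segment lengths (mm)
femur = 4.2 * seg + 60 + rng.normal(0, 5, 30)     # hypothetical femoral lengths (mm)

slope, intercept = np.polyfit(seg, femur, 1)      # least-squares regression coefficients
r = np.corrcoef(seg, femur)[0, 1]                 # Pearson correlation with femoral length
print(f"femoral length = {slope:.2f} * segment + {intercept:.2f}  (r = {r:.2f})")
```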

  10. Fully Bayesian inference for structural MRI: application to segmentation and statistical analysis of T2-hypointensities.

    PubMed

    Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark

    2013-01-01

    Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: [Formula: see text]; range, [Formula: see text]). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data.

  11. Medial-based deformable models in nonconvex shape-spaces for medical image segmentation.

    PubMed

    McIntosh, Chris; Hamarneh, Ghassan

    2012-01-01

    We explore the application of genetic algorithms (GA) to deformable models through the proposition of a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search-space, by using statistically-based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to achieve the global optima, our method compares favorably to the prevalent optimization techniques, convex/nonconvex gradient-based optimizers and to globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.

  12. Gaussian mixtures on tensor fields for segmentation: applications to medical imaging.

    PubMed

    de Luis-García, Rodrigo; Westin, Carl-Fredrik; Alberola-López, Carlos

    2011-01-01

    In this paper, we introduce a new approach for tensor field segmentation based on the definition of mixtures of Gaussians on tensors as a statistical model. Working over the well-known Geodesic Active Regions segmentation framework, this scheme presents several interesting advantages. First, it yields a more flexible model than the use of a single Gaussian distribution, which enables the method to better adapt to the complexity of the data. Second, it can work directly on tensor-valued images or, through a parallel scheme that processes independently the intensity and the local structure tensor, on scalar textured images. Two different applications have been considered to show the suitability of the proposed method for medical imaging segmentation. First, we address DT-MRI segmentation on a dataset of 32 volumes, showing a successful segmentation of the corpus callosum and favourable comparisons with related approaches in the literature. Second, the segmentation of bones from hand radiographs is studied, and a complete automatic-semiautomatic approach has been developed that makes use of anatomical prior knowledge to produce accurate segmentation results. Copyright © 2010 Elsevier Ltd. All rights reserved.

  13. Automatic Cell Segmentation in Fluorescence Images of Confluent Cell Monolayers Using Multi-object Geometric Deformable Model.

    PubMed

    Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L

    2013-03-13

    With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.

  14. A Comparison of EPI Sampling, Probability Sampling, and Compact Segment Sampling Methods for Micro and Small Enterprises

    PubMed Central

    Chao, Li-Wei; Szrek, Helena; Peltzer, Karl; Ramlagan, Shandir; Fleming, Peter; Leite, Rui; Magerman, Jesswill; Ngwenya, Godfrey B.; Pereira, Nuno Sousa; Behrman, Jere

    2011-01-01

    Finding an efficient method for sampling micro- and small-enterprises (MSEs) for research and statistical reporting purposes is a challenge in developing countries, where registries of MSEs are often nonexistent or outdated. This lack of a sampling frame creates an obstacle in finding a representative sample of MSEs. This study uses computer simulations to draw samples from a census of businesses and non-businesses in the Tshwane Municipality of South Africa, using three different sampling methods: the traditional probability sampling method, the compact segment sampling method, and the World Health Organization’s Expanded Programme on Immunization (EPI) sampling method. Three mechanisms by which the methods could differ are tested, the proximity selection of respondents, the at-home selection of respondents, and the use of inaccurate probability weights. The results highlight the importance of revisits and accurate probability weights, but the lesser effect of proximity selection on the samples’ statistical properties. PMID:22582004

  15. Introduction of statistical information in a syntactic analyzer for document image recognition

    NASA Astrophysics Data System (ADS)

    Maroneze, André O.; Coüasnon, Bertrand; Lemaitre, Aurélie

    2011-01-01

    This paper presents an improvement to document layout analysis systems, offering a possible solution to Sayre's paradox (which states that an element "must be recognized before it can be segmented; and it must be segmented before it can be recognized"). This improvement, based on stochastic parsing, allows integration of statistical information, obtained from recognizers, during syntactic layout analysis. We present how this fusion of numeric and symbolic information in a feedback loop can be applied to syntactic methods to improve document description expressiveness. To limit combinatorial explosion during exploration of solutions, we devised an operator that allows optional activation of the stochastic parsing mechanism. Our evaluation on 1250 handwritten business letters shows this method allows the improvement of global recognition scores.

  16. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    PubMed

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
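
    The kind of energy this describes can be written compactly; the following is a hedged LaTeX sketch with illustrative notation (K_σ a localization kernel, b the bias field, c_i and σ_i² the Gaussian mean and variance of tissue i, Ω_i the regions encoded by the level set functions), not the paper's exact formula:

```latex
E \;=\; \int_{\Omega} \sum_{i=1}^{N} \int_{\Omega_i}
  -K_{\sigma}(x-y)\,\log \mathcal{N}\!\bigl(I(y)\,;\,b(x)\,c_i,\ \sigma_i^{2}\bigr)\,dy\,dx
```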

  17. Computer-Aided Diagnosis of Anterior Segment Eye Abnormalities using Visible Wavelength Image Analysis Based Machine Learning.

    PubMed

    S V, Mahesh Kumar; R, Gunasundari

    2018-06-02

    Eye disease is a major health problem among elderly people. Cataract and corneal arcus are the major abnormalities that exist in the anterior segment eye region of aged people. Hence, computer-aided diagnosis of anterior segment eye abnormalities will be helpful for mass screening and grading in ophthalmology. In this paper, we propose a multiclass computer-aided diagnosis (CAD) system using visible wavelength (VW) eye images to diagnose anterior segment eye abnormalities. In the proposed method, the input VW eye images are pre-processed for specular reflection removal and the iris circle region is segmented using a circular Hough transform (CHT)-based approach. First-order statistical features and wavelet-based features are extracted from the segmented iris circle and used for classification. A support vector machine (SVM) trained with the sequential minimal optimization (SMO) algorithm was used for the classification. In experiments, we used 228 VW eye images belonging to three different classes of anterior segment eye abnormalities. The proposed method achieved a predictive accuracy of 96.96% with 97% sensitivity and 99% specificity. The experimental results show that the proposed method has significant potential for use in clinical applications.
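
    The CHT step maps naturally onto OpenCV's circular Hough transform; a rough sketch with an illustrative file name and parameters (which would need tuning for real VW eye images):

```python
import cv2
import numpy as np

img = cv2.imread("eye.jpg")                          # hypothetical VW eye image
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)   # suppress specular noise
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=200,
                           param1=100, param2=40, minRadius=40, maxRadius=150)
if circles is not None:
    x, y, r = (int(v) for v in circles[0, 0])        # strongest circle: iris boundary
    mask = np.zeros_like(gray)
    cv2.circle(mask, (x, y), r, 255, thickness=-1)   # filled disc over the iris region
    iris = cv2.bitwise_and(img, img, mask=mask)      # segmented iris circle for feature extraction
```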

  18. A voxel-based investigation for MRI-only radiotherapy of the brain using ultra short echo times

    NASA Astrophysics Data System (ADS)

    Edmund, Jens M.; Kjer, Hans M.; Van Leemput, Koen; Hansen, Rasmus H.; Andersen, Jon AL; Andreasen, Daniel

    2014-12-01

    Radiotherapy (RT) based on magnetic resonance imaging (MRI) as the only modality, so-called MRI-only RT, would remove the systematic registration error between MR and computed tomography (CT), and provide co-registered MRI for assessment of treatment response and adaptive RT. Electron densities, however, need to be assigned to the MRI images for dose calculation and patient setup based on digitally reconstructed radiographs (DRRs). Here, we investigate the geometric and dosimetric performance of a number of popular voxel-based methods to generate a so-called pseudo CT (pCT). Five patients receiving cranial irradiation, each with a co-registered MRI and CT scan, were included. An ultra short echo time MRI sequence for bone visualization was used. Six methods were investigated for three popular types of voxel-based approaches: (1) threshold-based segmentation, (2) Bayesian segmentation and (3) statistical regression. Each approach contained two methods. Approach 1 used bulk density assignment of MRI voxels into air, soft tissue and bone based on logical masks and the transverse relaxation time T2 of the bone. Approach 2 used similar bulk density assignments with Bayesian statistics including or excluding additional spatial information. Approach 3 used a statistical regression correlating MRI voxels with their corresponding CT voxels. A similar photon and proton treatment plan was generated for a target positioned between the nasal cavity and the brainstem for all patients. Each method's pCT agreement with the CT was quantified geometrically and dosimetrically and compared with the other methods, using both a number of reported metrics and some novel metrics introduced here. The best geometrical agreement with CT was obtained with the statistical regression methods, which performed significantly better than the threshold and Bayesian segmentation methods (excluding spatial information). All methods agreed significantly better with CT than a reference water MRI comparison. The mean dosimetric deviation for photons and protons compared to the CT was about 2% and highest in the gradient dose region of the brainstem. Both the threshold-based method and the statistical regression methods showed the highest dosimetric agreement. Generation of pCTs using statistical regression seems to be the most promising candidate for MRI-only RT of the brain. Further, the total amount of different tissues needs to be taken into account for dosimetric considerations regardless of their correct geometrical position.
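
    Approach (1) can be sketched in a few lines (illustrative only: the thresholds, the echo-ratio bone criterion, and the bulk HU values are placeholders, not the study's calibrated settings):

```python
import numpy as np

def pseudo_ct(echo1, echo2, air_thr=50.0, bone_ratio=1.5):
    """Bulk-density pCT from two UTE echoes: air / soft tissue / bone masks."""
    pct = np.zeros(echo1.shape)                 # default: soft tissue ~ 0 HU
    air = echo1 < air_thr                       # low UTE signal -> air
    ratio = echo1 / np.maximum(echo2, 1e-6)     # fast T2* decay shows up as a high echo ratio
    bone = ~air & (ratio > bone_ratio)          # short-T2* voxels -> bone
    pct[air] = -1000.0
    pct[bone] = 700.0                           # single bulk bone value
    return pct
```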

  19. SEGMENTATION OF MITOCHONDRIA IN ELECTRON MICROSCOPY IMAGES USING ALGEBRAIC CURVES.

    PubMed

    Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga

    2013-01-01

    High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.
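
    The learning stage reduces to fitting a classifier on per-patch feature vectors (algebraic-curve shape coefficients concatenated with texture features); the sketch below uses placeholder features and scikit-learn's random forest, standing in for the paper's trained classifier:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))            # placeholder shape + texture feature vectors
y = rng.integers(0, 2, size=1000)          # 1 = patch centered on a mitochondrion

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X, y)
p_mito = clf.predict_proba(X[:5])[:, 1]    # per-patch mitochondria probabilities
```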

  20. Multi-scale image segmentation method with visual saliency constraints and its application

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Yu, Jie; Sun, Kaimin

    2018-03-01

    Object-based image analysis methods have many advantages over pixel-based methods, so they are one of the current research hotspots. It is very important to obtain image objects by multi-scale image segmentation in order to carry out object-based image analysis. The current popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize, and the object boundaries obtained are accurate. However, the macro statistical characteristics of the image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important, i.e., some specific targets or target groups with particular features warrant more attention than the others. To avoid the problem of over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different can more likely be assigned to one and the same object. In addition, due to the constraint of the visual saliency model, the control over local-macroscopic characteristics can be exerted during the segmentation process for different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and enables priority control over the saliency objects of interest. The method has been used in image quality evaluation, scattered residential area extraction, sparse forest extraction and other applications to verify its validity. All applications showed good results.

  1. Automatic and manual segmentation of healthy retinas using high-definition optical coherence tomography.

    PubMed

    Golbaz, Isabelle; Ahlers, Christian; Goesseringer, Nina; Stock, Geraldine; Geitzenauer, Wolfgang; Prünte, Christian; Schmidt-Erfurth, Ursula Margarethe

    2011-03-01

    This study compared automatic and manual segmentation modalities in the retina of healthy eyes using high-definition optical coherence tomography (HD-OCT). Twenty retinas in 20 healthy individuals were examined using an HD-OCT system (Carl Zeiss Meditec, Inc.). Three-dimensional imaging was performed with an axial resolution of 6 μm at a maximum scanning speed of 25,000 A-scans/second. Volumes of 6 × 6 × 2 mm were scanned. Scans were analysed using a MATLAB-based algorithm and a manual segmentation software system (3D-Doctor). The volume values calculated by the two methods were compared. Statistical analysis revealed a high correlation between automatic and manual modes of segmentation. The automatic mode of measuring retinal volume and the corresponding three-dimensional images provided similar results to the manual segmentation procedure. Both methods were able to visualize retinal and subretinal features accurately. This study compared two methods of assessing retinal volume using HD-OCT scans in healthy retinas. Both methods were able to provide realistic volumetric data when applied to raster scan sets. Manual segmentation methods represent an adequate tool with which to control automated processes and to identify clinically relevant structures, whereas automatic procedures will be needed to obtain data in larger patient populations. © 2009 The Authors. Journal compilation © 2009 Acta Ophthalmol.

  2. Detecting the borders between coding and non-coding DNA regions in prokaryotes based on recursive segmentation and nucleotide doublets statistics

    PubMed Central

    2012-01-01

    Background Detecting the borders between coding and non-coding regions is an essential step in genome annotation, and information entropy measures are useful for describing the signals in genome sequences. However, the accuracy of previous methods of finding borders based on entropy segmentation still needs to be improved. Methods In this study, we first applied a new recursive entropic segmentation method to DNA sequences to get preliminary significant cuts. A 22-symbol alphabet is used to capture the differential composition of nucleotide doublets and stop codon patterns along three phases in both DNA strands. This process requires no prior training datasets. Results Compared with the previous segmentation methods, the experimental results on three bacterial genomes, Rickettsia prowazekii, Borrelia burgdorferi and E. coli, show that our approach improves the accuracy of finding the borders between coding and non-coding regions in DNA sequences. Conclusions This paper presents a new segmentation method for prokaryotes based on Jensen-Rényi divergence with a 22-symbol alphabet. For three bacterial genomes, compared to the A12_JR method, our method raised the accuracy of finding the borders between protein-coding and non-coding regions in DNA sequences. PMID:23282225
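
    Recursive entropic segmentation follows a simple scheme: find the cut that maximizes a divergence between the symbol distributions of the two halves, keep it if significant, and recurse. The sketch below (ours) uses the closely related Jensen-Shannon divergence and a fixed threshold in place of the paper's Jensen-Rényi divergence and significance test:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def js_div(left, right, alphabet):
    """Jensen-Shannon divergence between symbol distributions of two subsequences."""
    pl = np.array([np.count_nonzero(left == a) for a in alphabet]) / len(left)
    pr = np.array([np.count_nonzero(right == a) for a in alphabet]) / len(right)
    w = len(left) / (len(left) + len(right))
    return entropy(w * pl + (1 - w) * pr) - w * entropy(pl) - (1 - w) * entropy(pr)

def segment(seq, lo=0, hi=None, min_len=50, thr=0.02, cuts=None):
    """Recursively cut seq (a NumPy array of symbols) at maximal-divergence points."""
    if hi is None:
        hi, cuts = len(seq), []
    alphabet = np.unique(seq)
    best, best_i = -1.0, None
    for i in range(lo + min_len, hi - min_len):
        d = js_div(seq[lo:i], seq[i:hi], alphabet)
        if d > best:
            best, best_i = d, i
    if best_i is not None and best > thr:    # significant cut: recurse on both halves
        segment(seq, lo, best_i, min_len, thr, cuts)
        cuts.append(best_i)
        segment(seq, best_i, hi, min_len, thr, cuts)
    return cuts
```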

  3. A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image

    NASA Astrophysics Data System (ADS)

    Barat, Christian; Phlypo, Ronald

    2010-12-01

    We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rueegsegger, Michael B.; Bach Cuadra, Meritxell; Pica, Alessia

    Purpose: Ocular anatomy and radiation-associated toxicities provide unique challenges for external beam radiation therapy. For treatment planning, precise modeling of organs at risk and tumor volume are crucial. Development of a precise eye model and automatic adaptation of this model to patients' anatomy remain problematic because of organ shape variability. This work introduces the application of a 3-dimensional (3D) statistical shape model as a novel method for precise eye modeling for external beam radiation therapy of intraocular tumors. Methods and Materials: Manual and automatic segmentations were compared for 17 patients, based on head computed tomography (CT) volume scans. A 3D statistical shape model of the cornea, lens, and sclera as well as of the optic disc position was developed. Furthermore, an active shape model was built to enable automatic fitting of the eye model to CT slice stacks. Cross-validation was performed based on leave-one-out tests for all training shapes by measuring Dice coefficients and mean segmentation errors between automatic segmentation and manual segmentation by an expert. Results: Cross-validation revealed a Dice similarity of 95% ± 2% for the sclera and cornea and 91% ± 2% for the lens. Overall, mean segmentation error was found to be 0.3 ± 0.1 mm. Average segmentation time was 14 ± 2 s on a standard personal computer. Conclusions: Our results show that the solution presented outperforms state-of-the-art methods in terms of accuracy, reliability, and robustness. Moreover, the eye model shape as well as its variability is learned from a training set rather than by making shape assumptions (e.g., as with the spherical or elliptical model). Therefore, the model appears to be capable of modeling nonspherically and nonelliptically shaped eyes.

  5. Statistical optimisation techniques in fatigue signal editing problem

    NASA Astrophysics Data System (ADS)

    Nopiah, Z. M.; Osman, M. H.; Baharin, N.; Abdullah, S.

    2015-02-01

    Success in fatigue signal editing is determined by the level of length reduction without compromising statistical constraints. A great reduction rate can be achieved by removing small amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable amplitude strain data into different segments, whereby the retention of statistical parameters and the vibration energy is considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelling segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.

  7. Importance of reporting segmental bowel preparation scores during colonoscopy in clinical practice

    PubMed Central

    Jain, Deepanshu; Momeni, Mojdeh; Krishnaiah, Mahesh; Anand, Sury; Singhal, Shashideep

    2015-01-01

    AIM: To evaluate the impact of reporting bowel preparation using the Boston Bowel Preparation Scale (BBPS) in clinical practice. METHODS: The study was a prospective observational cohort study which enrolled subjects reporting for screening colonoscopy. All subjects received a gallon of polyethylene glycol as the bowel preparation regimen. After colonoscopy the endoscopists determined the quality of bowel preparation using the BBPS. Segmental scores were combined to calculate the composite BBPS. The site and size of the polyps detected were recorded. Pathology reports were reviewed to determine advanced adenoma detection rates (AADR). Segmental AADRs were calculated and categorized based on the segmental BBPS to determine the differential impact of bowel preparation on AADR. RESULTS: Three hundred and sixty subjects were enrolled in the study, with a mean age of 59.2 years; 36.3% were male and 63.8% female. Four subjects with incomplete colonoscopy due to a BBPS of 0 in any segment were excluded. Based on the composite BBPS, subjects were divided into 3 groups: Group 0 (poor bowel prep, BBPS 0-3), n = 26 (7.3%); Group 1 (suboptimal bowel prep, BBPS 4-6), n = 121 (34%); and Group 2 (adequate bowel prep, BBPS 7-9), n = 209 (58.7%). AADR showed a linear trend from Group 0 through Group 2, with AADRs of 3.8%, 14.8% and 16.7%, respectively. Also seen was a linearly increasing trend in segmental AADR with improvement in segmental BBPS. There were statistically significant differences in AADR between Groups 0 and 2 (3.8% vs 16.7%, P < 0.05), Groups 1 and 2 (14.8% vs 16.7%, P < 0.05) and Groups 0 and 1 (3.8% vs 14.8%, P < 0.05). The χ² test was used to compute P values. CONCLUSION: Segmental AADRs correlate with segmental BBPS. It is thus valuable to report segmental BBPS in colonoscopy reports in clinical practice. PMID:25852286
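
    The group comparisons can be set up in outline with a chi-square test; the contingency counts below are reconstructed from the reported rates (1/26 ≈ 3.8% for Group 0, 35/209 ≈ 16.7% for Group 2) and illustrate the test rather than reproducing the study's raw data or exact P values:

```python
from scipy.stats import chi2_contingency

# rows: Group 0 (poor prep), Group 2 (adequate prep); columns: advanced adenoma yes / no
table = [[1, 25],
         [35, 174]]
# Yates' continuity correction is applied by default for 2x2 tables
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```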

  8. Spectral embedding based active contour (SEAC) for lesion segmentation on breast dynamic contrast enhanced magnetic resonance imaging.

    PubMed

    Agner, Shannon C; Xu, Jun; Madabhushi, Anant

    2013-03-01

    Segmentation of breast lesions on dynamic contrast enhanced (DCE) magnetic resonance imaging (MRI) is the first step in lesion diagnosis in a computer-aided diagnosis framework. Because manual segmentation of such lesions is both time consuming and highly susceptible to human error and issues of reproducibility, an automated lesion segmentation method is highly desirable. Traditional automated image segmentation methods such as boundary-based active contour (AC) models require a strong gradient at the lesion boundary. Even when region-based terms are introduced to an AC model, grayscale image intensities often do not allow for clear definition of foreground and background region statistics. Thus, there is a need to find alternative image representations that might provide (1) strong gradients at the margin of the object of interest (OOI); and (2) larger separation between intensity distributions and region statistics for the foreground and background, which are necessary to halt evolution of the AC model upon reaching the border of the OOI. In this paper, the authors introduce a spectral embedding (SE) based AC (SEAC) for lesion segmentation on breast DCE-MRI. SE, a nonlinear dimensionality reduction scheme, is applied to the DCE time series in a voxelwise fashion to reduce several time point images to a single parametric image where every voxel is characterized by the three dominant eigenvectors. This parametric eigenvector image (PrEIm) representation allows for better capture of image region statistics and stronger gradients for use with a hybrid AC model, which is driven by both boundary and region information. The authors compare SEAC to ACs that employ fuzzy c-means (FCM) and principal component analysis (PCA) as alternative image representations. Segmentation performance was evaluated by boundary and region metrics as well as by comparing lesion classification using morphological features from SEAC, PCA+AC, and FCM+AC. On a cohort of 50 breast DCE-MRI studies, PrEIm yielded overall better region and boundary-based statistics compared to the original DCE-MR image, FCM, and PCA based image representations. Additionally, SEAC outperformed a hybrid AC applied to both PCA and FCM image representations. Mean dice similarity coefficient (DSC) for SEAC was significantly better (DSC = 0.74 ± 0.21) than FCM+AC (DSC = 0.50 ± 0.32) and similar to PCA+AC (DSC = 0.73 ± 0.22). Boundary-based metrics of mean absolute difference and Hausdorff distance followed the same trends. Of the automated segmentation methods, breast lesion classification based on morphologic features derived from SEAC segmentation using a support vector machine classifier also performed better (AUC = 0.67 ± 0.05; p < 0.05) than FCM+AC (AUC = 0.50 ± 0.07), and PCA+AC (AUC = 0.49 ± 0.07). In this work, the authors presented SEAC, an accurate, general-purpose AC segmentation tool that could be applied to any imaging domain that employs time series data. SE allows for projection of time series data into a PrEIm representation so that every voxel is characterized by the dominant eigenvectors, capturing the global and local time-intensity curve similarities in the data. This PrEIm allows for the calculation of strong tensor gradients and better region statistics than the original image intensities or alternative image representations such as PCA and FCM. The PrEIm also allows for building a more accurate hybrid AC scheme.

  9. Weakly supervised image semantic segmentation based on clustering superpixels

    NASA Astrophysics Data System (ADS)

    Yan, Xiong; Liu, Xiaohua

    2018-04-01

    In this paper, we propose an image semantic segmentation model which is trained from image-level labeled images. The proposed model starts with superpixel segmentation, and features of the superpixels are extracted by a trained CNN. We introduce a superpixel-based graph, followed by applying a graph partition method to group correlated superpixels into clusters. For the acquisition of inter-label correlations between the image-level labels in the dataset, we not only utilize label co-occurrence statistics but also exploit visual contextual cues simultaneously. At last, we formulate the task of mapping appropriate image-level labels to the detected clusters as a convex minimization problem. Experimental results on the MSRC-21 dataset and the LabelMe dataset show that the proposed method performs better than most of the weakly supervised methods and is even comparable to fully supervised methods.

  10. Improving cerebellar segmentation with statistical fusion

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system well known to be involved with motor coordination and increasingly recognized roles in cognition and planning. Recent work in multiatlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severely abnormal cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.
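
    For context, the original SIMPLE scheme that Non-Local SIMPLE generalizes can be sketched as an iterate-and-prune fusion loop (a minimal illustration, not the paper's patch-based performance model):

```python
import numpy as np

def dice(a, b):
    return 2.0 * np.sum(a & b) / max(np.sum(a) + np.sum(b), 1)

def simple_fusion(atlas_labels, iters=5, keep=0.9):
    """atlas_labels: (n_atlases, n_voxels) boolean label maps registered to the target."""
    active = np.ones(len(atlas_labels), dtype=bool)
    for _ in range(iters):
        consensus = atlas_labels[active].mean(axis=0) >= 0.5        # majority vote of active atlases
        scores = np.array([dice(l, consensus) for l in atlas_labels])
        active &= scores >= keep * scores[active].max()             # prune poor performers, re-fuse
    return consensus
```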

  11. Characterizing protein conformations by correlation analysis of coarse-grained contact matrices.

    PubMed

    Lindsay, Richard J; Siess, Jan; Lohry, David P; McGee, Trevor S; Ritchie, Jordan S; Johnson, Quentin R; Shen, Tongye

    2018-01-14

    We have developed a method to capture the essential conformational dynamics of folded biopolymers using statistical analysis of coarse-grained segment-segment contacts. Previously, the residue-residue contact analysis of simulation trajectories was successfully applied to the detection of conformational switching motions in biomolecular complexes. However, the application to large protein systems (larger than 1000 amino acid residues) is challenging using the description of residue contacts. Also, the residue-based method cannot be used to compare proteins with different sequences. To expand the scope of the method, we have tested several coarse-graining schemes that group a collection of consecutive residues into a segment. The definition of these segments may be derived from structural and sequence information, while the interaction strength of the coarse-grained segment-segment contacts is a function of the residue-residue contacts. We then perform covariance calculations on these coarse-grained contact matrices. We monitored how well the principal components of the contact matrices are preserved using various rendering functions. The new method was demonstrated to assist the reduction of the degrees of freedom for describing the conformation space, and it potentially allows for the analysis of a system that is approximately tenfold larger compared with the corresponding residue contact-based method. This method can also render a family of similar proteins into the same conformational space, and thus can be used to compare the structures of proteins with different sequences.

  13. Statistical shape modeling of human cochlea: alignment and principal component analysis

    NASA Astrophysics Data System (ADS)

    Poznyakovskiy, Anton A.; Zahnert, Thomas; Fischer, Björn; Lasurashvili, Nikoloz; Kalaidzidis, Yannis; Mürbe, Dirk

    2013-02-01

    The modeling of the cochlear labyrinth in living subjects is hampered by the insufficient resolution of available clinical imaging methods, which is typically no finer than 125 μm. This is too crude to record the position of the basilar membrane and, as a result, to distinguish even the scala tympani from the other scalae. This problem can be avoided by means of atlas-based segmentation. Specimens can endure higher radiation loads and consequently provide better-resolved images. The resulting surface can be used as the seed for atlas-based segmentation. To serve this purpose, we have developed a statistical shape model (SSM) of the human scala tympani based on segmentations obtained from 10 μCT image stacks. After segmentation, we aligned the resulting surfaces using Procrustes alignment. This algorithm was slightly modified to accommodate individual models whose nodes do not necessarily correspond to salient features and vary in number between models. We established correspondence by mutual proximity between nodes. Rather than using the standard Euclidean norm, we applied an alternative logarithmic norm to improve outlier treatment. The minimization was done using the BFGS method. We also split the surface nodes along an octree to reduce computation cost. Subsequently, we performed principal component analysis of the training set with the Jacobi eigenvalue algorithm. We expect the resulting method to help not only in acquiring a better understanding of interindividual variations in cochlear anatomy, but also as a step towards individual models for pre-operative diagnostics prior to cochlear implant insertion.
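
    The standard Euclidean-norm Procrustes alignment that the authors modify is available off the shelf; a small sketch with synthetic shapes (and without the paper's logarithmic norm or proximity-based correspondences):

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
ref = rng.normal(size=(100, 3))                       # reference surface nodes
Q = np.linalg.qr(rng.normal(size=(3, 3)))[0]          # random orthogonal matrix
shape = 1.7 * ref @ Q + 0.3                           # rotated, scaled, translated copy
mtx1, mtx2, disparity = procrustes(ref, shape)        # standardized, aligned shapes
print(f"post-alignment disparity: {disparity:.2e}")   # ~0 for an exact similarity transform
```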

  14. Quantification and Statistical Analysis Methods for Vessel Wall Components from Stained Images with Masson's Trichrome

    PubMed Central

    Hernández-Morera, Pablo; Castaño-González, Irene; Travieso-González, Carlos M.; Mompeó-Corredera, Blanca; Ortega-Santana, Francisco

    2016-01-01

    Purpose To develop a digital image processing method to quantify structural components (smooth muscle fibers and extracellular matrix) in the vessel wall stained with Masson’s trichrome, and a statistical method suitable for small sample sizes to analyze the results previously obtained. Methods The quantification method comprises two stages. The pre-processing stage improves tissue image appearance and delimits the vessel wall area. In the feature extraction stage, the vessel wall components are segmented by grouping pixels with a similar color. The area of each component is calculated by normalizing the number of pixels of each group by the vessel wall area. Statistical analyses are implemented by permutation tests, based on resampling without replacement from the set of observed data to obtain a sampling distribution of an estimator. The implementation can be parallelized on a multicore machine to reduce execution time. Results The methods have been tested on 48 vessel wall samples of the internal saphenous vein stained with Masson’s trichrome. The results show that the segmented areas are consistent with the perception of a team of doctors and demonstrate good correlation between the expert judgments and the measured parameters for evaluating vessel wall changes. Conclusion The proposed methodology offers a powerful tool to quantify some components of the vessel wall. It is more objective, sensitive and accurate than the biochemical and qualitative methods traditionally used. Permutation tests are suitable statistical techniques for analyzing the numerical measurements obtained when the underlying assumptions of other statistical techniques are not met. PMID:26761643
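    A minimal sketch of the permutation-test idea described above, for two small groups of measurements: the group labels are repeatedly reshuffled (resampling without replacement) to build the sampling distribution of the difference in means. The serial loop shown here is what the paper parallelizes across cores.

        import numpy as np

        def permutation_test(a, b, n_perm=10000, seed=0):
            rng = np.random.default_rng(seed)
            observed = np.mean(a) - np.mean(b)
            pooled = np.concatenate([a, b])
            count = 0
            for _ in range(n_perm):
                rng.shuffle(pooled)            # permute labels, no replacement
                diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
                if abs(diff) >= abs(observed):
                    count += 1
            return (count + 1) / (n_perm + 1)  # two-sided p-value estimate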

  15. Fully automatic registration and segmentation of first-pass myocardial perfusion MR image sequences.

    PubMed

    Gupta, Vikas; Hendriks, Emile A; Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2010-11-01

    Derivation of diagnostically relevant parameters from first-pass myocardial perfusion magnetic resonance images involves the tedious and time-consuming manual segmentation of the myocardium in a large number of images. To reduce the manual interaction and expedite the perfusion analysis, we propose an automatic registration and segmentation method for the derivation of perfusion-linked parameters. Complete automation was achieved by first registering misaligned images using a method based on independent component analysis, and then using the registered data to automatically segment the myocardium with active appearance models. We used 18 perfusion studies (100 images per study) for validation, in which the automatically obtained (AO) contours were compared with expert-drawn contours on the basis of point-to-curve error, Dice index, and relative perfusion upslope in the myocardium. Visual inspection revealed successful segmentation in 15 out of 18 studies. Comparison of the AO contours with expert-drawn contours yielded 2.23 ± 0.53 mm and 0.91 ± 0.02 as point-to-curve error and Dice index, respectively. The average difference between manually and automatically obtained relative upslope parameters was found to be statistically insignificant (P = .37). Moreover, the analysis time per slice was reduced from 20 minutes (manual) to 1.5 minutes (automatic). We proposed an automatic method that significantly reduced the time required for analysis of first-pass cardiac magnetic resonance perfusion images. The robustness and accuracy of the proposed method were demonstrated by the high spatial correspondence and statistically insignificant difference in perfusion parameters when AO contours were compared with expert-drawn contours. Copyright © 2010 AUR. Published by Elsevier Inc. All rights reserved.

  16. Familiar units prevail over statistical cues in word segmentation.

    PubMed

    Poulin-Charronnat, Bénédicte; Perruchet, Pierre; Tillmann, Barbara; Peereman, Ronald

    2017-09-01

    In language acquisition research, the prevailing position is that listeners exploit statistical cues, in particular transitional probabilities between syllables, to discover the words of a language. However, other cues are also involved in word discovery. Assessing the weight learners give to these different cues leads to a better understanding of the processes underlying speech segmentation. The present study evaluated whether adult learners preferentially use known units or statistical cues for segmenting continuous speech. Before the exposure phase, participants were familiarized with part-words of a three-word artificial language. This design allowed the dissociation of the influence of statistical cues and familiar units, with statistical cues favoring word segmentation and familiar units favoring (nonoptimal) part-word segmentation. In Experiment 1, performance in a two-alternative forced choice (2AFC) task between words and part-words revealed part-word segmentation (even though part-words were less cohesive in terms of transitional probabilities and less frequent than words). By contrast, an unfamiliarized group exhibited word segmentation, as usually observed in standard conditions. Experiment 2 used a syllable-detection task to remove the likely contamination of performance by memory and strategy effects in the 2AFC task. Overall, the results suggest that familiar units overrode statistical cues, ultimately questioning the need for mechanisms that compute transitional probabilities (TPs) in natural language speech segmentation.
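    For readers unfamiliar with transitional probabilities (TPs), a minimal sketch: TP(x→y) = freq(xy) / freq(x), and TP-based segmentation posits a boundary at local TP dips. The syllable stream below is illustrative, not the stimulus set of the study.

        from collections import Counter

        def transitional_probs(syllables):
            unigrams = Counter(syllables)
            bigrams = Counter(zip(syllables, syllables[1:]))
            return {(x, y): c / unigrams[x] for (x, y), c in bigrams.items()}

        stream = "tu pi ro go la bu bi da ku tu pi ro".split()
        tps = transitional_probs(stream)
        for i in range(1, len(stream) - 2):
            left = tps[(stream[i - 1], stream[i])]
            mid = tps[(stream[i], stream[i + 1])]
            right = tps[(stream[i + 1], stream[i + 2])]
            if mid < left and mid < right:       # TP dip -> word boundary
                print("boundary after", stream[i])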

  17. Using Statistical Process Control Charts to Study Stuttering Frequency Variability during a Single Day

    ERIC Educational Resources Information Center

    Karimi, Hamid; O'Brian, Sue; Onslow, Mark; Jones, Mark; Menzies, Ross; Packman, Ann

    2013-01-01

    Purpose: Stuttering varies between and within speaking situations. In this study, the authors used statistical process control charts with 10 case studies to investigate variability of stuttering frequency. Method: Participants were 10 adults who stutter. The authors counted the percentage of syllables stuttered (%SS) for segments of their speech…

  18. Body Composition Assessment in Axial CT Images Using FEM-Based Automatic Segmentation of Skeletal Muscle.

    PubMed

    Popuri, Karteek; Cobzas, Dana; Esfandiari, Nina; Baracos, Vickie; Jägersand, Martin

    2016-02-01

    The proportions of muscle and fat tissues in the human body, referred to as body composition, constitute a vital measurement for cancer patients. Body composition has recently been linked to patient survival and to the onset/recurrence of several types of cancer in numerous cancer research studies. This paper introduces a fully automatic framework for the segmentation of muscle and fat tissues from CT images to estimate body composition. We developed a novel finite element method (FEM) deformable model that incorporates a priori shape information via a statistical deformation model (SDM) within the template-based segmentation framework. The proposed method was validated on 1000 abdominal and 530 thoracic CT images, and we obtained very good segmentation results with Jaccard scores in excess of 90% for both the muscle and fat regions.

  19. Infants Segment Continuous Events Using Transitional Probabilities

    ERIC Educational Resources Information Center

    Stahl, Aimee E.; Romberg, Alexa R.; Roseberry, Sarah; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathryn

    2014-01-01

    Throughout their 1st year, infants adeptly detect statistical structure in their environment. However, little is known about whether statistical learning is a primary mechanism for event segmentation. This study directly tests whether statistical learning alone is sufficient to segment continuous events. Twenty-eight 7- to 9-month-old infants…

  20. Flexibility in Statistical Word Segmentation: Finding Words in Foreign Speech

    ERIC Educational Resources Information Center

    Graf Estes, Katharine; Gluck, Stephanie Chen-Wu; Bastos, Carolina

    2015-01-01

    The present experiments investigated the flexibility of statistical word segmentation. There is ample evidence that infants can use statistical cues (e.g., syllable transitional probabilities) to segment fluent speech. However, it is unclear how effectively infants track these patterns in unfamiliar phonological systems. We examined whether…

  1. Probabilistic Air Segmentation and Sparse Regression Estimated Pseudo CT for PET/MR Attenuation Correction

    PubMed Central

    Chen, Yasheng; Juttukonda, Meher; Su, Yi; Benzinger, Tammie; Rubin, Brian G.; Lee, Yueh Z.; Lin, Weili; Shen, Dinggang; Lalush, David

    2015-01-01

    Purpose To develop a positron emission tomography (PET) attenuation correction method for brain PET/magnetic resonance (MR) imaging by estimating pseudo computed tomographic (CT) images from T1-weighted MR and atlas CT images. Materials and Methods In this institutional review board–approved and HIPAA-compliant study, PET/MR/CT images were acquired in 20 subjects after obtaining written consent. A probabilistic air segmentation and sparse regression (PASSR) method was developed for pseudo CT estimation. Air segmentation was performed with assistance from a probabilistic air map. For nonair regions, the pseudo CT numbers were estimated via sparse regression by using atlas MR patches. The mean absolute percentage error (MAPE) on PET images was computed as the normalized mean absolute difference in PET signal intensity between a method and the reference standard continuous CT attenuation correction method. Friedman analysis of variance and Wilcoxon matched-pairs tests were performed for statistical comparison of MAPE between the PASSR method and the Dixon segmentation, CT segmentation, and population-averaged CT atlas (mean atlas) methods. Results The PASSR method yielded a mean MAPE ± standard deviation of 2.42% ± 1.0, 3.28% ± 0.93, and 2.16% ± 1.75 in the whole brain, gray matter, and white matter, respectively, significantly lower than the Dixon, CT segmentation, and mean atlas values (P < .01). Moreover, 68.0% ± 16.5, 85.8% ± 12.9, and 96.0% ± 2.5 of whole-brain volume had percentage errors within ±2%, ±5%, and ±10%, respectively, when using PASSR, significantly higher proportions than with the other methods (P < .01). Conclusion PASSR outperformed the Dixon, CT segmentation, and mean atlas methods by reducing PET error owing to attenuation correction. © RSNA, 2014 PMID:25521778
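    One plausible reading of the MAPE defined above, sketched for voxel-aligned arrays; the exact normalization used in the paper may differ, and all names are illustrative.

        import numpy as np

        def mape(pet_method, pet_reference, mask):
            # mean over masked voxels of the absolute difference, normalized
            # by the reference intensity (percent); assumes positive reference
            diff = np.abs(pet_method[mask] - pet_reference[mask])
            return 100.0 * np.mean(diff / pet_reference[mask])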

  2. Joint volumetric extraction and enhancement of vasculature from low-SNR 3-D fluorescence microscopy images.

    PubMed

    Almasi, Sepideh; Ben-Zvi, Ayal; Lacoste, Baptiste; Gu, Chenghua; Miller, Eric L; Xu, Xiaoyin

    2017-03-01

    To simultaneously overcome the challenges imposed by the nature of optical imaging, characterized by a range of artifacts including a space-varying signal-to-noise ratio (SNR), scattered light, and non-uniform illumination, we developed a novel method that segments the 3-D vasculature directly from the original fluorescence microscopy images, eliminating the need for the pre- and post-processing steps, such as noise removal and segmentation refinement, used by the majority of segmentation techniques. Our method comprises two stages: initialization, and constrained recovery and enhancement. The initialization approach is fully automated using features derived from bi-scale statistical measures and produces seed points robust to non-uniform illumination, low SNR, and local structural variations. The algorithm achieves segmentation via an iterative approach that extracts the structure through voting on feature vectors formed by distance, local intensity gradient, and median measures. Qualitative and quantitative analysis of the experimental results obtained from synthetic and real data proves the efficacy of this method in comparison to state-of-the-art enhancing-segmenting methods. The algorithmic simplicity, freedom from a priori probabilistic information about the noise, and the structural definition give this algorithm a wide potential range of applications where, for example, structural complexity significantly complicates the segmentation problem.

  3. Training models of anatomic shape variability

    PubMed Central

    Merck, Derek; Tracton, Gregg; Saboo, Rohit; Levy, Joshua; Chaney, Edward; Pizer, Stephen; Joshi, Sarang

    2008-01-01

    Learning probability distributions of the shape of anatomic structures requires fitting shape representations to human expert segmentations from training sets of medical images. The quality of statistical segmentation and registration methods is directly related to the quality of this initial shape fitting, yet the subject is largely overlooked or described in an ad hoc way. This article presents a set of general principles to guide such training. Our novel method is to jointly estimate both the best geometric model for any given image and the shape distribution for the entire population of training images by iteratively relaxing purely geometric constraints in favor of the converging shape probabilities as the fitted objects converge to their target segmentations. The geometric constraints are carefully crafted both to obtain legal, non-self-interpenetrating shapes and to impose the model-to-model correspondences required for useful statistical analysis. The paper closes with example applications of the method to synthetic and real patient CT image sets, including same-patient male pelvis and head-and-neck images, and cross-patient kidney and brain images. Finally, we outline how this shape training serves as the basis for our approach to IGRT/ART. PMID:18777919

  4. Automatic sleep staging using empirical mode decomposition, discrete wavelet transform, time-domain, and nonlinear dynamics features of heart rate variability signals.

    PubMed

    Ebrahimi, Farideh; Setarehdan, Seyed-Kamaledin; Ayala-Moyeda, Jose; Nazeran, Homer

    2013-10-01

    The conventional method for sleep staging is to analyze polysomnograms (PSGs) recorded in a sleep lab. The electroencephalogram (EEG) is one of the most important signals in PSGs, but recording and analysis of this signal presents a number of technical challenges, especially at home. Instead, electrocardiograms (ECGs) are much easier to record and may offer an attractive alternative for home sleep monitoring. The heart rate variability (HRV) signal proves suitable for automatic sleep staging. Thirty PSGs from the Sleep Heart Health Study (SHHS) database were used. Three feature sets were extracted from 5- and 0.5-min HRV segments: time-domain features, nonlinear-dynamics features and time-frequency features. The latter were obtained using empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods. Normalized energies in important frequency bands of the HRV signals were computed using time-frequency methods. ANOVA and t-tests were used for statistical evaluations. Automatic sleep staging was based on HRV signal features. ANOVA followed by a post hoc Bonferroni test was used for individual feature assessment. Most features were beneficial for sleep staging. A t-test was used to compare the means of extracted features in 5- and 0.5-min HRV segments. The results showed that the extracted feature means were statistically similar for a small number of features. A separability measure showed that time-frequency features, especially EMD features, had larger separation than others. There was not a sizable difference in the separability of linear features between 5- and 0.5-min HRV segments, but the separability of nonlinear features, especially EMD features, decreased in 0.5-min HRV segments. HRV signal features were classified by linear discriminant (LD) and quadratic discriminant (QD) methods. Classification results based on features from 5-min segments surpassed those obtained from 0.5-min segments. The best result was obtained from features using 5-min HRV segments classified by the LD classifier. A combination of linear/nonlinear features from HRV signals is effective in automatic sleep staging. Moreover, time-frequency features are more informative than others. In addition, a separability measure and classification results showed that HRV signal features, especially nonlinear features, extracted from 5-min segments are more discriminative than those from 0.5-min segments in automatic sleep staging. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
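    A minimal sketch of two of the three feature families (EMD features omitted), assuming PyWavelets is installed and rr holds the RR intervals, in seconds, of one 5- or 0.5-min segment; parameters are illustrative.

        import numpy as np
        import pywt  # PyWavelets

        def time_domain_features(rr):
            d = np.diff(rr)
            return {
                "mean_rr": rr.mean(),
                "sdnn": rr.std(ddof=1),              # overall variability
                "rmssd": np.sqrt(np.mean(d**2)),     # short-term variability
                "pnn50": np.mean(np.abs(d) > 0.05),  # fraction of diffs > 50 ms
            }

        def dwt_band_energies(rr, wavelet="db4", level=4):
            # normalized energy per DWT band (approximation + detail levels)
            coeffs = pywt.wavedec(rr, wavelet, level=level)
            energies = np.array([np.sum(c**2) for c in coeffs])
            return energies / energies.sum()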

  5. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    PubMed

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of the supervised methods. In this context, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Among the non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.
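    The unsupervised GMM step can be sketched generically with scikit-learn (a stand-in, not the authors' implementation); note that the cluster labels are anonymous and a postprocess, as in the paper, must still identify which class is tumour.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def gmm_segment(features, n_classes=5):
            # features: (n_voxels, n_modalities) multiparametric MR intensities
            gmm = GaussianMixture(n_components=n_classes,
                                  covariance_type="full", random_state=0)
            return gmm.fit_predict(features)  # anonymous class label per voxel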

  6. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.

    PubMed

    Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis

    2017-10-16

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniformly sampled spatio-temporal data structure characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatial-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently across all six water quality variables, which points out the relevance of including a priori knowledge of the problem.
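    A minimal sketch of SVR with a problem-specific Mercer kernel via scikit-learn's precomputed-kernel interface; the RBF stand-in below is illustrative, whereas the paper derives its kernels from the Mahalanobis spatio-temporal covariance or the estimated autocorrelation function.

        import numpy as np
        from sklearn.svm import SVR

        def rbf_kernel(A, B, gamma=0.5):
            # any symmetric positive semi-definite kernel works here
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def fit_svr(X_train, y_train, kernel_fn=rbf_kernel):
            K = kernel_fn(X_train, X_train)           # Gram matrix
            return SVR(kernel="precomputed").fit(K, y_train)

        def predict_svr(model, X_new, X_train, kernel_fn=rbf_kernel):
            return model.predict(kernel_fn(X_new, X_train))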

  7. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods

    PubMed Central

    Vizcaíno, Iván P.; Muñoz-Romero, Sergio; Cumbal, Luis H.

    2017-01-01

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniformly sampled spatio-temporal data structure characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer’s kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer’s kernel given by either the Mahalanobis spatial-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently across all six water quality variables, which points out the relevance of including a priori knowledge of the problem. PMID:29035333

  8. Epidermis area detection for immunofluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dovganich, Andrey; Krylov, Andrey; Nasonov, Andrey; Makhneva, Natalia

    2018-04-01

    We propose a novel image segmentation method for immunofluorescence microscopy images of skin tissue for the diagnosis of various skin diseases. The segmentation is based on machine learning algorithms. The feature vector comprises three groups of features: statistical features, Laws' texture energy measures and local binary patterns. The images are preprocessed for better learning. Different machine learning algorithms have been tested, and the best results were obtained with the random forest algorithm. We use the proposed method to detect the epidermis region as part of a pemphigus diagnosis system.
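    A generic sketch of the per-pixel feature/classifier idea (Laws' texture energies omitted for brevity); window size, LBP parameters and classifier settings are assumptions, not the paper's values.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from skimage.feature import local_binary_pattern
        from sklearn.ensemble import RandomForestClassifier

        def pixel_features(img):
            img = img.astype(float)
            mean = uniform_filter(img, size=9)                 # local mean
            sq_mean = uniform_filter(img**2, size=9)
            std = np.sqrt(np.maximum(sq_mean - mean**2, 0))    # local std dev
            lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
            return np.stack([img, mean, std, lbp], axis=-1).reshape(-1, 4)

        def train_classifier(img, labels):
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(pixel_features(img), labels.ravel())
            return clf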

  9. Discrete wavelet-aided delineation of PCG signal events via analysis of an area curve length-based decision statistic.

    PubMed

    Homaeinezhad, M R; Atyabi, S A; Daneshvar, E; Ghaffari, A; Tahmasebi, M

    2010-12-01

    The aim of this study is to describe a robust unified framework for segmentation of phonocardiogram (PCG) signal sounds based on false-alarm-probability (FAP)-bounded segmentation of a properly calculated detection measure. To this end, the original PCG signal is first appropriately pre-processed, and a fixed-sample-size sliding window is then moved over the pre-processed signal. At each window position, the area under the excerpted segment is multiplied by its curve length to generate the Area Curve Length (ACL) metric, which is used as the segmentation decision statistic (DS). Afterwards, histogram parameters of the nonlinearly enhanced DS metric are used to regulate the α-level Neyman-Pearson classifier for FAP-bounded delineation of the PCG events. The proposed method was applied to all 85 records of the Nursing Student Heart Sounds database (NSHSDB), including stenosis, insufficiency, regurgitation, gallop, septal defect, split sound, rumble, murmur, click, friction rub and snap disorders with different sampling frequencies. The method was also applied to records obtained from an electronic stethoscope board designed for this study in the presence of high-level power-line noise and external disturbing sounds; no false positive (FP) or false negative (FN) errors were detected. High noise robustness, acceptable detection-segmentation accuracy for PCG events in various cardiac conditions, and independence from the acquisition sampling frequency are the principal virtues of the proposed ACL-based PCG event detection-segmentation algorithm.
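    A minimal sketch of the ACL decision statistic, assuming unit sample spacing and a rectified area (the record does not specify the rectification, so that detail and the window size are assumptions).

        import numpy as np

        def acl_statistic(x, win=64):
            ds = np.zeros(len(x) - win)
            for k in range(len(ds)):
                seg = x[k:k + win]
                area = np.abs(seg).sum()                        # area under segment
                curve_len = np.sqrt(1 + np.diff(seg)**2).sum()  # discrete curve length
                ds[k] = area * curve_len
            return ds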

  10. A Character Level Based and Word Level Based Approach for Chinese-Vietnamese Machine Translation.

    PubMed

    Tran, Phuoc; Dinh, Dien; Nguyen, Hien T

    2016-01-01

    Chinese and Vietnamese are both isolating languages; that is, words are not delimited by spaces. In machine translation, word segmentation is often done first when translating from Chinese or Vietnamese into other languages (typically English) and vice versa. However, whether words should be segmented is an open question when translating between two languages that both omit spaces between words, such as Chinese and Vietnamese. Since Chinese-Vietnamese is a low-resource language pair, the sparse-data problem is pronounced in translation systems for this pair, which makes the segmentation decision all the more important. In this paper, we propose a new method for translating Chinese to Vietnamese based on a combination of the advantages of character-level and word-level translation. A hybrid approach that combines statistics and rules is used at the word level, while purely statistical translation is used at the character level. The experimental results showed that our method improved the performance of machine translation over that of character-level or word-level translation alone.

  11. Automated scoring of regional lung perfusion in children from contrast enhanced 3D MRI

    NASA Astrophysics Data System (ADS)

    Heimann, Tobias; Eichinger, Monika; Bauman, Grzegorz; Bischoff, Arved; Puderbach, Michael; Meinzer, Hans-Peter

    2012-03-01

    MRI perfusion images give information about regional lung function and can be used to detect pulmonary pathologies in cystic fibrosis (CF) children. However, manual assessment of the percentage of pathologic tissue in defined lung subvolumes features large inter- and intra-observer variation, making it difficult to determine disease progression consistently. We present an automated method to calculate a regional score for this purpose. First, lungs are located based on thresholding and morphological operations. Second, statistical shape models of left and right children's lungs are initialized at the determined locations and used to precisely segment morphological images. Segmentation results are transferred to perfusion maps and employed as masks to calculate perfusion statistics. An automated threshold to determine pathologic tissue is calculated and used to determine accurate regional scores. We evaluated the method on 10 MRI images and achieved an average surface distance of less than 1.5 mm compared to manual reference segmentations. Pathologic tissue was detected correctly in 9 cases. The approach seems suitable for detecting early signs of CF and monitoring response to therapy.

  12. Comparative study on the performance of textural image features for active contour segmentation.

    PubMed

    Moraru, Luminita; Moldovanu, Simona

    2012-07-01

    We present a computerized method for the semi-automatic detection of contours in ultrasound images. The novelty of our study is the introduction of a fast and efficient image function for parametric active contour models. This new function combines gray-level information with a first-order statistical feature, the standard deviation parameter. In a comprehensive study, the developed algorithm and the efficiency of segmentation were first tested on synthetic images. Tests were also performed on breast and liver ultrasound images. The proposed method was compared with the watershed approach to show its efficiency. The performance of the segmentation was estimated using the area error rate. Using the standard deviation textural feature and a 5×5 kernel, our curve evolution was able to produce results close to the minimal area error rate (namely 8.88% for breast images and 10.82% for liver images). The image resolution was evaluated using the contrast-to-gradient method. The experiments showed promising segmentation results.
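    The first-order statistical feature named above can be sketched as a local standard-deviation map computed with the 5×5 kernel; how it is weighted against the gray-level term inside the contour energy is not reproduced here.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_std(img, size=5):
            img = img.astype(float)
            mean = uniform_filter(img, size=size)
            sq_mean = uniform_filter(img**2, size=size)
            return np.sqrt(np.maximum(sq_mean - mean**2, 0.0))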

  13. GLISTR: Glioma Image Segmentation and Registration

    PubMed Central

    Pohl, Kilian M.; Bilello, Michel; Cirillo, Luigi; Biros, George; Melhem, Elias R.; Davatzikos, Christos

    2015-01-01

    We present a generative approach for simultaneously registering a probabilistic atlas of a healthy population to brain magnetic resonance (MR) scans showing glioma and segmenting the scans into tumor as well as healthy tissue labels. The proposed method is based on the expectation maximization (EM) algorithm and incorporates a glioma growth model for atlas seeding, a process which modifies the original atlas into one with tumor and edema adapted to best match a given set of patient images. The modified atlas is registered into the patient space and utilized for estimating the posterior probabilities of various tissue labels. EM iteratively refines the estimates of the posterior probabilities of tissue labels, the deformation field and the tumor growth model parameters. Hence, in addition to segmentation, the proposed method results in atlas registration and a low-dimensional description of the patient scans through estimation of tumor model parameters. We validate the method by automatically segmenting 10 MR scans and comparing the results to those produced by clinical experts and two state-of-the-art methods. The resulting segmentations of tumor and edema outperform the results of the reference methods, and achieve an accuracy similar to that of a second human rater. We additionally apply the method to 122 patient scans and report the estimated tumor model parameters and their relations with segmentation and registration results. Based on the results from this patient population, we construct a statistical atlas of the glioma by inverting the estimated deformation fields to warp the tumor segmentations of the patient scans into a common space. PMID:22907965

  14. Assessing the Robustness of Complete Bacterial Genome Segmentations

    NASA Astrophysics Data System (ADS)

    Devillers, Hugo; Chiapello, Hélène; Schbath, Sophie; El Karoui, Meriem

    Comparison of closely related bacterial genomes has revealed the presence of highly conserved sequences forming a "backbone" that is interrupted by numerous, less conserved, DNA fragments. Segmentation of bacterial genomes into backbone and variable regions is particularly useful for investigating bacterial genome evolution. Several software tools have been designed to compare complete bacterial chromosomes, and a few online databases store pre-computed genome comparisons. However, very few statistical methods are available to evaluate the reliability of these software tools and to compare the results obtained with them. To fill this gap, we have developed two local scores to measure the robustness of bacterial genome segmentations. Our method uses a simulation procedure based on random perturbations of the compared genomes. The scores presented in this paper are simple to implement, and our results show that they make it easy to discriminate between robust and non-robust bacterial genome segmentations when using aligners such as MAUVE and MGA.

  15. Multivariate statistical model for 3D image segmentation with application to medical images.

    PubMed

    John, Nigel M; Kabuka, Mansur R; Ibrahim, Mohamed O

    2003-12-01

    In this article we describe a statistical model that was developed to segment brain magnetic resonance images. The statistical segmentation algorithm was applied after a pre-processing stage involving the use of a 3D anisotropic filter along with histogram equalization techniques. The segmentation algorithm makes use of prior knowledge and a probability-based multivariate model designed to semi-automate the process of segmentation. The algorithm was applied to images obtained from the Center for Morphometric Analysis at Massachusetts General Hospital as part of the Internet Brain Segmentation Repository (IBSR). The developed algorithm showed improved accuracy over the k-means, adaptive maximum a posteriori probability (MAP), biased MAP, and other algorithms. Experimental results showing the segmentation and the results of comparisons with other algorithms are provided. Results are based on an overlap criterion against expertly segmented images from the IBSR. The algorithm produced average results of approximately 80% overlap with the expertly segmented images (compared with 85% for manual segmentation and 55% for other algorithms).
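    A minimal, generic sketch of an overlap criterion for binary masks (the record does not specify which overlap measure was used, so Dice and Jaccard are shown as the common candidates).

        import numpy as np

        def overlap(seg, ref):
            seg, ref = seg.astype(bool), ref.astype(bool)
            inter = np.logical_and(seg, ref).sum()
            dice = 2.0 * inter / (seg.sum() + ref.sum())
            jaccard = inter / np.logical_or(seg, ref).sum()
            return dice, jaccard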

  16. Supervised retinal vessel segmentation from color fundus images based on matched filtering and AdaBoost classifier.

    PubMed

    Memari, Nogol; Ramli, Abd Rahman; Bin Saripan, M Iqbal; Mashohor, Syamsiah; Moghbel, Mehrdad

    2017-01-01

    The structure and appearance of the blood vessel network in retinal fundus images are an essential part of diagnosing various problems associated with the eyes, such as diabetes and hypertension. In this paper, an automatic retinal vessel segmentation method utilizing matched filter techniques coupled with an AdaBoost classifier is proposed. The fundus image is enhanced using morphological operations, the contrast is increased using the contrast-limited adaptive histogram equalization (CLAHE) method and the inhomogeneity is corrected using a Retinex approach. Then, the blood vessels are enhanced using a combination of B-COSFIRE and Frangi matched filters. From this preprocessed image, different statistical features are computed on a pixel-wise basis and used in an AdaBoost classifier to extract the blood vessel network inside the image. Finally, the segmented images are postprocessed to remove misclassified pixels and regions. The proposed method was validated using the publicly accessible Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE) and Child Heart and Health Study in England (CHASE_DB1) datasets commonly used for determining the accuracy of retinal vessel segmentation methods. The accuracy of the proposed segmentation method was comparable to other state-of-the-art methods while being very close to the manual segmentation provided by the second human observer, with an average accuracy of 0.972, 0.951 and 0.948 on the DRIVE, STARE and CHASE_DB1 datasets, respectively.

  17. Optimization of segmented thermoelectric generator using Taguchi and ANOVA techniques.

    PubMed

    Kishore, Ravi Anant; Sanghadasa, Mohan; Priya, Shashank

    2017-12-01

    Recent studies have demonstrated that segmented thermoelectric generators (TEGs) can operate over large thermal gradients and thus provide better performance (reported efficiency up to 11%) than traditional TEGs comprising a single thermoelectric (TE) material. However, segmented TEGs are still in the early stages of development due to the inherent complexity of their design optimization and manufacturability. In this study, we demonstrate physics-based numerical techniques along with analysis of variance (ANOVA) and the Taguchi optimization method for optimizing the performance of segmented TEGs. We have considered a comprehensive set of design parameters, such as the geometrical dimensions of the p-n legs, height of segmentation, hot-side temperature, and load resistance, in order to optimize the output power and efficiency of segmented TEGs. Using state-of-the-art TE material properties and appropriate statistical tools, we provide a near-optimum TEG configuration with only 25 experiments, as compared to the 3125 experiments needed by conventional full factorial optimization. The effect of environmental factors on the optimization of segmented TEGs is also studied. The Taguchi results are validated against results obtained using the traditional full factorial optimization technique, and a TEG configuration for simultaneous optimization of power and efficiency is obtained.

  18. Liver segmentation from CT images using a sparse priori statistical shape model (SP-SSM).

    PubMed

    Wang, Xuehu; Zheng, Yongchang; Gan, Lan; Wang, Xuan; Sang, Xinting; Kong, Xiangfeng; Zhao, Jie

    2017-01-01

    This study proposes a new liver segmentation method based on a sparse a priori statistical shape model (SP-SSM). First, mark points are selected in the liver a priori model and the original image. Then, the a priori shape and its mark points are used to obtain a dictionary for the liver boundary information. Second, the sparse coefficient is calculated based on the correspondence between mark points in the original image and those in the a priori model, and then the sparse statistical model is established by combining the sparse coefficients and the dictionary. Finally, the intensity energy and boundary energy models are built based on the intensity information and the specific boundary information of the original image. Then, the sparse matching constraint model is established based on the sparse coding theory. These models jointly drive the iterative deformation of the sparse statistical model to approximate and accurately extract the liver boundaries. This method can solve the problems of deformation model initialization and a priori method accuracy using the sparse dictionary. The SP-SSM can achieve a mean overlap error of 4.8% and a mean volume difference of 1.8%, whereas the average symmetric surface distance and the root mean square symmetric surface distance can reach 0.8 mm and 1.4 mm, respectively.

  19. Right ventricle functional parameters estimation in arrhythmogenic right ventricular dysplasia using a robust shape based deformable model.

    PubMed

    Oghli, Mostafa Ghelich; Dehlaghi, Vahab; Zadeh, Ali Mohammad; Fallahi, Alireza; Pooyan, Mohammad

    2014-07-01

    Assessment of cardiac right-ventricle function plays an essential role in the diagnosis of arrhythmogenic right ventricular dysplasia (ARVD). Among clinical tests, cardiac magnetic resonance imaging (MRI) is now becoming the most valid imaging technique for diagnosing ARVD. Fatty infiltration of the right ventricular free wall can be visible on cardiac MRI. Deriving right-ventricle functional parameters from cardiac MRI involves segmenting the right ventricle in each slice of the end-diastole and end-systole phases of the cardiac cycle, and calculating the end-diastolic and end-systolic volumes along with further functional parameters. The main problem in this task is the segmentation step. We used a robust deformable-model-based method that uses shape information for segmentation of the right ventricle in short-axis MRI images. After segmenting the right ventricle from base to apex in the end-diastole and end-systole phases, the ventricular volumes in these phases were calculated and the ejection fraction was then derived. We performed a quantitative evaluation of clinical cardiac parameters derived from the automatic segmentation by comparison against a manual delineation of the ventricles. The manually and automatically determined quantitative clinical parameters were statistically compared by means of linear regression, which fits a line to the data such that the root-mean-square error (RMSE) of the residuals is minimized. The results show low RMSE for right ventricle ejection fraction and volume (≤ 0.06 for RV EF, and ≤ 10 mL for RV volume). Evaluation of the segmentation results was also done by means of four statistical measures: sensitivity, specificity, similarity index and Jaccard index. The average value of the similarity index is 86.87%. The Jaccard index mean value is 83.85%, which shows good segmentation accuracy. The average sensitivity is 93.9% and the mean specificity is 89.45%. These results show the reliability of the proposed method in cases where manual segmentation is impractical. The large shape variability of the right ventricle led us to use a shape-prior-based method; this work could be extended with four-dimensional processing to determine the first ventricular slices.

  20. Body Segment Differences in Surface Area, Skin Temperature and 3D Displacement and the Estimation of Heat Balance during Locomotion in Hominins

    PubMed Central

    Cross, Alan; Collard, Mark; Nelson, Andrew

    2008-01-01

    The conventional method of estimating heat balance during locomotion in humans and other hominins treats the body as an undifferentiated mass. This is problematic because the segments of the body differ with respect to several variables that can affect thermoregulation. Here, we report a study that investigated the impact on heat balance during locomotion of inter-segment differences in three of these variables: surface area, skin temperature and rate of movement. The approach adopted in the study was to generate heat balance estimates with the conventional method and then compare them with heat balance estimates generated with a method that takes into account inter-segment differences in surface area, skin temperature and rate of movement. We reasoned that, if the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement affect heat balance during locomotion is correct, the estimates yielded by the two methods should be statistically significantly different. Anthropometric data were collected on seven adult male volunteers. The volunteers then walked on a treadmill at 1.2 m/s while 3D motion capture cameras recorded their movements. Next, the conventional and segmented methods were used to estimate the volunteers' heat balance while walking in four ambient temperatures. Lastly, the estimates produced with the two methods were compared with the paired t-test. The estimates of heat balance during locomotion yielded by the two methods are significantly different. Those yielded by the segmented method are significantly lower than those produced by the conventional method. Accordingly, the study supports the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement impact heat balance during locomotion. This has important implications not only for current understanding of heat balance during locomotion in hominins but also for how future research on this topic should be approached. PMID:18560580

  1. Body segment differences in surface area, skin temperature and 3D displacement and the estimation of heat balance during locomotion in hominins.

    PubMed

    Cross, Alan; Collard, Mark; Nelson, Andrew

    2008-06-18

    The conventional method of estimating heat balance during locomotion in humans and other hominins treats the body as an undifferentiated mass. This is problematic because the segments of the body differ with respect to several variables that can affect thermoregulation. Here, we report a study that investigated the impact on heat balance during locomotion of inter-segment differences in three of these variables: surface area, skin temperature and rate of movement. The approach adopted in the study was to generate heat balance estimates with the conventional method and then compare them with heat balance estimates generated with a method that takes into account inter-segment differences in surface area, skin temperature and rate of movement. We reasoned that, if the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement affect heat balance during locomotion is correct, the estimates yielded by the two methods should be statistically significantly different. Anthropometric data were collected on seven adult male volunteers. The volunteers then walked on a treadmill at 1.2 m/s while 3D motion capture cameras recorded their movements. Next, the conventional and segmented methods were used to estimate the volunteers' heat balance while walking in four ambient temperatures. Lastly, the estimates produced with the two methods were compared with the paired t-test. The estimates of heat balance during locomotion yielded by the two methods are significantly different. Those yielded by the segmented method are significantly lower than those produced by the conventional method. Accordingly, the study supports the hypothesis that inter-segment differences in surface area, skin temperature and rate of movement impact heat balance during locomotion. This has important implications not only for current understanding of heat balance during locomotion in hominins but also for how future research on this topic should be approached.
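    As a rough numerical illustration of the segmented approach, dry heat exchange can be summed per segment, each with its own surface area, skin temperature and (displacement-dependent) convective coefficient; all values below are hypothetical.

        def dry_heat_exchange(segments, t_air):
            # segments: iterable of (area_m2, t_skin_C, h_W_per_m2K) tuples
            return sum(h * a * (t_skin - t_air) for a, t_skin, h in segments)

        # hypothetical two-segment example, for illustration only
        arm, torso = (0.32, 33.0, 12.0), (0.58, 34.5, 8.0)
        print(dry_heat_exchange([arm, torso], t_air=25.0))  # watts to the air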

  2. Fast and robust brain tumor segmentation using level set method with multiple image information.

    PubMed

    Lok, Ka Hei; Shi, Lin; Zhu, Xianlun; Wang, Defeng

    2017-01-01

    Brain tumor segmentation is a challenging task because of the variation in intensity caused by the inhomogeneous content of tumor tissue and by the choice of imaging modality. In 2010, Zhang developed the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) model, which combined the merits of edge-based and region-based segmentation. The aim was to improve the SBGFRLS method by modifying its signed pressure force (SPF) term with multiple image information and to demonstrate the effectiveness of the proposed method on clinical images. In the original SBGFRLS model, the contour evolution direction mainly depends on the SPF. By introducing a directional term into the SPF, the metric can control the evolution direction. The SPF is altered by statistical values computed from the regions enclosed by the contour, a concept that can be extended to jointly incorporate multiple image information. The new SPF term is expected to offer a solution to the blurred-edge problem in brain tumor segmentation. The proposed method is validated on clinical images including pre- and post-contrast magnetic resonance images. Accuracy and robustness are assessed using sensitivity, specificity, the DICE similarity coefficient and the Jaccard similarity index. Experimental results show improvement, in particular an increase in sensitivity at the same specificity, in segmenting all types of tumors except the diffuse tumor. The novel brain tumor segmentation method is clinically oriented, with a fast, robust and accurate implementation and minimal user interaction. The method effectively segmented homogeneously enhanced, non-enhanced, heterogeneously enhanced, and ring-enhanced tumors under MR imaging. Although the method is limited in identifying edema and diffuse tumors, several possible solutions are suggested to turn the curve evolution into a fully functional clinical diagnosis tool.
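    For context, the original SBGFRLS signed pressure force can be sketched as below (c1 and c2 are the mean intensities inside and outside the zero level set, and the SPF is normalized to [-1, 1]); the paper's directional, multi-information modification is not shown, and the phi < 0 inside convention is an assumption.

        import numpy as np

        def spf(img, phi):
            inside = phi < 0
            c1 = img[inside].mean()        # mean intensity inside the contour
            c2 = img[~inside].mean()       # mean intensity outside the contour
            num = img - (c1 + c2) / 2.0
            return num / (np.abs(num).max() + 1e-12)  # SPF in [-1, 1]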

  3. Co-occurrence statistics as a language-dependent cue for speech segmentation.

    PubMed

    Saksida, Amanda; Langus, Alan; Nespor, Marina

    2017-05-01

    To what extent can language acquisition be explained in terms of different associative learning mechanisms? It has been hypothesized that distributional regularities in spoken languages are strong enough to elicit statistical learning about dependencies among speech units. Distributional regularities could be a useful cue for word learning even without rich language-specific knowledge. However, it is not clear how strong and reliable the distributional cues are that humans might use to segment speech. We investigate cross-linguistic viability of different statistical learning strategies by analyzing child-directed speech corpora from nine languages and by modeling possible statistics-based speech segmentations. We show that languages vary as to which statistical segmentation strategies are most successful. The variability of the results can be partially explained by systematic differences between languages, such as rhythmical differences. The results confirm previous findings that different statistical learning strategies are successful in different languages and suggest that infants may have to primarily rely on non-statistical cues when they begin their process of speech segmentation. © 2016 John Wiley & Sons Ltd.

  4. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    PubMed Central

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of the supervised methods. In this context, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Among the non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453

  5. Nucleus detection using gradient orientation information and linear least squares regression

    NASA Astrophysics Data System (ADS)

    Kwak, Jin Tae; Hewitt, Stephen M.; Xu, Sheng; Pinto, Peter A.; Wood, Bradford J.

    2015-03-01

    Computerized histopathology image analysis enables an objective, efficient, and quantitative assessment of digitized histopathology images. Such analysis often requires an accurate and efficient detection and segmentation of histological structures such as glands, cells and nuclei. The segmentation is used to characterize tissue specimens and to determine the disease status or outcomes. The segmentation of nuclei, in particular, is challenging due to the overlapping or clumped nuclei. Here, we propose a nuclei seed detection method for the individual and overlapping nuclei that utilizes the gradient orientation or direction information. The initial nuclei segmentation is provided by a multiview boosting approach. The angle of the gradient orientation is computed and traced for the nuclear boundaries. Taking the first derivative of the angle of the gradient orientation, high concavity points (junctions) are discovered. False junctions are found and removed by adopting a greedy search scheme with the goodness-of-fit statistic in a linear least squares sense. Then, the junctions determine boundary segments. Partial boundary segments belonging to the same nucleus are identified and combined by examining the overlapping area between them. Using the final set of the boundary segments, we generate the list of seeds in tissue images. The method achieved an overall precision of 0.89 and a recall of 0.88 in comparison to the manual segmentation.

  6. Image segmentation with a novel regularized composite shape prior based on surrogate study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu

    Purpose: Incorporating training into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy when compared to the multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves superior segmentation performance compared with typical benchmark schemes.

  7. Fully automatic segmentation of femurs with medullary canal definition in high and in low resolution CT scans.

    PubMed

    Almeida, Diogo F; Ruben, Rui B; Folgado, João; Fernandes, Paulo R; Audenaert, Emmanuel; Verhegghe, Benedict; De Beule, Matthieu

    2016-12-01

    Femur segmentation can be an important tool in orthopedic surgical planning. However, in order to overcome the need for an experienced user with extensive knowledge of the techniques, segmentation should be fully automatic. In this paper a new fully automatic femur segmentation method for CT images is presented. This method is also able to define the medullary canal automatically and performs well even in low-resolution CT scans. Fully automatic femoral segmentation was performed by adapting a template mesh of the femoral volume to medical images. In order to achieve this, an adaptation of the active shape model (ASM) technique based on the statistical shape model (SSM) and local appearance model (LAM) of the femur, with a novel initialization method, was used to drive the template mesh deformation to fit the in-image femoral shape in a time-effective approach. With the proposed method a 98% convergence rate was achieved. For the high-resolution CT image group the average error is less than 1 mm. For the low-resolution image group the results are also accurate, with an average error of less than 1.5 mm. The proposed segmentation pipeline is accurate, robust and completely user-free. The method is robust to patient orientation, image artifacts and poorly defined edges. The results excelled even in CT images with a significant slice thickness, i.e., above 5 mm. Medullary canal segmentation increases the geometric information that can be used in orthopedic surgical planning or in finite element analysis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  8. FreeSurfer-initiated fully-automated subcortical brain segmentation in MRI using Large Deformation Diffeomorphic Metric Mapping.

    PubMed

    Khan, Ali R; Wang, Lei; Beg, Mirza Faisal

    2008-07-01

    Fully-automated brain segmentation methods have not been widely adopted for clinical use because of issues related to reliability, accuracy, and limitations of delineation protocol. By combining the probabilistic FreeSurfer (FS) method with the Large Deformation Diffeomorphic Metric Mapping (LDDMM)-based label-propagation method, we are able to increase reliability and accuracy, and allow for flexibility in template choice. Our method uses the automated FreeSurfer subcortical labeling to provide a coarse-to-fine introduction of information in the LDDMM template-based segmentation, resulting in a fully-automated subcortical brain segmentation method (FS+LDDMM). One major advantage of the FS+LDDMM-based approach is that the automatically generated segmentations are inherently smooth, so subsequent steps in shape analysis can directly follow without manual post-processing or loss of detail. We have evaluated our new FS+LDDMM method on several databases containing a total of 50 subjects with different pathologies, scan sequences and manual delineation protocols for labeling the basal ganglia, thalamus, and hippocampus. In healthy controls we report Dice overlap measures of 0.81, 0.83, 0.74, 0.86 and 0.75 for the right caudate nucleus, putamen, pallidum, thalamus and hippocampus respectively. We also find statistically significant improvement of accuracy in FS+LDDMM over FreeSurfer for the caudate nucleus and putamen of Huntington's disease and Tourette's syndrome subjects, and the right hippocampus of Schizophrenia subjects.

  9. TU-H-CAMPUS-JeP2-05: Can Automatic Delineation of Cardiac Substructures On Noncontrast CT Be Used for Cardiac Toxicity Analysis?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y; Liao, Z; Jiang, W

    Purpose: To evaluate the feasibility of using an automatic segmentation tool to delineate cardiac substructures from computed tomography (CT) images for cardiac toxicity analysis for non-small cell lung cancer (NSCLC) patients after radiotherapy. Methods: A multi-atlas segmentation tool developed in-house was used to automatically delineate eleven cardiac substructures, including the whole heart, four heart chambers, and six great vessels, from the averaged 4DCT planning images of 49 NSCLC patients. The automatically segmented contours were edited appropriately by two experienced radiation oncologists. The modified contours were compared with the auto-segmented contours using the Dice similarity coefficient (DSC) and mean surface distance (MSD) to evaluate how much modification was needed. In addition, the dose volume histograms (DVH) of the modified contours were compared with those of the auto-segmented contours to evaluate the dosimetric difference between modified and auto-segmented contours. Results: Across the eleven structures, the average DSC values ranged from 0.73 ± 0.08 to 0.95 ± 0.04 and the average MSD values ranged from 1.3 ± 0.6 mm to 2.9 ± 5.1 mm for the 49 patients. Overall, the required modification was small. The pulmonary vein (PV) and the inferior vena cava required the most modification. The V30 (volume receiving 30 Gy or above) for the whole heart and the mean dose to the whole heart and four heart chambers did not show statistically significant differences between modified and auto-segmented contours. The maximum dose to the great vessels did not show statistically significant differences except for the PV. Conclusion: The automatic segmentation of the cardiac substructures did not require substantial modification. The dosimetric evaluation showed no statistically significant difference between auto-segmented and modified contours except for the PV, which suggests that auto-segmented contours are feasible for cardiac dose-response studies in clinical practice with a minor modification of the PV.
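
    A sketch of the mean surface distance (MSD) metric used above, assuming the contours are available as binary masks with known voxel spacing; this is a generic formulation, not the authors' in-house tool.

        import numpy as np
        from scipy import ndimage

        def surface(mask):
            # Boundary voxels: the mask minus its erosion.
            return mask & ~ndimage.binary_erosion(mask)

        def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
            sa = surface(np.asarray(a, dtype=bool))
            sb = surface(np.asarray(b, dtype=bool))
            # Distance from every voxel to the nearest surface voxel of the other mask.
            dist_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
            dist_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
            # Symmetric average over both surfaces.
            return 0.5 * (dist_to_b[sa].mean() + dist_to_a[sb].mean())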

  10. Morphological patterns of lip prints in Mangaloreans based on Suzuki and Tsuchihashi classification

    PubMed Central

    Jeergal, Prabhakar A; Pandit, Siddharth; Desai, Dinkar; Surekha, R; Jeergal, Vasanti A

    2016-01-01

    Introduction: Cheiloscopy is the study of the furrows or grooves present on the red part or vermilion border of the human lips. The present study aims to classify the characteristics of lip prints and to determine the most common morphological pattern specific to the Mangalorean people of Southern India. For the first time, this study also assesses the association between gender and different lip segments within a population. Materials and Methods: A total of 200 residents of Mangalore (100 males and 100 females), aged 18 to 60 years, were included. Materials used to take the impression of the lips included red lipstick, A4 size white bond paper and cellophane tape. The prints obtained were scanned using a Canon Image Scanner and stored in a folder on a personal computer. The images were cropped and inverted in gray scale using Adobe Photoshop software. Each lip print was divided into eight segments and examined. Suzuki and Tsuchihashi's classification (1970) was used to classify the types of grooves, and the results were statistically analyzed. Six types of grooves were recorded in Mangaloreans' lips. Statistical Analysis: The association between gender and different lip segments was tested using Chi-square analysis in the given population. Results: In males, groove Type I' was the most frequently recorded, followed by Type III, Type II, Type I, Type IV and Type V in descending order. In females, Type I' was the most frequently recorded, followed by Type II, Type III, Type IV, Type I and Type V in descending order. Conclusion: Males and females displayed statistically significant differences in lip print patterns for different lip sites: the lower medial lip, as well as the upper and lower lateral segments. Only the upper medial lip segment displayed no statistically significant difference in lip print pattern between males and females. This shows that the distribution of lip prints is generally dissimilar for males and females, with varying predominance according to lip segment. PMID:27601831
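
    The gender-association test reported above is a standard chi-square test on a contingency table of groove types by gender; a sketch with invented counts (not the study's data):

        import numpy as np
        from scipy.stats import chi2_contingency

        # Rows: males, females; columns: groove types I, I', II, III, IV, V.
        # The counts below are made up purely for illustration.
        table = np.array([[14, 31, 18, 22, 10, 5],
                          [11, 36, 25, 16, 8, 4]])
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")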

  11. Survey statistics of automated segmentations applied to optical imaging of mammalian cells.

    PubMed

    Bajcsy, Peter; Cardone, Antonio; Chalfoun, Joe; Halter, Michael; Juba, Derek; Kociolek, Marcin; Majurski, Michael; Peskin, Adele; Simon, Carl; Simon, Mylene; Vandecreme, Antoine; Brady, Mary

    2015-10-15

    The goal of this survey paper is to overview cellular measurements using optical microscopy imaging followed by automated image segmentation. The cellular measurements of primary interest are taken from mammalian cells and their components. They are denoted as two- or three-dimensional (2D or 3D) image objects of biological interest. In our applications, such cellular measurements are important for understanding cell phenomena, such as cell counts, cell-scaffold interactions, cell colony growth rates, or cell pluripotency stability, as well as for establishing quality metrics for stem cell therapies. In this context, this survey paper is focused on automated segmentation as a software-based measurement leading to quantitative cellular measurements. We define the scope of this survey and a classification schema first. Next, all retrieved and manually filtered publications are classified according to the main categories: (1) objects of interest (or objects to be segmented), (2) imaging modalities, (3) digital data axes, (4) segmentation algorithms, (5) segmentation evaluations, (6) computational hardware platforms used for segmentation acceleration, and (7) object (cellular) measurements. Finally, all classified papers are converted programmatically into a set of hyperlinked web pages with occurrence and co-occurrence statistics of assigned categories. The survey paper presents to a reader: (a) the state-of-the-art overview of published papers about automated segmentation applied to optical microscopy imaging of mammalian cells, (b) a classification of segmentation aspects in the context of cell optical imaging, (c) histogram and co-occurrence summary statistics about cellular measurements, segmentations, segmented objects, segmentation evaluations, and the use of computational platforms for accelerating segmentation execution, and (d) open research problems to pursue. The novel contributions of this survey paper are: (1) a new type of classification of cellular measurements and automated segmentation, (2) statistics about the published literature, and (3) a web hyperlinked interface to classification statistics of the surveyed papers at https://isg.nist.gov/deepzoomweb/resources/survey/index.html.

  12. GLISTRboost: Combining Multimodal MRI Segmentation, Registration, and Biophysical Tumor Growth Modeling with Gradient Boosting Machines for Glioma Segmentation.

    PubMed

    Bakas, Spyridon; Zeng, Ke; Sotiras, Aristeidis; Rathore, Saima; Akbari, Hamed; Gaonkar, Bilwaj; Rozycki, Martin; Pati, Sarthak; Davatzikos, Christos

    2016-01-01

    We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.

  13. Automatic segmentation of right ventricular ultrasound images using sparse matrix transform and a level set

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Cong, Zhibin; Fei, Baowei

    2013-11-01

    An automatic segmentation framework is proposed to segment the right ventricle (RV) in echocardiographic images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform, a training model, and a localized region-based level set. First, the sparse matrix transform extracts main motion regions of the myocardium as eigen-images by analyzing the statistical information of the images. Second, an RV training model is registered to the eigen-images in order to locate the position of the RV. Third, the training model is adjusted and then serves as an optimized initialization for the segmentation of each image. Finally, based on the initializations, a localized, region-based level set algorithm is applied to segment both epicardial and endocardial boundaries in each echocardiographic image. Three evaluation methods were used to validate the performance of the segmentation framework. The Dice coefficient measures the overall agreement between the manual and automatic segmentation. The absolute distance and the Hausdorff distance between the boundaries from manual and automatic segmentation were used to measure the accuracy of the segmentation. Ultrasound images of human subjects were used for validation. For the epicardial and endocardial boundaries, the Dice coefficients were 90.8 ± 1.7% and 87.3 ± 1.9%, the absolute distances were 2.0 ± 0.42 mm and 1.79 ± 0.45 mm, and the Hausdorff distances were 6.86 ± 1.71 mm and 7.02 ± 1.17 mm, respectively. The automatic segmentation method based on a sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.

  14. Prostate segmentation in MRI using fused T2-weighted and elastography images

    NASA Astrophysics Data System (ADS)

    Nir, Guy; Sahebjavaher, Ramin S.; Baghani, Ali; Sinkus, Ralph; Salcudean, Septimiu E.

    2014-03-01

    Segmentation of the prostate in medical imaging is a challenging and important task for surgical planning and delivery of prostate cancer treatment. Automatic prostate segmentation can improve speed, reproducibility and consistency of the process. In this work, we propose a method for automatic segmentation of the prostate in magnetic resonance elastography (MRE) images. The method utilizes the complementary property of the elastogram and the corresponding T2-weighted image, which are obtained from the phase and magnitude components of the imaging signal, respectively. It follows a variational approach to propagate an active contour model based on the combination of region statistics in the elastogram and the edge map of the T2-weighted image. The method is fast and does not require prior shape information. The proposed algorithm is tested on 35 clinical image pairs from five MRE data sets, and is evaluated in comparison with manual contouring. The mean absolute distance between the automatic and manual contours is 1.8 mm, with a maximum distance of 5.6 mm. The relative area error is 7.6%, and the duration of the segmentation process is 2 s per slice.

  15. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets

    PubMed Central

    Xiao, Xun; Geyer, Veikko F.; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F.

    2016-01-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582

  16. Brain tumor detection and segmentation in a CRF (conditional random fields) framework with pixel-pairwise affinity and superpixel-level features.

    PubMed

    Wu, Wei; Chen, Albert Y C; Zhao, Liang; Corso, Jason J

    2014-03-01

    Detection and segmentation of a brain tumor such as glioblastoma multiforme (GBM) in magnetic resonance (MR) images are often challenging due to its intrinsically heterogeneous signal characteristics. A robust segmentation method for brain tumor MRI scans was developed and tested. Simple thresholds and statistical methods are unable to adequately segment the various elements of the GBM, such as local contrast enhancement, necrosis, and edema. Most voxel-based methods cannot achieve satisfactory results in larger data sets, and the methods based on generative or discriminative models have intrinsic limitations during application, such as small sample set learning and transfer. A new method was developed to overcome these challenges. Multimodal MR images are segmented into superpixels using algorithms to alleviate the sampling issue and to improve the sample representativeness. Next, features were extracted from the superpixels using multi-level Gabor wavelet filters. Based on the features, a support vector machine (SVM) model and an affinity metric model for tumors were trained to overcome the limitations of previous generative models. Based on the output of the SVM and spatial affinity models, conditional random fields theory was applied to segment the tumor in a maximum a posteriori fashion given the smoothness prior defined by our affinity model. Finally, labeling noise was removed using "structural knowledge" such as the symmetrical and continuous characteristics of the tumor in the spatial domain. The system was evaluated with 20 GBM cases and the BraTS challenge data set. Dice coefficients were computed, and the results were highly consistent with those reported by Zikic et al. (MICCAI 2012, Lecture Notes in Computer Science, vol. 7512, pp. 369-376, 2012). A brain tumor segmentation method using model-aware affinity demonstrates comparable performance with other state-of-the-art algorithms.

  17. A novel automatic segmentation workflow of axial breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Besbes, Feten; Gargouri, Norhene; Damak, Alima; Sellami, Dorra

    2018-04-01

    In this paper we propose a novel, fully automatic breast tissue segmentation process that is independent of expert calibration and contrast. The proposed algorithm is composed of two major steps. The first step consists of the detection of the breast boundaries. It is based on image content analysis and the Moore-Neighbour tracing algorithm; as processing steps, Otsu thresholding and a neighborhood algorithm are applied. Then, the external area of the breast is removed to obtain an approximate breast region. The second step is the delineation of the chest wall, which is treated as the lowest-cost path linking three key points located automatically on the breast: the left and right boundary points and the middle upper point, placed at the sternum region using a statistical method. The minimum-cost path search problem is solved with Dijkstra's algorithm. Evaluation results reveal the robustness of our process in the face of different breast densities, complex forms and challenging cases. In fact, the mean overlap between manual segmentation and automatic segmentation with our method is 96.5%. A comparative study shows that our proposed process is competitive and faster than existing methods. The segmentation of 120 slices with our method is achieved in 20.57 ± 5.2 s.
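
    A sketch of the first step (an approximate breast region from thresholding plus component analysis), assuming scikit-image; the Moore-Neighbour tracing and removal of the external area are simplified here to Otsu thresholding followed by largest-component selection.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.measure import label

        def rough_breast_region(image):
            # Binarize with Otsu's threshold, then keep the largest connected
            # component as the approximate breast region.
            binary = image > threshold_otsu(image)
            labels = label(binary)
            if labels.max() == 0:
                return binary
            sizes = np.bincount(labels.ravel())
            sizes[0] = 0                      # ignore the background label
            return labels == sizes.argmax()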

  18. Automated Solar Flare Detection and Feature Extraction in High-Resolution and Full-Disk Hα Images

    NASA Astrophysics Data System (ADS)

    Yang, Meng; Tian, Yu; Liu, Yangyi; Rao, Changhui

    2018-05-01

    In this article, an automated solar flare detection method applicable to both full-disk and local high-resolution Hα images is proposed. An adaptive gray threshold and an area threshold are used to segment the flare region. Features of each detected flare event are extracted, e.g. the start, peak, and end times, the importance class, and the brightness class. Experimental results have verified that the proposed method obtains more stable and accurate segmentation results than previous works on full-disk images from Big Bear Solar Observatory (BBSO) and Kanzelhöhe Observatory for Solar and Environmental Research (KSO), and satisfactory segmentation results on high-resolution images from the Goode Solar Telescope (GST). Moreover, the extracted flare features correlate well with the data given by KSO. The method may enable more sophisticated statistical analyses of Hα solar flares.

  19. A computerized MRI biomarker quantification scheme for a canine model of Duchenne muscular dystrophy

    PubMed Central

    Wang, Jiahui; Fan, Zheng; Vandenborne, Krista; Walter, Glenn; Shiloh-Malawsky, Yael; An, Hongyu; Kornegay, Joe N.; Styner, Martin A.

    2015-01-01

    Purpose: Golden retriever muscular dystrophy (GRMD) is a widely used canine model of Duchenne muscular dystrophy (DMD). Recent studies have shown that magnetic resonance imaging (MRI) can be used to non-invasively detect consistent changes in both DMD and GRMD. In this paper, we propose a semi-automated system to quantify MRI biomarkers of GRMD. Methods: Our system was applied to a database of 45 MRI scans from 8 normal and 10 GRMD dogs in a longitudinal natural history study. We first segmented six proximal pelvic limb muscles using two competing schemes: 1) standard, limited muscle range segmentation and 2) semi-automatic full muscle segmentation. We then performed pre-processing, including: intensity inhomogeneity correction, spatial registration of different image sequences, intensity calibration of T2-weighted (T2w) and T2-weighted fat suppressed (T2fs) images, and calculation of MRI biomarker maps. Finally, for each of the segmented muscles, we automatically measured MRI biomarkers of muscle volume and intensity statistics over MRI biomarker maps, and statistical image texture features. Results: The muscle volume and the mean intensities in T2 value, fat, and water maps showed group differences between normal and GRMD dogs. For the statistical texture biomarkers, both the histogram and run-length matrix features showed obvious group differences between normal and GRMD dogs. The full muscle segmentation shows significantly less error and variability in the proposed biomarkers when compared to the standard, limited muscle range segmentation. Conclusion: The experimental results demonstrated that this quantification tool can reliably quantify MRI biomarkers in GRMD dogs, suggesting that it would also be useful for quantifying disease progression and measuring therapeutic effect in DMD patients. PMID:23299128

  20. A two-stage method for microcalcification cluster segmentation in mammography by deformable models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.

    Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is a prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method for MC clusters is investigated. The first stage is targeted to accurate and time-efficient segmentation of the majority of the particles of a MC cluster, by means of a level set method. The second stage is targeted to shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method in terms of inter- and intraobserver agreement was evaluated in a case sample of 80 MC clusters originating from the digital database for screening mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists’ segmentations quantitatively by two distance metrics (Hausdorff distance, HDIST_cluster; average of minimum distance, AMINDIST_cluster) and the area overlap measure (AOM_cluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted, and a correlation-based feature selection method yielded a feature subset to feed into a support vector machine classifier. Classification performance of the MC cluster features was estimated by means of the area under the receiver operating characteristic curve (Az ± standard error) utilizing tenfold cross-validation. A previously developed B-spline active rays segmentation method was also considered for comparison purposes. Results: Interobserver and intraobserver segmentation agreements (median and [25%, 75%] quartile range) were substantial with respect to the distance metrics HDIST_cluster (2.3 [1.8, 2.9] and 2.5 [2.1, 3.2] pixels) and AMINDIST_cluster (0.8 [0.6, 1.0] and 1.0 [0.8, 1.2] pixels), and moderate with respect to AOM_cluster (0.64 [0.55, 0.71] and 0.59 [0.52, 0.66]). The proposed segmentation method outperformed (0.80 ± 0.04) the B-spline active rays segmentation method (0.69 ± 0.04) with statistical significance (Mann-Whitney U-test, p < 0.05), suggesting the significance of the proposed semiautomated method. Conclusions: Results indicate a reliable semiautomated segmentation method for MC clusters offered by deformable models, which could be utilized in MC cluster quantitative image analysis.

  1. Historical Data Analysis of Hospital Discharges Related to the Amerithrax Attack in Florida

    PubMed Central

    Burke, Lauralyn K.; Brown, C. Perry; Johnson, Tammie M.

    2016-01-01

    Interrupted time-series analysis (ITSA) can be used to identify, quantify, and evaluate the magnitude and direction of an event on the basis of time-series data. This study evaluates the impact of the bioterrorist anthrax attacks (“Amerithrax”) on hospital inpatient discharges in the metropolitan statistical area of Palm Beach, Broward, and Miami-Dade counties in the fourth quarter of 2001. Three statistical methods—standardized incidence ratio (SIR), segmented regression, and an autoregressive integrated moving average (ARIMA)—were used to determine whether Amerithrax influenced inpatient utilization. The SIR found a non–statistically significant 2 percent decrease in hospital discharges. Although the segmented regression test found a slight increase in the discharge rate during the fourth quarter, it was also not statistically significant and therefore could not be attributed to Amerithrax. Diagnostics performed in preparation for ARIMA indicated that the quarterly time series was not serially correlated, violating one of the assumptions of the ARIMA method, so the impact on the time-series data could not be properly evaluated with it. The lack of granularity of the time frames hindered the successful evaluation of the impact by the three analytic methods. This study demonstrates that the granularity of the data points is as important as the number of data points in a time series. ITSA is important for the ability to evaluate the impact that any hazard may have on inpatient utilization. Knowledge of hospital utilization patterns during disasters offers healthcare and civic professionals valuable information to plan, respond, mitigate, and evaluate any outcomes stemming from biothreats. PMID:27843420
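
    A sketch of the segmented-regression piece of an ITSA on simulated quarterly data (the study's data are not reproduced here); statsmodels is assumed.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        t = np.arange(20, dtype=float)       # quarterly time index (simulated)
        step = (t >= 12).astype(float)       # 1 from the event quarter onward
        rate = 50 + 0.3 * t - 2.0 * step + rng.normal(0, 1, t.size)

        # Level-change model: rate ~ const + time + step; a trend-change term
        # (t - 12) * step could be added to test for a slope change as well.
        X = sm.add_constant(np.column_stack([t, step]))
        print(sm.OLS(rate, X).fit().summary())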

  2. Building Roof Segmentation from Aerial Images Using a Line- and Region-Based Watershed Segmentation Technique

    PubMed Central

    Merabet, Youssef El; Meurie, Cyril; Ruichek, Yassine; Sbihi, Abderrahmane; Touahni, Raja

    2015-01-01

    In this paper, we present a novel strategy for roof segmentation from aerial images (orthophotoplans) based on the cooperation of edge- and region-based segmentation methods. The proposed strategy is composed of three major steps. The first one, called the pre-processing step, consists of simplifying the acquired image with an appropriate pair of invariant and gradient operators, optimized for the application, in order to limit illumination changes (shadows, brightness, etc.) affecting the images. The second step is composed of two main parallel treatments: on the one hand, the simplified image is segmented by watershed regions. Even if the first segmentation of this step provides good results in general, the image is often over-segmented. To alleviate this problem, an efficient region merging strategy adapted to the orthophotoplan particularities, with a 2D modeling of roof ridges technique, is applied. On the other hand, the simplified image is segmented by watershed lines. The third step consists of integrating both watershed segmentation strategies into a single cooperative segmentation scheme in order to achieve satisfactory segmentation results. Tests have been performed on orthophotoplans containing 100 roofs of varying complexity, and the results are evaluated with the VINET criterion using ground-truth image segmentation. A comparison with five popular segmentation techniques of the literature demonstrates the effectiveness and the reliability of the proposed approach. Indeed, we obtain a good segmentation rate of 96% with the proposed method compared to 87.5% with statistical region merging (SRM), 84% with mean shift, 82% with color structure code (CSC), 80% with efficient graph-based segmentation algorithm (EGBIS) and 71% with JSEG. PMID:25648706

  3. Automatic media-adventitia IVUS image segmentation based on sparse representation framework and dynamic directional active contour model.

    PubMed

    Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye

    2017-10-01

    Segmentation of the arterial wall boundaries from intravascular ultrasound images is an important image processing task for quantifying arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection using a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework, combined with the dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. 3D statistical shape models incorporating 3D random forest regression voting for robust CT liver segmentation

    NASA Astrophysics Data System (ADS)

    Norajitra, Tobias; Meinzer, Hans-Peter; Maier-Hein, Klaus H.

    2015-03-01

    During image segmentation, 3D Statistical Shape Models (SSM) usually conduct a limited search for target landmarks within one-dimensional search profiles perpendicular to the model surface. In addition, landmark appearance is modeled only locally based on linear profiles and weak learners, altogether leading to segmentation errors from landmark ambiguities and limited search coverage. We present a new method for 3D SSM segmentation based on 3D Random Forest Regression Voting. For each surface landmark, a Random Regression Forest is trained that learns a 3D spatial displacement function between the according reference landmark and a set of surrounding sample points, based on an infinite set of non-local randomized 3D Haar-like features. Landmark search is then conducted omni-directionally within 3D search spaces, where voxelwise forest predictions on landmark position contribute to a common voting map which reflects the overall position estimate. Segmentation experiments were conducted on a set of 45 CT volumes of the human liver, of which 40 images were randomly chosen for training and 5 for testing. Without parameter optimization, using a simple candidate selection and a single resolution approach, excellent results were achieved, while faster convergence and better concavity segmentation were observed, altogether underlining the potential of our approach in terms of increased robustness from distinct landmark detection and from better search coverage.
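
    A sketch of the regression-voting idea with scikit-learn, using random placeholders for the Haar-like features and displacements; the actual feature extraction and voxelwise voting-map accumulation are considerably more involved.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(1)
        # Training: feature vector of each sample point -> its known 3D
        # displacement to the reference landmark (random placeholders here).
        features = rng.normal(size=(500, 64))
        displacements = rng.normal(size=(500, 3))
        forest = RandomForestRegressor(n_estimators=50, random_state=0)
        forest.fit(features, displacements)

        # Testing: every sample point casts a vote at its own position plus
        # the predicted displacement; votes concentrate near the landmark.
        positions = rng.uniform(0, 100, size=(200, 3))
        votes = positions + forest.predict(rng.normal(size=(200, 64)))
        landmark_estimate = np.median(votes, axis=0)   # stand-in for a voting-map peak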

  5. Laser speckle imaging for lesion detection on tooth

    NASA Astrophysics Data System (ADS)

    Gavinho, Luciano G.; Silva, João. V. P.; Damazio, João. H.; Sfalcin, Ravana A.; Araujo, Sidnei A.; Pinto, Marcelo M.; Olivan, Silvia R. G.; Prates, Renato A.; Bussadori, Sandra K.; Deana, Alessandro M.

    2018-02-01

    Computer vision technologies for diagnostic imaging applied to oral lesions, specifically carious lesions of the teeth, are in their early years of development. The relevance of this public health problem, dental caries, worries countries around the world, as it affects almost the entire population at least once in the life of each individual. The present work demonstrates current techniques for obtaining information about lesions on teeth by segmenting laser speckle images (LSI). A laser speckle image results from the reflection of laser light on a rough surface; once considered noise, it has important features that carry information about the illuminated surface. Even though these are basic images, only a few works have analyzed them through computer vision methods. In this article, we present the latest results of our group, in which computer vision techniques were adapted to segment laser speckle images for diagnostic purposes. These methods are applied to the segmentation of images into healthy and lesioned regions of the tooth, and have proven effective in the diagnosis of early-stage lesions that are often imperceptible to traditional diagnostic methods in clinical practice. The first method uses first-order statistical models, segmenting the image by comparing the mean and standard deviation of the intensity of the pixels. The second method is based on the chi-square (χ²) distance between the histograms of the image, bringing a significant improvement in the precision of the diagnosis, while a third method introduces fractal geometry, exposing through the fractal dimension the difference between lesioned and healthy areas of a tooth more precisely than the other segmentation methods. So far, we observe efficient segmentation of the carious regions. Software was developed to execute and demonstrate the applicability of the models.
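
    A sketch of the chi-square histogram distance used in the second method; the normalization and smoothing constant are illustrative choices.

        import numpy as np

        def chi_square_distance(h1, h2, eps=1e-10):
            # 0.5 * sum((h1 - h2)^2 / (h1 + h2)) over normalized histograms.
            h1 = np.asarray(h1, dtype=float)
            h2 = np.asarray(h2, dtype=float)
            h1, h2 = h1 / h1.sum(), h2 / h2.sum()
            return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

        # Usage: compare intensity histograms of two speckle-image patches, e.g.
        # chi_square_distance(np.histogram(patch_a, 64)[0], np.histogram(patch_b, 64)[0])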

  6. Performance comparison of deep learning and segmentation-based radiomic methods in the task of distinguishing benign and malignant breast lesions on DCE-MRI

    NASA Astrophysics Data System (ADS)

    Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen

    2017-03-01

    Intuitive segmentation-based CADx/radiomic features, calculated from the lesion segmentations of dynamic contrast-enhanced magnetic resonance images (DCE-MRIs), have been utilized in the task of distinguishing between malignant and benign lesions. Additionally, transfer learning with pre-trained deep convolutional neural networks (CNNs) allows for an alternative method of radiomics extraction, where the features are derived directly from the image data. However, the comparison of computer-extracted segmentation-based and CNN features in MRI breast lesion characterization has not yet been conducted. In our study, we used a DCE-MRI database of 640 breast cases: 191 benign and 449 malignant. Thirty-eight segmentation-based features were extracted automatically using our quantitative radiomics workstation. Also, 2D ROIs were selected around each lesion on the DCE-MRIs and directly input into a pre-trained CNN, AlexNet, yielding CNN features. Each method was investigated separately and in combination in terms of performance in the task of distinguishing between benign and malignant lesions. Area under the ROC curve (AUC) served as the figure of merit. Both methods yielded promising classification performance with round-robin cross-validated AUC values of 0.88 (se = 0.01) and 0.76 (se = 0.02) for the segmentation-based and deep learning methods, respectively. Combining the two methods enhanced performance in malignancy assessment, resulting in an AUC value of 0.91 (se = 0.01), a statistically significant improvement over the performance of the CNN method alone.
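
    A sketch of CNN feature extraction from a pre-trained AlexNet with torchvision; taking the fc7 activation as the feature vector is a common convention and an assumption here, not necessarily the study's exact layer.

        import torch
        from torchvision.models import alexnet, AlexNet_Weights

        weights = AlexNet_Weights.DEFAULT
        model = alexnet(weights=weights).eval()
        preprocess = weights.transforms()            # AlexNet's resize/normalize

        def cnn_features(roi_image):
            # roi_image: a PIL image of the 2D lesion ROI.
            x = preprocess(roi_image).unsqueeze(0)   # add a batch dimension
            with torch.no_grad():
                z = model.features(x)                # convolutional stages
                z = model.avgpool(z).flatten(1)
                z = model.classifier[:5](z)          # stop at the fc7 linear layer
            return z.squeeze(0)                      # 4096-dimensional feature vector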

  7. Automatic initialization and quality control of large-scale cardiac MRI segmentations.

    PubMed

    Albà, Xènia; Lekadir, Karim; Pereañez, Marco; Medrano-Gracia, Pau; Young, Alistair A; Frangi, Alejandro F

    2018-01-01

    Continuous advances in imaging technologies enable ever more comprehensive phenotyping of human anatomy and physiology. Concomitant reduction of imaging costs has resulted in widespread use of imaging in large clinical trials and population imaging studies. Magnetic Resonance Imaging (MRI), in particular, offers one-stop-shop multidimensional biomarkers of cardiovascular physiology and pathology. A wide range of analysis methods offer sophisticated cardiac image assessment and quantification for clinical and research studies. However, most methods have only been evaluated on relatively small databases often not accessible for open and fair benchmarking. Consequently, published performance indices are not directly comparable across studies and their translation and scalability to large clinical trials or population imaging cohorts is uncertain. Most existing techniques still rely on considerable manual intervention for the initialization and quality control of the segmentation process, becoming prohibitive when dealing with thousands of images. The contributions of this paper are three-fold. First, we propose a fully automatic method for initializing cardiac MRI segmentation, by using image features and random forests regression to predict an initial position of the heart and key anatomical landmarks in an MRI volume. In processing a full imaging database, the technique predicts the optimal corrective displacements and positions in relation to the initial rough intersections of the long and short axis images. Second, we introduce for the first time a quality control measure capable of identifying incorrect cardiac segmentations with no visual assessment. The method uses statistical, pattern and fractal descriptors in a random forest classifier to detect failures to be corrected or removed from subsequent statistical analysis. Finally, we validate these new techniques within a full pipeline for cardiac segmentation applicable to large-scale cardiac MRI databases. The results obtained based on over 1200 cases from the Cardiac Atlas Project show the promise of fully automatic initialization and quality control for population studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Segmentation of fluorescence microscopy images for quantitative analysis of cell nuclear architecture.

    PubMed

    Russell, Richard A; Adams, Niall M; Stephens, David A; Batty, Elizabeth; Jensen, Kirsten; Freemont, Paul S

    2009-04-22

    Considerable advances in microscopy, biophysics, and cell biology have provided a wealth of imaging data describing the functional organization of the cell nucleus. Until recently, cell nuclear architecture has largely been assessed by subjective visual inspection of fluorescently labeled components imaged by the optical microscope. This approach is inadequate to fully quantify spatial associations, especially when the patterns are indistinct, irregular, or highly punctate. Accurate image processing techniques as well as statistical and computational tools are thus necessary to interpret this data if meaningful spatial-function relationships are to be established. Here, we have developed a thresholding algorithm, stable count thresholding (SCT), to segment nuclear compartments in confocal laser scanning microscopy image stacks to facilitate objective and quantitative analysis of the three-dimensional organization of these objects using formal statistical methods. We validate the efficacy and performance of the SCT algorithm using real images of immunofluorescently stained nuclear compartments and fluorescent beads as well as simulated images. In all three cases, the SCT algorithm delivers a segmentation that is far better than standard thresholding methods, and more importantly, is comparable to manual thresholding results. By applying the SCT algorithm and statistical analysis, we quantify the spatial configuration of promyelocytic leukemia nuclear bodies with respect to irregular-shaped SC35 domains. We show that the compartments are closer than expected under a null model for their spatial point distribution, and furthermore that their spatial association varies according to cell state. The methods reported are general and can readily be applied to quantify the spatial interactions of other nuclear compartments.
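
    A sketch of the stable-count idea behind SCT: scan candidate thresholds, count connected objects at each, and pick a threshold from the widest plateau where the count is stable. The plateau criterion and level grid are simplifications of the published algorithm.

        import numpy as np
        from scipy import ndimage

        def stable_count_threshold(image, n_levels=100):
            levels = np.linspace(image.min(), image.max(), n_levels)[1:-1]
            counts = np.array([ndimage.label(image > t)[1] for t in levels])
            # Find the longest run of identical object counts (the stable plateau).
            best_len, best_start, start = 0, 0, 0
            for i in range(1, len(counts) + 1):
                if i == len(counts) or counts[i] != counts[start]:
                    if i - start > best_len:
                        best_len, best_start = i - start, start
                    start = i
            return levels[best_start + best_len // 2]   # middle of the widest plateau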

  9. Segmentation of Fluorescence Microscopy Images for Quantitative Analysis of Cell Nuclear Architecture

    PubMed Central

    Russell, Richard A.; Adams, Niall M.; Stephens, David A.; Batty, Elizabeth; Jensen, Kirsten; Freemont, Paul S.

    2009-01-01

    Considerable advances in microscopy, biophysics, and cell biology have provided a wealth of imaging data describing the functional organization of the cell nucleus. Until recently, cell nuclear architecture has largely been assessed by subjective visual inspection of fluorescently labeled components imaged by the optical microscope. This approach is inadequate to fully quantify spatial associations, especially when the patterns are indistinct, irregular, or highly punctate. Accurate image processing techniques as well as statistical and computational tools are thus necessary to interpret this data if meaningful spatial-function relationships are to be established. Here, we have developed a thresholding algorithm, stable count thresholding (SCT), to segment nuclear compartments in confocal laser scanning microscopy image stacks to facilitate objective and quantitative analysis of the three-dimensional organization of these objects using formal statistical methods. We validate the efficacy and performance of the SCT algorithm using real images of immunofluorescently stained nuclear compartments and fluorescent beads as well as simulated images. In all three cases, the SCT algorithm delivers a segmentation that is far better than standard thresholding methods, and more importantly, is comparable to manual thresholding results. By applying the SCT algorithm and statistical analysis, we quantify the spatial configuration of promyelocytic leukemia nuclear bodies with respect to irregular-shaped SC35 domains. We show that the compartments are closer than expected under a null model for their spatial point distribution, and furthermore that their spatial association varies according to cell state. The methods reported are general and can readily be applied to quantify the spatial interactions of other nuclear compartments. PMID:19383481

  10. Selecting salient frames for spatiotemporal video modeling and segmentation.

    PubMed

    Song, Xiaomu; Fan, Guoliang

    2007-12-01

    We propose a new statistical generative model for spatiotemporal video segmentation. The objective is to partition a video sequence into homogeneous segments that can be used as "building blocks" for semantic video segmentation. The baseline framework is a Gaussian mixture model (GMM)-based video modeling approach that involves a six-dimensional spatiotemporal feature space. Specifically, we introduce the concept of frame saliency to quantify the relevancy of a video frame to the GMM-based spatiotemporal video modeling. This helps us use a small set of salient frames to facilitate the model training by reducing data redundancy and irrelevance. A modified expectation maximization algorithm is developed for simultaneous GMM training and frame saliency estimation, and the frames with the highest saliency values are extracted to refine the GMM estimation for video segmentation. Moreover, it is interesting to find that frame saliency can imply some object behaviors. This makes the proposed method also applicable to other frame-related video analysis tasks, such as key-frame extraction, video skimming, etc. Experiments on real videos demonstrate the effectiveness and efficiency of the proposed method.

  11. A graph-based watershed merging using fuzzy C-means and simulated annealing for image segmentation

    NASA Astrophysics Data System (ADS)

    Vadiveloo, Mogana; Abdullah, Rosni; Rajeswari, Mandava

    2015-12-01

    In this paper, we have addressed the issue of the over-segmented regions produced by watershed by merging the regions using global feature information. The global feature information is obtained by clustering the image in its feature space using Fuzzy C-Means (FCM) clustering. The over-segmented regions produced by performing watershed on the gradient of the image are then mapped to this global information in the feature space. Further, the global feature information is optimized using Simulated Annealing (SA). The optimal global feature information is used to derive the similarity criterion to merge the over-segmented watershed regions, which are represented by a region adjacency graph (RAG). The proposed method has been tested on a digital brain phantom simulated dataset to segment white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) soft tissue regions. The experiments showed that the proposed method performs statistically better than immersion watershed, with an average of 95.242% of regions merged, and yields an average accuracy improvement of 8.850% in comparison with RAG-based immersion watershed merging using global and local features.

  12. Using data mining to segment healthcare markets from patients' preference perspectives.

    PubMed

    Liu, Sandra S; Chen, Jie

    2009-01-01

    This paper aims to provide an example of how to use data mining techniques to identify patient segments regarding preferences for healthcare attributes and their demographic characteristics. Data were derived from a number of individuals who received in-patient care at a health network in 2006. Data mining and conventional hierarchical clustering with average linkage and Pearson correlation procedures are employed and compared to show how each procedure best determines segmentation variables. Data mining tools identified three differentiable segments by means of cluster analysis. These three clusters have significantly different demographic profiles. The study reveals, when compared with traditional statistical methods, that data mining provides an efficient and effective tool for market segmentation. When there are numerous cluster variables involved, researchers and practitioners need to incorporate factor analysis for reducing variables to clearly and meaningfully understand clusters. Interests and applications in data mining are increasing in many businesses. However, this technology is seldom applied to healthcare customer experience management. The paper shows that efficient and effective application of data mining methods can aid the understanding of patient healthcare preferences.

  13. Segmentation of the Aortic Valve Apparatus in 3D Echocardiographic Images: Deformable Modeling of a Branching Medial Structure

    PubMed Central

    Pouch, Alison M.; Tian, Sijie; Takabe, Manabu; Wang, Hongzhi; Yuan, Jiefu; Cheung, Albert T.; Jackson, Benjamin M.; Gorman, Joseph H.; Gorman, Robert C.; Yushkevich, Paul A.

    2015-01-01

    3D echocardiographic (3DE) imaging is a useful tool for assessing the complex geometry of the aortic valve apparatus. Segmentation of this structure in 3DE images is a challenging task that benefits from shape-guided deformable modeling methods, which enable inter-subject statistical shape comparison. Prior work demonstrates the efficacy of using continuous medial representation (cm-rep) as a shape descriptor for valve leaflets. However, its application to the entire aortic valve apparatus is limited since the structure has a branching medial geometry that cannot be explicitly parameterized in the original cm-rep framework. In this work, we show that the aortic valve apparatus can be accurately segmented using a new branching medial modeling paradigm. The segmentation method achieves a mean boundary displacement of 0.6 ± 0.1 mm (approximately one voxel) relative to manual segmentation on 11 3DE images of normal open aortic valves. This study demonstrates a promising approach for quantitative 3DE analysis of aortic valve morphology. PMID:26247062

  14. Neural and Decision Theoretic Approaches for the Automated Segmentation of Radiodense Tissue in Digitized Mammograms

    NASA Astrophysics Data System (ADS)

    Eckert, R.; Neyhart, J. T.; Burd, L.; Polikar, R.; Mandayam, S. A.; Tseng, M.

    2003-03-01

    Mammography is the best method available as a non-invasive technique for the early detection of breast cancer. The radiographic appearance of the female breast consists of radiolucent (dark) regions due to fat and radiodense (light) regions due to connective and epithelial tissue. The amount of radiodense tissue can be used as a marker for predicting breast cancer risk. Previously, we have shown that the use of statistical models is a reliable technique for segmenting radiodense tissue. This paper presents improvements in the model that allow for further development of an automated system for segmentation of radiodense tissue. The segmentation algorithm employs a two-step process. In the first step, segmentation of tissue and non-tissue regions of a digitized X-ray mammogram image are identified using a radial basis function neural network. The second step uses a constrained Neyman-Pearson algorithm, developed especially for this research work, to determine the amount of radiodense tissue. Results obtained using the algorithm have been validated by comparing with estimates provided by a radiologist employing previously established methods.

  15. Segmentation algorithm for non-stationary compound Poisson processes. With an application to inventory time series of market members in a financial market

    NASA Astrophysics Data System (ADS)

    Tóth, B.; Lillo, F.; Farmer, J. D.

    2010-11-01

    We introduce an algorithm for the segmentation of a class of regime switching processes. The segmentation algorithm is a non-parametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length. In each patch the process is described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime switching models of compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange and we observe that our method finds almost three times as many patches as the original one.
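
    A sketch of the recursive t-statistic segmentation underlying the Bernaola-Galván approach, here on a plain scalar series with a fixed significance cutoff rather than the paper's compound-Poisson extension:

        import numpy as np
        from scipy import stats

        def best_split(x, min_seg=10):
            # Index maximizing Student's t between left and right segment means.
            best_t, best_i = 0.0, None
            for i in range(min_seg, len(x) - min_seg):
                t = abs(stats.ttest_ind(x[:i], x[i:], equal_var=False).statistic)
                if t > best_t:
                    best_t, best_i = t, i
            return best_i, best_t

        def segment(x, min_seg=10, t_crit=4.0, offset=0, cuts=None):
            # Recursively cut while the strongest split exceeds the cutoff.
            cuts = [] if cuts is None else cuts
            if len(x) >= 2 * min_seg:
                i, t = best_split(x, min_seg)
                if i is not None and t > t_crit:
                    cuts.append(offset + i)
                    segment(x[:i], min_seg, t_crit, offset, cuts)
                    segment(x[i:], min_seg, t_crit, offset + i, cuts)
            return sorted(cuts)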

  16. A Character Level Based and Word Level Based Approach for Chinese-Vietnamese Machine Translation

    PubMed Central

    2016-01-01

    Chinese and Vietnamese are both isolating languages whose written words are not delimited by spaces. In machine translation, word segmentation is often done first when translating from Chinese or Vietnamese into other languages (typically English) and vice versa. However, whether words should be segmented at all is an open question when translating between two languages in which spaces are not used between words, such as Chinese and Vietnamese. Since Chinese-Vietnamese is a low-resource language pair, the sparse data problem is evident in the translation system of this language pair, which makes the segmentation decision even more important. In this paper, we propose a new method for translating Chinese to Vietnamese based on a combination of the advantages of character-level and word-level translation. A hybrid approach that combines statistics and rules is used to translate at the word level, while at the character level a statistical translation is used. The experimental results showed that our method improved the performance of machine translation over that of character- or word-level translation alone. PMID:27446207

  17. Speech segmentation in aphasia

    PubMed Central

    Peñaloza, Claudia; Benetello, Annalisa; Tuomiranta, Leena; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria Carmen; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni

    2017-01-01

    Background Speech segmentation is one of the initial and mandatory phases of language learning. Although some people with aphasia have shown a preserved ability to learn novel words, their speech segmentation abilities have not been explored. Aims We examined the ability of individuals with chronic aphasia to segment words from running speech via statistical learning. We also explored the relationships between speech segmentation and aphasia severity, and short-term memory capacity. We further examined the role of lesion location in speech segmentation and short-term memory performance. Methods & Procedures The experimental task was first validated with a group of young adults (n = 120). Participants with chronic aphasia (n = 14) were exposed to an artificial language and were evaluated in their ability to segment words using a speech segmentation test. Their performance was contrasted against chance level and compared to that of a group of elderly matched controls (n = 14) using group and case-by-case analyses. Outcomes & Results As a group, participants with aphasia were significantly above chance level in their ability to segment words from the novel language and did not significantly differ from the group of elderly controls. Speech segmentation ability in the aphasic participants was not associated with aphasia severity although it significantly correlated with word pointing span, a measure of verbal short-term memory. Case-by-case analyses identified four individuals with aphasia who performed above chance level on the speech segmentation task, all with predominantly posterior lesions and mild fluent aphasia. Their short-term memory capacity was also better preserved than in the rest of the group. Conclusions Our findings indicate that speech segmentation via statistical learning can remain functional in people with chronic aphasia and suggest that this initial language learning mechanism is associated with the functionality of the verbal short-term memory system and the integrity of the left inferior frontal region. PMID:28824218
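
    The statistical-learning cue for segmentation is the transitional probability between adjacent syllables; a toy sketch with two invented "words" (tu-pi-ro and go-la-bu), not the study's artificial language:

        from collections import Counter

        def transitional_probabilities(syllables):
            # P(next | current) for adjacent syllable pairs; dips in these
            # values are taken as candidate word boundaries.
            pairs = list(zip(syllables, syllables[1:]))
            pair_n = Counter(pairs)
            first_n = Counter(s for s, _ in pairs)
            return {p: pair_n[p] / first_n[p[0]] for p in pair_n}

        stream = "tu pi ro go la bu go la bu tu pi ro tu pi ro go la bu".split()
        for (a, b), tp in sorted(transitional_probabilities(stream).items()):
            print(f"{a} -> {b}: {tp:.2f}")   # within-word pairs score 1.0 here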

  18. Automatic segmentation of the facial nerve and chorda tympani using image registration and statistical priors

    NASA Astrophysics Data System (ADS)

    Noble, Jack H.; Warren, Frank M.; Labadie, Robert F.; Dawant, Benoit M.

    2008-03-01

    In cochlear implant surgery, an electrode array is permanently implanted in the cochlea to stimulate the auditory nerve and allow deaf people to hear. A minimally invasive surgical technique has recently been proposed--percutaneous cochlear access--in which a single hole is drilled from the skull surface to the cochlea. For the method to be feasible, a safe and effective drilling trajectory must be determined using a pre-operative CT. Segmentation of the structures of the ear would improve trajectory planning safety and efficiency and enable the possibility of automated planning. Two important structures of the ear, the facial nerve and chorda tympani, present difficulties in intensity based segmentation due to their diameter (as small as 1.0 and 0.4 mm) and adjacent inter-patient variable structures of similar intensity in CT imagery. A multipart, model-based segmentation algorithm is presented in this paper that accomplishes automatic segmentation of the facial nerve and chorda tympani. Segmentation results are presented for 14 test ears and are compared to manually segmented surfaces. The results show that mean error in structure wall localization is 0.2 and 0.3 mm for the facial nerve and chorda, proving the method we propose is robust and accurate.

  19. Automatic segmentation of right ventricle on ultrasound images using sparse matrix transform and level set

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Cong, Zhibin; Halig, Luma V.; Fei, Baowei

    2013-03-01

    An automatic framework is proposed to segment the right ventricle on ultrasound images. This method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform (SMT), a training model, and a localized region-based level set. First, the sparse matrix transform extracts main motion regions of the myocardium as eigenimages by analyzing the statistical information of these images. Second, a training model of the right ventricle is registered to the extracted eigenimages in order to automatically detect the main location of the right ventricle and the corresponding transform relationship between the training model and the SMT-extracted results in the series. Third, the training model is then adjusted as an adapted initialization for the segmentation of each image in the series. Finally, based on the adapted initializations, a localized region-based level set algorithm is applied to segment both epicardial and endocardial boundaries of the right ventricle from the whole series. Experimental results from real subject data validated the performance of the proposed framework in segmenting the right ventricle from echocardiography. The mean Dice scores for the epicardial and endocardial boundaries are 89.1% ± 2.3% and 83.6% ± 7.3%, respectively. The automatic segmentation method based on sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.

  20. Segmentation of White Blood Cells From Microscopic Images Using a Novel Combination of K-Means Clustering and Modified Watershed Algorithm.

    PubMed

    Ghane, Narjes; Vard, Alireza; Talebi, Ardeshir; Nematollahy, Pardis

    2017-01-01

    Recognition of white blood cells (WBCs) is the first step in diagnosing particular diseases such as acquired immune deficiency syndrome, leukemia, and other blood-related diseases; it is usually done by pathologists using an optical microscope. This process is time-consuming, extremely tedious, and expensive, and requires experienced experts in the field. Thus, a computer-aided diagnosis system that assists pathologists in the diagnostic process can be highly effective. Segmentation of WBCs is usually the first step in developing such a system. The main purpose of this paper is to segment WBCs from microscopic images. For this purpose, we present a novel combination of thresholding, k-means clustering, and modified watershed algorithms in three stages: (1) segmentation of WBCs from a microscopic image, (2) extraction of nuclei from the cell images, and (3) separation of overlapping cells and nuclei. The evaluation results of the proposed method show that the similarity measure, precision, and sensitivity were 92.07%, 96.07%, and 94.30%, respectively, for nucleus segmentation and 92.93%, 97.41%, and 93.78% for cell segmentation. In addition, statistical analysis shows high similarity between manual segmentation and the results obtained by the proposed method.
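
    The staged combination described above can be approximated with standard tooling. The sketch below is a rough illustration rather than the authors' implementation: pixel colours are clustered with k-means, the darkest cluster is assumed to contain the stained nuclei (an assumption that depends on the stain used), and touching objects are split with a distance-transform watershed:

        import numpy as np
        from scipy import ndimage as ndi
        from sklearn.cluster import KMeans
        from skimage.feature import peak_local_max   # returns peak coordinates (scikit-image >= 0.18)
        from skimage.segmentation import watershed

        def segment_wbc(rgb):
            """Cluster pixel colours, keep the darkest cluster as candidate nuclei,
            then split touching objects with a distance-transform watershed."""
            h, w, _ = rgb.shape
            km = KMeans(n_clusters=3, n_init=10).fit(rgb.reshape(-1, 3).astype(float))
            darkest = int(np.argmin(km.cluster_centers_.sum(axis=1)))  # assumed nuclei cluster
            mask = (km.labels_ == darkest).reshape(h, w)
            distance = ndi.distance_transform_edt(mask)
            coords = peak_local_max(distance, min_distance=10, labels=mask.astype(int))
            markers = np.zeros(mask.shape, dtype=int)
            markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
            return watershed(-distance, markers, mask=mask)  # labelled cells/nuclei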

  1. Automatic segmentation of the facial nerve and chorda tympani in pediatric CT scans.

    PubMed

    Reda, Fitsum A; Noble, Jack H; Rivas, Alejandro; McRackan, Theodore R; Labadie, Robert F; Dawant, Benoit M

    2011-10-01

    Cochlear implant surgery is used to implant an electrode array in the cochlea to treat hearing loss. The authors recently introduced a minimally invasive image-guided technique termed percutaneous cochlear implantation. This approach achieves access to the cochlea by drilling a single linear channel from the outer skull into the cochlea via the facial recess, a region bounded by the facial nerve and chorda tympani. To exploit existing methods for computing automatically safe drilling trajectories, the facial nerve and chorda tympani need to be segmented. The goal of this work is to automatically segment the facial nerve and chorda tympani in pediatric CT scans. The authors have proposed an automatic technique to achieve the segmentation task in adult patients that relies on statistical models of the structures. These models contain intensity and shape information along the central axes of both structures. In this work, the authors attempted to use the same method to segment the structures in pediatric scans. However, the authors learned that substantial differences exist between the anatomy of children and that of adults, which led to poor segmentation results when an adult model is used to segment a pediatric volume. Therefore, the authors built a new model for pediatric cases and used it to segment pediatric scans. Once this new model was built, the authors employed the same segmentation method used for adults with algorithm parameters that were optimized for pediatric anatomy. A validation experiment was conducted on 10 CT scans in which manually segmented structures were compared to automatically segmented structures. The mean, standard deviation, median, and maximum segmentation errors were 0.23, 0.17, 0.18, and 1.27 mm, respectively. The results indicate that accurate segmentation of the facial nerve and chorda tympani in pediatric scans is achievable, thus suggesting that safe drilling trajectories can also be computed automatically.

  2. Perception-based road hazard identification with Internet support.

    PubMed

    Tarko, Andrew P; DeSalle, Brian R

    2003-01-01

    One of the most important tasks faced by highway agencies is identifying road hazards. Agencies use crash statistics to detect road intersections and segments where the frequency of crashes is excessive. With the crash-based method, a dangerous intersection or segment can be pointed out only after a sufficient number of crashes occur. A more proactive method is needed, and motorist complaints may be able to assist agencies in detecting road hazards before crashes occur. This paper investigates the quality of safety information reported by motorists and the effectiveness of hazard identification based on motorist reports, which were collected with an experimental Internet website. It demonstrates that the intersections pointed out by motorists tended to have more crashes than other intersections. The safety information collected through the website was comparable to 2-3 months of crash data. It was concluded that although the Internet-based method could not substitute for the traditional crash-based methods, its joint use with crash statistics might be useful in detecting new hazards where crash data had been collected for a short time.

  3. PyClone: statistical inference of clonal population structure in cancer.

    PubMed

    Roth, Andrew; Khattra, Jaswinder; Yap, Damian; Wan, Adrian; Laks, Emma; Biele, Justina; Ha, Gavin; Aparicio, Samuel; Bouchard-Côté, Alexandre; Shah, Sohrab P

    2014-04-01

    We introduce PyClone, a statistical model for inference of clonal population structures in cancers. PyClone is a Bayesian clustering method for grouping sets of deeply sequenced somatic mutations into putative clonal clusters while estimating their cellular prevalences and accounting for allelic imbalances introduced by segmental copy-number changes and normal-cell contamination. Single-cell sequencing validation demonstrates PyClone's accuracy.

  4. [Application of a mathematical algorithm for the detection of electroneuromyographic results in the pathogenesis study of facial dyskinesia].

    PubMed

    Gribova, N P; Iudel'son, Ia B; Golubev, V L; Abramenkova, I V

    2003-01-01

    To support differential diagnosis of two facial dyskinesia (FD) models--facial hemispasm (FH) and facial paraspasm (FP)--a combined program of electroneuromyographic (ENMG) examination was created, using statistical analyses that included object identification with a hybrid neural network applying an adaptive fuzzy logic method, as well as standard statistical tests (Wilcoxon, Student). In FH, a lesion of the peripheral facial neuromotor apparatus predominated, with augmented interneuron function at segmental and suprasegmental brainstem levels. In FP, primary afferent strengthening in the mimic muscles was accompanied by increased motor neuron activity and reciprocal augmentation of the interneurons inhibiting the motor portion of the fifth cranial nerve. The mathematical algorithm for ENMG result recognition worked out in the study provides precise differentiation of the two FD models and opens possibilities for differential diagnosis of other facial motor disorders.

  5. Reproducibility of Lobar Perfusion and Ventilation Quantification Using SPECT/CT Segmentation Software in Lung Cancer Patients.

    PubMed

    Provost, Karine; Leblond, Antoine; Gauthier-Lemire, Annie; Filion, Édith; Bahig, Houda; Lord, Martin

    2017-09-01

    Planar perfusion scintigraphy with 99mTc-labeled macroaggregated albumin is often used for pretherapy quantification of regional lung perfusion in lung cancer patients, particularly those with poor respiratory function. However, subdividing lung parenchyma into rectangular regions of interest, as done on planar images, is a poor reflection of true lobar anatomy. New tridimensional methods using SPECT and SPECT/CT have been introduced, including semiautomatic lung segmentation software. The present study evaluated inter- and intraobserver agreement on quantification using SPECT/CT software and compared the results for regional lung contribution obtained with SPECT/CT and planar scintigraphy. Methods: Thirty lung cancer patients underwent ventilation-perfusion scintigraphy with 99mTc-macroaggregated albumin and 99mTc-Technegas. The regional lung contribution to perfusion and ventilation was measured on both planar scintigraphy and SPECT/CT using semiautomatic lung segmentation software by 2 observers. Interobserver and intraobserver agreement for the SPECT/CT software was assessed using the intraclass correlation coefficient, Bland-Altman plots, and absolute differences in measurements. Measurements from planar and tridimensional methods were compared using the paired-sample t test and mean absolute differences. Results: Intraclass correlation coefficients were in the excellent range (above 0.9) for both interobserver and intraobserver agreement using the SPECT/CT software. Bland-Altman analyses showed very narrow limits of agreement. Absolute differences were below 2.0% in 96% of both interobserver and intraobserver measurements. There was a statistically significant difference between planar and SPECT/CT methods (P < 0.001) for quantification of perfusion and ventilation for all right lung lobes, with a maximal mean absolute difference of 20.7% for the right middle lobe. There was no statistically significant difference in quantification of perfusion and ventilation for the left lung lobes using either method; however, absolute differences reached 12.0%. The total right and left lung contributions were similar for the two methods, with a mean difference of 1.2% for perfusion and 2.0% for ventilation. Conclusion: Quantification of regional lung perfusion and ventilation using SPECT/CT-based lung segmentation software is highly reproducible. This tridimensional method yields statistically significant differences in measurements for right lung lobes when compared with planar scintigraphy. We recommend that SPECT/CT-based quantification be used for all lung cancer patients undergoing pretherapy evaluation of regional lung function. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  6. A computerized MRI biomarker quantification scheme for a canine model of Duchenne muscular dystrophy.

    PubMed

    Wang, Jiahui; Fan, Zheng; Vandenborne, Krista; Walter, Glenn; Shiloh-Malawsky, Yael; An, Hongyu; Kornegay, Joe N; Styner, Martin A

    2013-09-01

    Golden retriever muscular dystrophy (GRMD) is a widely used canine model of Duchenne muscular dystrophy (DMD). Recent studies have shown that magnetic resonance imaging (MRI) can be used to non-invasively detect consistent changes in both DMD and GRMD. In this paper, we propose a semiautomated system to quantify MRI biomarkers of GRMD. Our system was applied to a database of 45 MRI scans from 8 normal and 10 GRMD dogs in a longitudinal natural history study. We first segmented six proximal pelvic limb muscles using a semiautomated full muscle segmentation method. We then performed preprocessing, including intensity inhomogeneity correction, spatial registration of different image sequences, intensity calibration of T2-weighted and T2-weighted fat-suppressed images, and calculation of MRI biomarker maps. Finally, for each of the segmented muscles, we automatically measured MRI biomarkers of muscle volume, intensity statistics over MRI biomarker maps, and statistical image texture features. The muscle volume and the mean intensities in T2 value, fat, and water maps showed group differences between normal and GRMD dogs. For the statistical texture biomarkers, both the histogram and run-length matrix features showed obvious group differences between normal and GRMD dogs. The full muscle segmentation showed significantly less error and variability in the proposed biomarkers when compared to the standard, limited muscle range segmentation. The experimental results demonstrated that this quantification tool could reliably quantify MRI biomarkers in GRMD dogs, suggesting that it would also be useful for quantifying disease progression and measuring therapeutic effect in DMD patients.

  7. A shape prior-based MRF model for 3D masseter muscle segmentation

    NASA Astrophysics Data System (ADS)

    Majeed, Tahir; Fundana, Ketut; Lüthi, Marcel; Beinemann, Jörg; Cattin, Philippe

    2012-02-01

    Medical image segmentation is generally an ill-posed problem that can only be solved by incorporating prior knowledge. The ambiguities arise due to the presence of noise, weak edges, imaging artifacts, inhomogeneous interiors and adjacent anatomical structures with intensity profiles similar to the target structure. In this paper we propose a novel approach to segment the masseter muscle in CT datasets using graph cuts incorporating additional 3D shape priors, which is robust to noise, artifacts, and shape deformations. The main contribution of this paper is in translating the 3D shape knowledge into both unary and pairwise potentials of the Markov Random Field (MRF). The segmentation task is cast as Maximum-A-Posteriori (MAP) estimation of the MRF. Graph cut is then used to obtain the global minimum, which results in the segmentation of the masseter muscle. The method is tested on 21 CT datasets of the masseter muscle, which are noisy, with almost all exhibiting mild to severe imaging artifacts such as high-density artifacts caused by, for example, common dental fillings and implants. We show that the proposed technique produces clinically acceptable results for the challenging problem of muscle segmentation, and we further provide a quantitative and qualitative comparison with other methods. We statistically show that adding the shape prior to both unary and pairwise potentials increases the robustness of the proposed method on noisy datasets.

  8. Segmentation of radiographic images under topological constraints: application to the femur.

    PubMed

    Gamage, Pavan; Xie, Sheng Quan; Delmas, Patrice; Xu, Wei Liang

    2010-09-01

    A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manual segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions.

  9. Learning across Languages: Bilingual Experience Supports Dual Language Statistical Word Segmentation

    ERIC Educational Resources Information Center

    Antovich, Dylan M.; Graf Estes, Katharine

    2018-01-01

    Bilingual acquisition presents learning challenges beyond those found in monolingual environments, including the need to segment speech in two languages. Infants may use statistical cues, such as syllable-level transitional probabilities, to segment words from fluent speech. In the present study we assessed monolingual and bilingual 14-month-olds'…

  10. Optimal Multiple Surface Segmentation With Shape and Context Priors

    PubMed Central

    Bai, Junjie; Garvin, Mona K.; Sonka, Milan; Buatti, John M.; Wu, Xiaodong

    2014-01-01

    Segmentation of multiple surfaces in medical images is a challenging problem, further complicated by the frequent presence of weak boundary evidence, large object deformations, and mutual influence between adjacent objects. This paper reports a novel approach to multi-object segmentation that incorporates both shape and context prior knowledge in a 3-D graph-theoretic framework to help overcome the stated challenges. We employ an arc-based graph representation to incorporate a wide spectrum of prior information through pair-wise energy terms. In particular, a shape-prior term is used to penalize local shape changes and a context-prior term is used to penalize local surface-distance changes from a model of the expected shape and surface distances, respectively. The globally optimal solution for multiple surfaces is obtained by computing a maximum flow in low-order polynomial time. The proposed method was validated on intraretinal layer segmentation of optical coherence tomography images and demonstrated statistically significant improvement in segmentation accuracy compared to our earlier graph-search method that did not utilize shape and context priors. The mean unsigned surface positioning error obtained by the conventional graph-search approach (6.30 ± 1.58 μm) improved to 5.14 ± 0.99 μm when employing our new method with shape and context priors. PMID:23193309

  11. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    PubMed

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  12. Automatic liver segmentation from abdominal CT volumes using graph cuts and border marching.

    PubMed

    Liao, Miao; Zhao, Yu-Qian; Liu, Xi-Yao; Zeng, Ye-Zhan; Zou, Bei-Ji; Wang, Xiao-Fang; Shih, Frank Y

    2017-05-01

    Identifying liver regions from abdominal computed tomography (CT) volumes is an important task for computer-aided liver disease diagnosis and surgical planning. This paper presents a fully automatic method for liver segmentation from CT volumes based on graph cuts and border marching. An initial slice is segmented by density peak clustering. Based on pixel- and patch-wise features, an intensity model and a PCA-based regional appearance model are developed to enhance the contrast between liver and background. Then, these models as well as the location constraint estimated iteratively are integrated into graph cuts in order to segment the liver in each slice automatically. Finally, a vessel compensation method based on the border marching is used to increase the segmentation accuracy. Experiments are conducted on a clinical data set we created and also on the MICCAI2007 Grand Challenge liver data. The results show that the proposed intensity, appearance models, and the location constraint are significantly effective for liver recognition, and the undersegmented vessels can be compensated by the border marching based method. The segmentation performances in terms of VOE, RVD, ASD, RMSD, and MSD as well as the average running time achieved by our method on the SLIVER07 public database are 5.8 ± 3.2%, -0.1 ± 4.1%, 1.0 ± 0.5mm, 2.0 ± 1.2mm, 21.2 ± 9.3mm, and 4.7 minutes, respectively, which are superior to those of existing methods. The proposed method does not require time-consuming training process and statistical model construction, and is capable of dealing with complicated shapes and intensity variations successfully. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. A review on the multivariate statistical methods for dimensional reduction studies

    NASA Astrophysics Data System (ADS)

    Aik, Lim Eng; Kiang, Lam Chee; Mohamed, Zulkifley Bin; Hong, Tan Wei

    2017-05-01

    In this research study we discuss multivariate statistical methods for dimensionality reduction developed by various researchers. Dimensionality reduction is valuable for accelerating algorithm convergence and may also improve final classification or clustering accuracy, since noisy or erroneous input data often leads to poor algorithm performance. Removing uninformative or misleading data components can help an algorithm discover more general grouping regions and rules and, overall, achieve better performance on new datasets.
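
    As one concrete instance of the reviewed family of methods, the following is a minimal principal component analysis (PCA) sketch using scikit-learn on synthetic data; the array shapes and the 95% variance threshold are illustrative choices, not values from the review:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 50))      # 200 samples with 50 (noisy) features
        pca = PCA(n_components=0.95)        # keep enough components for 95% of the variance
        X_reduced = pca.fit_transform(X)
        print(X_reduced.shape, round(pca.explained_variance_ratio_.sum(), 3))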

  14. A fully automatic approach for multimodal PET and MR image segmentation in gamma knife treatment planning.

    PubMed

    Rundo, Leonardo; Stefano, Alessandro; Militello, Carmelo; Russo, Giorgio; Sabini, Maria Gabriella; D'Arrigo, Corrado; Marletta, Francesco; Ippolito, Massimo; Mauri, Giancarlo; Vitabile, Salvatore; Gilardi, Maria Carla

    2017-06-01

    Nowadays, clinical practice in Gamma Knife treatments is generally based on MRI anatomical information alone. However, the joint use of MRI and PET images can be useful for considering both anatomical and metabolic information about the lesion to be treated. In this paper we present a co-segmentation method to integrate the segmented Biological Target Volume (BTV), using [11C]-methionine PET (MET-PET) images, and the segmented Gross Target Volume (GTV), on the respective co-registered MR images. The resulting volume gives enhanced brain tumor information to be used in stereotactic neuro-radiosurgery treatment planning. GTV often does not match entirely with BTV, which provides metabolic information about brain lesions. For this reason, PET imaging is valuable and could be used to provide complementary information useful for treatment planning. In this way, BTV can be used to modify GTV, enhancing Clinical Target Volume (CTV) delineation. A novel fully automatic multimodal PET/MRI segmentation method for Leksell Gamma Knife® treatments is proposed. This approach improves and combines two computer-assisted and operator-independent single-modality methods, previously developed and validated, to segment BTV and GTV from PET and MR images, respectively. In addition, the GTV is utilized to combine the superior contrast of PET images with the higher spatial resolution of MRI, obtaining a new BTV, called BTVMRI. A total of 19 brain metastatic tumors, which underwent stereotactic neuro-radiosurgery, were retrospectively analyzed. A framework for the evaluation of multimodal PET/MRI segmentation is also presented. Overlap-based and spatial distance-based metrics were considered to quantify similarity between the PET and MRI segmentation approaches. Statistical analysis was also included to measure correlation among the different segmentation processes. Since it is not possible to define a gold-standard CTV according to both MRI and PET images without treatment response assessment, the feasibility and the clinical value of BTV integration in Gamma Knife treatment planning were considered. Therefore, a qualitative evaluation was carried out by three experienced clinicians. The achieved experimental results showed that GTV and BTV segmentations are statistically correlated (Spearman's rank correlation coefficient: 0.898) but have a low degree of similarity (average Dice Similarity Coefficient: 61.87 ± 14.64). Therefore, volume measurements as well as evaluation metric values demonstrated that MRI and PET convey different but complementary imaging information. GTV and BTV could be combined to enhance treatment planning. In more than 50% of cases the CTV was strongly or moderately conditioned by metabolic imaging. In particular, BTVMRI enhanced the CTV more accurately than BTV in 25% of cases. The proposed fully automatic multimodal PET/MRI segmentation method is a valid operator-independent methodology that helps clinicians define a CTV including both metabolic and morphologic information. BTVMRI and GTV should be considered for comprehensive treatment planning. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    NASA Astrophysics Data System (ADS)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes, with mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm, respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans, and the shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.

  16. Automated segmentation of the prostate in 3D MR images using a probabilistic atlas and a spatially constrained deformable model.

    PubMed

    Martin, Sébastien; Troccaz, Jocelyne; Daanen, Vincent

    2010-04-01

    The authors present a fully automatic algorithm for the segmentation of the prostate in three-dimensional magnetic resonance (MR) images. The approach requires the use of an anatomical atlas which is built by computing transformation fields mapping a set of manually segmented images to a common reference. These transformation fields are then applied to the manually segmented structures of the training set in order to get a probabilistic map on the atlas. The segmentation is then realized through a two stage procedure. In the first stage, the processed image is registered to the probabilistic atlas. Subsequently, a probabilistic segmentation is obtained by mapping the probabilistic map of the atlas to the patient's anatomy. In the second stage, a deformable surface evolves toward the prostate boundaries by merging information coming from the probabilistic segmentation, an image feature model and a statistical shape model. During the evolution of the surface, the probabilistic segmentation allows the introduction of a spatial constraint that prevents the deformable surface from leaking in an unlikely configuration. The proposed method is evaluated on 36 exams that were manually segmented by a single expert. A median Dice similarity coefficient of 0.86 and an average surface error of 2.41 mm are achieved. By merging prior knowledge, the presented method achieves a robust and completely automatic segmentation of the prostate in MR images. Results show that the use of a spatial constraint is useful to increase the robustness of the deformable model comparatively to a deformable surface that is only driven by an image appearance model.

  17. Model-based segmentation of the facial nerve and chorda tympani in pediatric CT scans

    NASA Astrophysics Data System (ADS)

    Reda, Fitsum A.; Noble, Jack H.; Rivas, Alejandro; Labadie, Robert F.; Dawant, Benoit M.

    2011-03-01

    In image-guided cochlear implant surgery an electrode array is implanted in the cochlea to treat hearing loss. Access to the cochlea is achieved by drilling from the outer skull to the cochlea through the facial recess, a region bounded by the facial nerve and the chorda tympani. To exploit existing methods for computing automatically safe drilling trajectories, the facial nerve and chorda tympani need to be segmented. The effectiveness of traditional segmentation approaches to achieve this is severely limited because the facial nerve and chorda are small structures (~1 mm and ~0.3 mm in diameter, respectively) and exhibit poor image contrast. We have recently proposed a technique to achieve this task in adult patients, which relies on statistical models of the structures. These models contain intensity and shape information along the central axes of both structures. In this work we use the same method to segment pediatric scans. We show that substantial differences exist between the anatomy of children and the anatomy of adults, which lead to poor segmentation results when an adult model is used to segment a pediatric volume. We have built a new model for pediatric cases and we have applied it to ten scans. A leave-one-out validation experiment was conducted in which manually segmented structures were compared to automatically segmented structures. The maximum segmentation error was 1 mm. This result indicates that accurate segmentation of the facial nerve and chorda in pediatric scans is achievable, thus suggesting that safe drilling trajectories can also be computed automatically.

  18. Streamflow statistics for development of water rights claims for the Jarbidge Wild and Scenic River, Owyhee Canyonlands Wilderness, Idaho, 2013-14: a supplement to Scientific Investigations Report 2013-5212

    USGS Publications Warehouse

    Wood, Molly S.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the Bureau of Land Management (BLM), estimated streamflow statistics for stream segments designated “Wild,” “Scenic,” or “Recreational” under the National Wild and Scenic Rivers System in the Owyhee Canyonlands Wilderness in southwestern Idaho. The streamflow statistics were used by the BLM to develop and file a draft federal reserved water right claim to protect federally designated “outstanding remarkable values” in the Jarbidge River. The BLM determined that the daily mean streamflows that are equaled or exceeded 20, 50, and 80 percent of the time during bimonthly periods (two periods per month), along with the bankfull streamflow (66.7-percent annual exceedance probability), are important thresholds for maintaining outstanding remarkable values. Although streamflow statistics for the Jarbidge River below Jarbidge, Nevada (USGS 13162225) were published previously in 2013 and used for the draft water right claim, the BLM and USGS have since recognized the need to refine the streamflow statistics, given the approximately 40 river miles and intervening tributaries between the original point of estimation (USGS 13162225) and the mouth of the Jarbidge River, which is the downstream end of the Wild and Scenic River segment. A drainage-area-ratio method was used in 2013 to estimate bimonthly exceedance probability streamflow statistics at the mouth of the Jarbidge River based on available streamgage data on the Jarbidge and East Fork Jarbidge Rivers. The resulting bimonthly streamflow statistics were further adjusted using a scaling factor calculated from a water balance on streamflow statistics calculated for the Bruneau and East Fork Bruneau Rivers and Sheep Creek. The final, adjusted bimonthly exceedance probability and bankfull streamflow statistics compared well with available verification datasets (including discrete streamflow measurements made at the mouth of the Jarbidge River) and are considered the best available estimates of streamflow statistics in the Jarbidge Wild and Scenic River segment.
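
    The drainage-area-ratio transfer mentioned above has a simple closed form, Q_ungaged = Q_gaged × (A_ungaged / A_gaged)^n. The sketch below illustrates it; the exponent of 1.0 and all numbers in the example are hypothetical, since the report does not state them here:

        def drainage_area_ratio(q_gaged, area_gaged, area_ungaged, exponent=1.0):
            """Transfer a streamflow statistic from a gaged to an ungaged site:
            Q_ungaged = Q_gaged * (A_ungaged / A_gaged) ** exponent."""
            return q_gaged * (area_ungaged / area_gaged) ** exponent

        # Hypothetical numbers: a 50-percent-exceedance flow of 3.2 m3/s at a gage
        # draining 250 km2, scaled to a 310 km2 drainage area at the river mouth.
        q_mouth = drainage_area_ratio(3.2, 250.0, 310.0)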

  19. What's in a Face? Visual Contributions to Speech Segmentation

    ERIC Educational Resources Information Center

    Mitchel, Aaron D.; Weiss, Daniel J.

    2010-01-01

    Recent research has demonstrated that adults successfully segment two interleaved artificial speech streams with incongruent statistics (i.e., streams whose combined statistics are noisier than the encapsulated statistics) only when provided with an indexical cue of speaker voice. In a series of five experiments, our study explores whether…

  20. Segmentation of Brain Lesions in MRI and CT Scan Images: A Hybrid Approach Using k-Means Clustering and Image Morphology

    NASA Astrophysics Data System (ADS)

    Agrawal, Ritu; Sharma, Manisha; Singh, Bikesh Kumar

    2018-04-01

    Manual segmentation and analysis of lesions in medical images is time-consuming and subject to human error. Automated segmentation has thus gained significant attention in recent years. This article presents a hybrid approach for brain lesion segmentation in different imaging modalities by combining a median filter, k-means clustering, Sobel edge detection and morphological operations. The median filter is an essential pre-processing step used to remove impulsive noise from the acquired brain images, followed by k-means segmentation, Sobel edge detection and morphological processing. The performance of the proposed automated system is tested on standard datasets using performance measures such as segmentation accuracy and execution time. The proposed method achieves a high accuracy of 94% when compared with manual delineation performed by an expert radiologist. Furthermore, statistical comparison between the lesions segmented by the automated approach and by expert delineation, using ANOVA and the correlation coefficient, yielded high significance values of 0.986 and 1, respectively. The experimental results obtained are discussed in light of some recently reported studies.

  1. Importance of reporting segmental bowel preparation scores during colonoscopy in clinical practice.

    PubMed

    Jain, Deepanshu; Momeni, Mojdeh; Krishnaiah, Mahesh; Anand, Sury; Singhal, Shashideep

    2015-04-07

    To evaluate the impact of reporting bowel preparation using the Boston Bowel Preparation Scale (BBPS) in clinical practice. The study was a prospective observational cohort study which enrolled subjects reporting for screening colonoscopy. All subjects received a gallon of polyethylene glycol as the bowel preparation regimen. After colonoscopy the endoscopists determined the quality of bowel preparation using the BBPS. Segmental scores were combined to calculate the composite BBPS. The site and size of the polyps detected were recorded. Pathology reports were reviewed to determine advanced adenoma detection rates (AADR). Segmental AADRs were calculated and categorized based on the segmental BBPS to determine the differential impact of bowel prep on AADR. Three hundred and sixty subjects were enrolled in the study, with a mean age of 59.2 years, 36.3% males and 63.8% females. Four subjects with incomplete colonoscopy due to a BBPS of 0 in any segment were excluded. Based on composite BBPS, subjects were divided into 3 groups: Group 0 (poor bowel prep, BBPS 0-3), n = 26 (7.3%); Group 1 (suboptimal bowel prep, BBPS 4-6), n = 121 (34%); and Group 2 (adequate bowel prep, BBPS 7-9), n = 209 (58.7%). AADR showed a linear trend from Group 0 through Group 2, with AADRs of 3.8%, 14.8% and 16.7%, respectively. A linear increasing trend in segmental AADR with improvement in segmental BBPS was also seen. There were statistically significant differences in AADR between Groups 0 and 2 (3.8% vs 16.7%, P < 0.05), Groups 1 and 2 (14.8% vs 16.7%, P < 0.05) and Groups 0 and 1 (3.8% vs 14.8%, P < 0.05). The χ2 test was used to compute P values for determining statistical significance. Segmental AADRs correlate with segmental BBPS. It is thus valuable to report segmental BBPS in colonoscopy reports in clinical practice.
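
    The scoring logic is easy to make concrete. Below is a minimal sketch of the composite BBPS calculation and the study's grouping (0-3 poor, 4-6 suboptimal, 7-9 adequate); the function names are ours, not from the paper:

        def composite_bbps(right, transverse, left):
            """Composite Boston Bowel Preparation Scale: sum of three segmental
            scores, each rated 0 (unprepared) to 3 (excellent)."""
            for score in (right, transverse, left):
                if not 0 <= score <= 3:
                    raise ValueError("segmental BBPS scores range from 0 to 3")
            return right + transverse + left

        def prep_group(total):
            """Grouping used in the study: 0-3 poor, 4-6 suboptimal, 7-9 adequate."""
            if total <= 3:
                return "Group 0 (poor)"
            if total <= 6:
                return "Group 1 (suboptimal)"
            return "Group 2 (adequate)"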

  2. Three-Dimensional Eyeball and Orbit Volume Modification After LeFort III Midface Distraction.

    PubMed

    Smektala, Tomasz; Nysjö, Johan; Thor, Andreas; Homik, Aleksandra; Sporniak-Tutak, Katarzyna; Safranow, Krzysztof; Dowgierd, Krzysztof; Olszewski, Raphael

    2015-07-01

    The aim of our study was to evaluate orbital volume modification with LeFort III midface distraction in patients with craniosynostosis and its influence on eyeball volume and axial diameter modification. Orbital volume was assessed by a semiautomatic segmentation method based on deformable surface models and on 3-dimensional (3D) interaction with haptics. The eyeball volumes and diameters were automatically calculated after manual segmentation of computed tomographic scans with 3D Slicer software. The mean, minimal, and maximal differences as well as the standard deviation and intraclass correlation coefficient (ICC) for intraobserver and interobserver measurement reliability were calculated. The Wilcoxon signed rank test was used to compare measured values before and after surgery. P < 0.05 was considered statistically significant. Intraobserver and interobserver ICCs for haptic-aided semiautomatic orbital volume measurements were 0.98 and 0.99, respectively. The intraobserver and interobserver ICC values for manual segmentation of the eyeball volume were 0.87 and 0.86, respectively. The orbital volume increased significantly after surgery: 30.32% (mean, 5.96 mL) for the left orbit and 31.04% (mean, 6.31 mL) for the right orbit. The mean increase in eyeball volume was 12.3%. The mean increases in the eyeball axial dimensions were 7.3%, 9.3%, and 4.4% for the X-, Y-, and Z-axes, respectively. The Wilcoxon signed rank test showed that the changes in eyeball volume, as well as in the diameters along the X- and Y-axes, were statistically significant. Midface distraction in patients with syndromic craniostenosis results in a significant increase (P < 0.05) in the orbit and eyeball volumes. The 2 methods (haptic-aided semiautomatic segmentation and manual 3D Slicer segmentation) are reproducible techniques for orbit and eyeball volume measurements.

  3. Semi-automated method to measure pneumonia severity in mice through computed tomography (CT) scan analysis

    NASA Astrophysics Data System (ADS)

    Johri, Ansh; Schimel, Daniel; Noguchi, Audrey; Hsu, Lewis L.

    2010-03-01

    Imaging is a crucial clinical tool for diagnosis and assessment of pneumonia, but quantitative methods are lacking. Micro-computed tomography (micro CT), designed for lab animals, provides opportunities for non-invasive radiographic endpoints for pneumonia studies. HYPOTHESIS: In vivo micro CT scans of mice with early bacterial pneumonia can be scored quantitatively by semiautomated imaging methods, with good reproducibility and correlation with the bacterial dose inoculated, pneumonia survival outcome, and radiologists' scores. METHODS: Healthy mice had intratracheal inoculation of E. coli bacteria (n=24) or saline control (n=11). In vivo micro CT scans were performed 24 hours later with a microCAT II scanner (Siemens). Two independent radiologists scored the extent of airspace abnormality on a scale of 0 (normal) to 24 (completely abnormal). Using the Amira 5.2 software (Mercury Computer Systems), a histogram distribution of voxel counts within the Hounsfield range of -510 to 0 was created and analyzed, and a segmentation procedure was devised. RESULTS: A t-test was performed to determine whether there was a significant difference in the mean voxel value of each mouse across the three experimental groups: Saline Survivors, Pneumonia Survivors, and Pneumonia Non-survivors. The voxel count method statistically distinguished the Saline Survivors from the Pneumonia Survivors and from the Pneumonia Non-survivors, but not the Pneumonia Survivors from the Pneumonia Non-survivors. The segmentation method, however, successfully distinguished the two Pneumonia groups. CONCLUSION: We have pilot-tested an evaluation of early pneumonia in mice using micro CT and a semi-automated method for lung segmentation and scoring. Statistical analysis indicates that the system is reliable and merits further evaluation.
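
    The histogram step lends itself to a short sketch. Assuming a CT volume in Hounsfield units and a precomputed lung mask (both hypothetical inputs), counting voxels in the -510 to 0 range used by the study might look like:

        import numpy as np

        def airspace_abnormality_fraction(ct_hu, lung_mask, lo=-510, hi=0):
            """Fraction of lung voxels whose Hounsfield value lies in [lo, hi],
            the range the study analyzed for airspace abnormality."""
            vals = ct_hu[lung_mask]
            return np.count_nonzero((vals >= lo) & (vals <= hi)) / vals.size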

  4. Automatic segmentation and supervised learning-based selection of nuclei in cancer tissue images.

    PubMed

    Nandy, Kaustav; Gudla, Prabhakar R; Amundsen, Ryan; Meaburn, Karen J; Misteli, Tom; Lockett, Stephen J

    2012-09-01

    Analysis of preferential localization of certain genes within the cell nuclei is emerging as a new technique for the diagnosis of breast cancer. Quantitation requires accurate segmentation of 100-200 cell nuclei in each tissue section to draw a statistically significant result. Thus, for large-scale analysis, manual processing is too time consuming and subjective. Fortuitously, acquired images generally contain many more nuclei than are needed for analysis. Therefore, we developed an integrated workflow that selects, following automatic segmentation, a subpopulation of accurately delineated nuclei for positioning of fluorescence in situ hybridization-labeled genes of interest. Segmentation was performed by a multistage watershed-based algorithm and screening by an artificial neural network-based pattern recognition engine. The performance of the workflow was quantified in terms of the fraction of automatically selected nuclei that were visually confirmed as well segmented and by the boundary accuracy of the well-segmented nuclei relative to a 2D dynamic programming-based reference segmentation method. Application of the method was demonstrated for discriminating normal and cancerous breast tissue sections based on the differential positioning of the HES5 gene. Automatic results agreed with manual analysis in 11 out of 14 cancers, all four normal cases, and all five noncancerous breast disease cases, thus showing the accuracy and robustness of the proposed approach. Published 2012 Wiley Periodicals, Inc.

  5. A KST framework for correlation network construction from time series signals

    NASA Astrophysics Data System (ADS)

    Qi, Jin-Peng; Gu, Quan; Zhu, Ying; Zhang, Ping

    2018-04-01

    A KST (Kolmogorov-Smirnov test and T statistic) method is used to construct a correlation network based on the fluctuation of each time series within multivariate time signals. In this method, each time series is divided equally into multiple segments, and the maximal data fluctuation in each segment is calculated by a KST change-detection procedure. Connections between the time series are derived from the data fluctuation matrix and are used to construct the fluctuation correlation network (FCN). The method was tested with synthetic simulations and the results were compared with those obtained using KS or T alone for detection of data fluctuation. The novelty of this study is that the correlation analysis is based on the data fluctuation in each segment of each time series rather than on the original time signals, which is more meaningful for many real-world applications and for the analysis of large-scale time signals where prior knowledge is uncertain.
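
    A rough sketch of the idea follows. Here the KS statistic of each segment against its whole series stands in for the paper's KS/T change-detection step, and series are connected when their fluctuation profiles correlate strongly; the threshold and segment count are illustrative:

        import numpy as np
        from scipy.stats import ks_2samp

        def fluctuation_matrix(signals, n_segments=10):
            """Score the fluctuation of each equal-length segment of every series;
            the KS statistic of segment-vs-whole-series is used as a stand-in for
            the paper's KS/T change-detection step."""
            rows = []
            for x in signals:                              # signals: (n_series, n_samples)
                segments = np.array_split(np.asarray(x), n_segments)
                rows.append([ks_2samp(seg, x).statistic for seg in segments])
            return np.asarray(rows)

        def fluctuation_network(signals, threshold=0.7):
            """Adjacency matrix connecting series with correlated fluctuation profiles."""
            corr = np.corrcoef(fluctuation_matrix(signals))
            return (np.abs(corr) >= threshold) & ~np.eye(len(signals), dtype=bool)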

  6. Comparison of long-term mortality of acute ST-segment elevation myocardial infarction and non-ST-segment elevation acute coronary syndrome patients after percutaneous coronary intervention

    PubMed Central

    Ren, Lihui; Ye, Huiming; Wang, Ping; Cui, Yuxia; Cao, Shichang; Lv, Shuzheng

    2014-01-01

    Background and aims: This study compares the short-term and long-term mortality in patients with ST-segment elevation myocardial infarction (STEMI) and non-ST-segment elevation acute coronary syndrome (NSTE-ACS) after percutaneous coronary intervention (PCI). Methods and results: A total of 266 STEMI patients and 140 NSTE-ACS patients received PCI. Patients were followed up for 4 years by telephone or through medical records and the case statistics center. Descriptive statistics and multivariate survival analyses were employed to compare mortality in STEMI and NSTE-ACS. All statistical analyses were performed with the SPSS 19.0 software package. NSTE-ACS patients had significantly higher clinical and angiographic risk profiles at baseline. During the 4-year follow-up, all-cause mortality in STEMI was significantly higher than that in NSTE-ACS after coronary stent placement (HR 1.496, 95% CI 1.019-2.197). In a landmark analysis, no difference was seen in all-cause mortality between STEMI and NSTE-ACS from 6 months to 4 years of follow-up (HR 1.173, 95% CI 0.758-1.813). Conclusions: Patients with STEMI have a worse long-term prognosis compared to patients with NSTE-ACS after PCI, due to higher short-term mortality. However, NSTE-ACS patients have a worse long-term survival after 6 months. PMID:25664077

  7. Statistical label fusion with hierarchical performance models

    PubMed Central

    Asman, Andrew J.; Dagley, Alexander S.; Landman, Bennett A.

    2014-01-01

    Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally – fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. This new approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, we describe several contributions. First, we derive a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) performance models within the statistical fusion context. Second, we demonstrate that the proposed hierarchical formulation is highly amenable to the state-of-the-art advancements that have been made to the statistical fusion framework. Lastly, in an empirical whole-brain segmentation task we demonstrate substantial qualitative and significant quantitative improvement in overall segmentation accuracy. PMID:24817809

  8. Toward optimal feature and time segment selection by divergence method for EEG signals classification.

    PubMed

    Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing

    2018-06-01

    Feature selection plays an important role in the field of EEG signal based motor imagery pattern classification. It is a process that aims to select an optimal feature subset from the original set. Two significant advantages are lowering the computational burden, which speeds up the learning procedure, and removing redundant and irrelevant features, which improves classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model to select the optimal feature subset based on the Kullback-Leibler divergence measure and to automatically select the optimal subject-specific time segment. The proposed method comprises four successive stages: broad frequency band filtering and common spatial pattern enhancement as preprocessing, feature extraction by autoregressive model and log-variance, Kullback-Leibler divergence based optimal feature and time segment selection, and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signal classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segments, but also show that the proposed method yields relatively better classification results in comparison with other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
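
    The divergence-based scoring can be sketched under a Gaussian assumption (ours, not necessarily the paper's): score each feature by the symmetrised KL divergence between its class-conditional distributions and rank features by that score:

        import numpy as np

        def gaussian_kl(mu0, var0, mu1, var1):
            """KL(N(mu0, var0) || N(mu1, var1)) for univariate Gaussians."""
            return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

        def rank_features(X0, X1):
            """Rank features of two classes (rows = trials, columns = features) by a
            symmetrised Gaussian KL divergence; higher score = more discriminative."""
            mu0, var0 = X0.mean(axis=0), X0.var(axis=0) + 1e-12
            mu1, var1 = X1.mean(axis=0), X1.var(axis=0) + 1e-12
            scores = gaussian_kl(mu0, var0, mu1, var1) + gaussian_kl(mu1, var1, mu0, var0)
            return np.argsort(scores)[::-1]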

  9. GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain

    NASA Astrophysics Data System (ADS)

    Huang, Lan; Du, Youfu; Chen, Gongyang

    2015-03-01

    Unlike English, the Chinese language has no space between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, thus becomes a fundamental issue for processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although a generic segmenter can be applied to process geoscience documents, they lack the domain specific knowledge and consequently their segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.
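
    A conditional-random-field character tagger of the kind described can be sketched with the sklearn-crfsuite package; the toy sentence, BMES tag scheme, and feature templates below are illustrative, not GeoSegmenter's actual configuration:

        import sklearn_crfsuite  # thin wrapper around python-crfsuite

        def char_features(sent, i):
            """Simple unigram/bigram character features at position i."""
            feats = {"char": sent[i], "bias": 1.0}
            if i > 0:
                feats["bigram_prev"] = sent[i - 1] + sent[i]
            if i + 1 < len(sent):
                feats["bigram_next"] = sent[i] + sent[i + 1]
            return feats

        # Toy corpus: one sentence labelled per character with BMES tags
        # (Begin / Middle / End of a multi-character word, or Single-character word).
        sentences = ["地质学很重要"]                  # "geology is important"
        y_train = [["B", "M", "E", "S", "B", "E"]]
        X_train = [[char_features(s, i) for i in range(len(s))] for s in sentences]

        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
        crf.fit(X_train, y_train)
        print(crf.predict(X_train))                   # predicted BMES tags -> word boundaries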

  10. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
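
    The core construction is easy to sketch: segment one long record into equal sample records, treat them as ensemble members, and inspect whether the ensemble average varies with time. The record count below is an arbitrary illustrative choice:

        import numpy as np

        def equivalent_ensemble(x, n_records):
            """Segment one long record into equal, non-overlapping sample records
            that play the role of statistically independent ensemble members."""
            usable = (len(x) // n_records) * n_records
            return np.asarray(x[:usable]).reshape(n_records, -1)

        def ensemble_average_drift(x, n_records=32):
            """Equivalent-ensemble mean at each within-record time; an approximately
            flat curve is consistent with weak stationarity of the mean."""
            return equivalent_ensemble(x, n_records).mean(axis=0)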

  11. A semiautomatic CT-based ensemble segmentation of lung tumors: comparison with oncologists' delineations and with the surgical specimen.

    PubMed

    Rios Velazquez, Emmanuel; Aerts, Hugo J W L; Gu, Yuhua; Goldgof, Dmitry B; De Ruysscher, Dirk; Dekker, Andre; Korn, René; Gillies, Robert J; Lambin, Philippe

    2012-11-01

    To assess the clinical relevance of a semiautomatic CT-based ensemble segmentation method, by comparing it to pathology and to CT/PET manual delineations by five independent radiation oncologists in non-small cell lung cancer (NSCLC). For 20 NSCLC patients (stages Ib-IIIb) the primary tumor was delineated manually on CT/PET scans by five independent radiation oncologists and segmented using a CT based semi-automatic tool. Tumor volume and overlap fractions between manual and semiautomatic-segmented volumes were compared. All measurements were correlated with the maximal diameter on macroscopic examination of the surgical specimen. Imaging data are available on www.cancerdata.org. High overlap fractions were observed between the semi-automatically segmented volumes and the intersection (92.5±9.0, mean±SD) and union (94.2±6.8) of the manual delineations. No statistically significant differences in tumor volume were observed between the semiautomatic segmentation (71.4±83.2 cm(3), mean±SD) and manual delineations (81.9±94.1 cm(3); p=0.57). The maximal tumor diameter of the semiautomatic-segmented tumor correlated strongly with the macroscopic diameter of the primary tumor (r=0.96). Semiautomatic segmentation of the primary tumor on CT demonstrated high agreement with CT/PET manual delineations and strongly correlated with the macroscopic diameter considered as the "gold standard". This method may be used routinely in clinical practice and could be employed as a starting point for treatment planning, target definition in multi-center clinical trials or for high throughput data mining research. This method is particularly suitable for peripherally located tumors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  12. Clinical evaluation of multi-atlas based segmentation of lymph node regions in head and neck and prostate cancer patients.

    PubMed

    Sjöberg, Carl; Lundmark, Martin; Granberg, Christoffer; Johansson, Silvia; Ahnesjö, Anders; Montelius, Anders

    2013-10-03

    Semi-automated segmentation using deformable registration of selected atlas cases consisting of expert segmented patient images has been proposed to facilitate the delineation of lymph node regions for three-dimensional conformal and intensity-modulated radiotherapy planning of head and neck and prostate tumours. Our aim is to investigate if fusion of multiple atlases will lead to clinical workload reductions and more accurate segmentation proposals compared to the use of a single atlas segmentation, due to a more complete representation of the anatomical variations. Atlases for lymph node regions were constructed using 11 head and neck patients and 15 prostate patients based on published recommendations for segmentations. A commercial registration software (Velocity AI) was used to create individual segmentations through deformable registration. Ten head and neck patients, and ten prostate patients, all different from the atlas patients, were randomly chosen for the study from retrospective data. Each patient was first delineated three times, (a) manually by a radiation oncologist, (b) automatically using a single atlas segmentation proposal from a chosen atlas and (c) automatically by fusing the atlas proposals from all cases in the database using the probabilistic weighting fusion algorithm. In a subsequent step a radiation oncologist corrected the segmentation proposals achieved from step (b) and (c) without using the result from method (a) as reference. The time spent for editing the segmentations was recorded separately for each method and for each individual structure. Finally, the Dice Similarity Coefficient and the volume of the structures were used to evaluate the similarity between the structures delineated with the different methods. For the single atlas method, the time reduction compared to manual segmentation was 29% and 23% for head and neck and pelvis lymph nodes, respectively, while editing the fused atlas proposal resulted in time reductions of 49% and 34%. The average volume of the fused atlas proposals was only 74% of the manual segmentation for the head and neck cases and 82% for the prostate cases due to a blurring effect from the fusion process. After editing of the proposals the resulting volume differences were no longer statistically significant, although a slight influence by the proposals could be noticed since the average edited volume was still slightly smaller than the manual segmentation, 9% and 5%, respectively. Segmentation based on fusion of multiple atlases reduces the time needed for delineation of lymph node regions compared to the use of a single atlas segmentation. Even though the time saving is large, the quality of the segmentation is maintained compared to manual segmentation.
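
    Atlas fusion can be illustrated with the simplest fusion rule, per-voxel majority voting; this is a stand-in for the probabilistic weighting algorithm used in the study, whose details are not reproduced here:

        import numpy as np

        def majority_vote(label_maps):
            """Fuse co-registered atlas label maps by per-voxel majority vote."""
            stack = np.stack([np.asarray(m) for m in label_maps])    # (n_atlases, ...)
            n_labels = int(stack.max()) + 1
            votes = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
            return votes.argmax(axis=0)                              # fused label map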

  13. Alluvial substrate mapping by automated texture segmentation of recreational-grade side scan sonar imagery.

    PubMed

    Hamill, Daniel; Buscombe, Daniel; Wheaton, Joseph M

    2018-01-01

    Side scan sonar in low-cost 'fishfinder' systems has become popular in aquatic ecology and sedimentology for imaging submerged riverbed sediment at coverages and resolutions sufficient to relate bed texture to grain-size. Traditional methods to map bed texture (i.e. physical samples) are relatively high-cost and low spatial coverage compared to sonar, which can continuously image several kilometers of channel in a few hours. Towards a goal of automating the classification of bed habitat features, we investigate relationships between substrates and statistical descriptors of bed textures in side scan sonar echograms of alluvial deposits. We develop a method for automated segmentation of bed textures into two to five grain-size classes. Second-order texture statistics are used in conjunction with a Gaussian mixture model to classify the heterogeneous bed into small homogeneous patches of sand, gravel, and boulders with an average accuracy of 80%, 49%, and 61%, respectively. Reach-averaged proportions of these sediment types were within 3% of similar maps derived from multibeam sonar.
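
    The texture-to-class pipeline described above can be sketched with standard tools: grey-level co-occurrence (second-order) statistics per image patch, clustered with a Gaussian mixture model. The echogram, patch size, grey-level quantisation and number of classes below are hypothetical choices, not the authors' exact configuration.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.mixture import GaussianMixture

    def texture_features(patch, levels=32):
        """Second-order (GLCM) statistics for one echogram patch."""
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        return [graycoprops(glcm, p).mean() for p in
                ("contrast", "homogeneity", "energy", "correlation")]

    # Hypothetical sonar image quantised to 32 grey levels, tiled into patches.
    rng = np.random.default_rng(2)
    image = (rng.random((256, 256)) * 32).astype(np.uint8)
    patches = [image[r:r + 32, c:c + 32]
               for r in range(0, 256, 32) for c in range(0, 256, 32)]
    X = np.array([texture_features(p) for p in patches])

    # Cluster patches into k texture classes (e.g. sand/gravel/boulders, k = 3).
    gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
    labels = gmm.predict(X)
    ```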

  14. Supervised learning based multimodal MRI brain tumour segmentation using texture features from supervoxels.

    PubMed

    Soltaninejad, Mohammadreza; Yang, Guang; Lambrou, Tryphon; Allinson, Nigel; Jones, Timothy L; Barrick, Thomas R; Howe, Franklyn A; Ye, Xujiong

    2018-04-01

    Accurate segmentation of brain tumour in magnetic resonance images (MRI) is a difficult task due to the variety of tumour types. Using information and features from multimodal MRI, including structural MRI and the isotropic (p) and anisotropic (q) components derived from diffusion tensor imaging (DTI), may result in a more accurate analysis of brain images. We propose a novel 3D supervoxel based learning method for segmentation of tumour in multimodal MRI brain images (conventional MRI and DTI). Supervoxels are generated using the information across the multimodal MRI dataset. For each supervoxel, a variety of features is extracted, including histograms of the texton descriptor, calculated using a set of Gabor filters with different sizes and orientations, and first-order intensity statistics. These features are fed into a random forest (RF) classifier to classify each supervoxel into tumour core, oedema or healthy brain tissue. The method is evaluated on two datasets: 1) our clinical dataset of 11 multimodal patient images and 2) the BRATS 2013 clinical dataset of 30 multimodal images. For our clinical dataset, the average detection sensitivity of tumour (including tumour core and oedema) using multimodal MRI is 86% with a balanced error rate (BER) of 7%, while the Dice score for automatic tumour segmentation against ground truth is 0.84. The corresponding results for the BRATS 2013 dataset are 96%, 2% and 0.89, respectively. The method demonstrates promising results in the segmentation of brain tumour. Adding features from multimodal MRI images can substantially increase the segmentation accuracy. The method provides a close match to expert delineation across all tumour grades, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management. Copyright © 2018 Elsevier B.V. All rights reserved.
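
    A compact sketch of the classification step, assuming the per-supervoxel feature vectors (texton histograms, Gabor responses, intensity statistics) have already been extracted; the feature dimension, class labels and random data are placeholders.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical training data: one feature row per supervoxel, with labels
    # 0 = healthy tissue, 1 = oedema, 2 = tumour core.
    rng = np.random.default_rng(3)
    X_train, y_train = rng.random((500, 40)), rng.integers(0, 3, 500)
    X_test = rng.random((100, 40))

    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X_train, y_train)
    tissue = rf.predict(X_test)   # one tissue class per supervoxel
    ```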

  15. Virtual modeling of polycrystalline structures of materials using particle packing algorithms and Laguerre cells

    NASA Astrophysics Data System (ADS)

    Morfa, Carlos Recarey; Farias, Márcio Muniz de; Morales, Irvin Pablo Pérez; Navarra, Eugenio Oñate Ibañez de; Valera, Roberto Roselló

    2018-04-01

    The influence of microstructural heterogeneities is an important topic in the study of materials. In the context of computational mechanics, it is therefore necessary to generate virtual materials that are statistically equivalent to the microstructure under study, and to connect that geometrical description to the different numerical methods. Herein, the authors present a procedure to model continuous solid polycrystalline materials, such as rocks and metals, preserving their representative statistical grain size distribution. The first phase of the procedure consists of segmenting an image of the material into adjacent polyhedral grains representing the individual crystals. This segmentation allows estimating the grain size distribution, which is used as the input for an advancing-front sphere packing algorithm. Finally, Laguerre diagrams are calculated from the obtained sphere packings. The centers of the spheres give the centers of the Laguerre cells, and their radii determine the cells' weights. The cell sizes in the obtained Laguerre diagrams have a distribution similar to that of the grains obtained from the image segmentation, which makes those diagrams a convenient model of the original crystalline structure. The above-outlined procedure has been used to model real polycrystalline metallic materials. The main difference with previously existing methods lies in the use of a better particle packing algorithm.
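
    The final step above maps directly onto the power (Laguerre) diagram: each sphere centre becomes a cell centre and its radius the cell weight. Below is a brute-force sketch on a voxel grid, with a hypothetical packing standing in for the advancing-front output.

    ```python
    import numpy as np

    def laguerre_labels(points, radii, grid_shape):
        """Label each voxel of a unit-cube grid with its Laguerre (power) cell.

        The cell of sphere i contains the points x minimising |x - c_i|^2 - r_i^2,
        so the sphere radii act as the cell weights.
        """
        axes = [np.linspace(0, 1, n) for n in grid_shape]
        grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
        # Power distance from every voxel to every sphere centre.
        d2 = ((grid[..., None, :] - points) ** 2).sum(-1)   # (nx, ny, nz, N)
        power = d2 - radii ** 2
        return power.argmin(axis=-1)

    # Hypothetical packing: centres and radii from a sphere-packing step.
    rng = np.random.default_rng(4)
    centres, radii = rng.random((20, 3)), rng.uniform(0.05, 0.15, 20)
    labels = laguerre_labels(centres, radii, (32, 32, 32))
    ```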

  16. Lagged segmented Poincaré plot analysis for risk stratification in patients with dilated cardiomyopathy.

    PubMed

    Voss, Andreas; Fischer, Claudia; Schroeder, Rico; Figulla, Hans R; Goernig, Matthias

    2012-07-01

    The objectives of this study were to introduce a new type of heart-rate variability analysis improving risk stratification in patients with idiopathic dilated cardiomyopathy (DCM) and to provide additional information about impaired heart beat generation in these patients. Beat-to-beat intervals (BBI) of 30-min ECGs recorded from 91 DCM patients and 21 healthy subjects were analyzed applying the lagged segmented Poincaré plot analysis (LSPPA) method. LSPPA comprises Poincaré plot reconstruction with lags of 1-100, rotation of the cloud of points, normalized segmentation of the cloud adapted to its standard deviations and, finally, a frequency-dependent clustering. The lags were combined into eight different clusters representing specific frequency bands within 0.012-1.153 Hz. Statistical differences between low- and high-risk DCM could be found within clusters II-VIII (e.g., cluster IV: 0.033-0.038 Hz; p = 0.0002; sensitivity = 85.7%; specificity = 71.4%). The multivariate statistics led to a sensitivity of 92.9%, a specificity of 85.7% and an area under the curve of 92.1% discriminating these patient groups. We introduced the LSPPA method to investigate time correlations in BBI time series. We found that LSPPA contributes considerably to risk stratification in DCM and yields the highest discriminant power in the low and very low-frequency bands.
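
    The plot-reconstruction and rotation steps can be sketched as follows; the BBI series, lag range and the spread read-out are illustrative, and the segmentation and clustering stages of LSPPA are omitted.

    ```python
    import numpy as np

    def lagged_poincare(bbi, lag):
        """Rotated Poincaré cloud (BBI_n, BBI_{n+lag}) for a given lag.

        Rotating by 45 degrees aligns the first row with the identity line
        (long-term variability) and the second row with the transverse,
        short-term variability direction.
        """
        x, y = bbi[:-lag], bbi[lag:]
        theta = np.pi / 4
        rot = np.array([[np.cos(theta), np.sin(theta)],
                        [-np.sin(theta), np.cos(theta)]])
        return rot @ np.vstack([x, y])

    # Hypothetical 30-min BBI series (seconds); one cloud per lag 1..100.
    rng = np.random.default_rng(5)
    bbi = 0.8 + 0.05 * rng.standard_normal(2000)
    clouds = {lag: lagged_poincare(bbi, lag) for lag in range(1, 101)}
    sd1 = {lag: clouds[lag][1].std() for lag in clouds}  # transverse spread
    ```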

  17. A discriminative model-constrained graph cuts approach to fully automated pediatric brain tumor segmentation in 3-D MRI.

    PubMed

    Wels, Michael; Carneiro, Gustavo; Aplas, Alexander; Huber, Martin; Hornegger, Joachim; Comaniciu, Dorin

    2008-01-01

    In this paper we present a fully automated approach to the segmentation of pediatric brain tumors in multi-spectral 3-D magnetic resonance images. It is a top-down segmentation approach based on a Markov random field (MRF) model that combines probabilistic boosting trees (PBT) and lower-level segmentation via graph cuts. The PBT algorithm provides a strong discriminative observation model that classifies tumor appearance while a spatial prior takes into account the pair-wise homogeneity in terms of classification labels and multi-spectral voxel intensities. The discriminative model relies not only on observed local intensities but also on surrounding context for detecting candidate regions for pathology. A mathematically sound formulation for integrating the two approaches into a unified statistical framework is given. The proposed method is applied to the challenging task of detection and delineation of pediatric brain tumors. This segmentation task is characterized by a high non-uniformity of both the pathology and the surrounding non-pathologic brain tissue. A quantitative evaluation illustrates the robustness of the proposed method. Despite dealing with more complicated cases of pediatric brain tumors, the results obtained are mostly better than those reported for current state-of-the-art approaches to 3-D MR brain tumor segmentation in adult patients. The entire processing of one multi-spectral data set does not require any user interaction, and takes less time than previously proposed methods.

  18. [Analyses of segment motor function in patients with degenerative lumbar disease on the treatment of WavefleX dynamic stabilization system].

    PubMed

    Wu, Junsong; Du, Junhua; Jiang, Xiangyun; Wang, Quan; Li, Xigong; Du, Jingyu; Lin, Xiangjin

    2014-06-17

    To explore the changes of range of motion (ROM) in patients with degenerative lumbar disease treated with the WavefleX dynamic stabilization system and to examine the regularity and trends of postoperative lumbar ROM. Nine patients with degenerative lumbar disease treated with the WavefleX dynamic stabilization system were followed up with respect to ROM at 5 timepoints within 12 months. ROM was recorded for instrumented segments, adjacent segments and the total lumbar spine. Compared with preoperative values, ROM in the non-fused segments instrumented with the WavefleX system decreased statistically significantly (P < 0.05 or P < 0.01) at the different timepoints, while ROM in adjacent segments increased at some levels without statistical significance, the exception being L3/4 at Month 12 (P < 0.05). Compared with the control group at the L3/4, L4/5 and L5/S1 levels, ROM decreased at Months 6 and 12 with statistical significance (P < 0.05 or P < 0.01). Total lumbar ROM showed a statistically significant decrease (P < 0.01) both in the non-fusion group and in the hybrid (non-fusion plus fusion) group. Trends of continuous increase were observed during follow-up, and statistically significant increases compared to the control group were also found at 4 timepoints (P < 0.01). Treatment of degenerative lumbar diseases with the WavefleX dynamic stabilization system may limit excessive extension/flexion while preserving some motor function. Moreover, it can sustain physiological lordosis and decrease and transfer disc load in adjacent segments to prevent early degeneration of the adjacent segment. The trend of increasing motor function in the total lumbar spine needs to be confirmed in future long-term follow-up.

  19. Speech Segmentation by Statistical Learning Depends on Attention

    ERIC Educational Resources Information Center

    Toro, Juan M.; Sinnett, Scott; Soto-Faraco, Salvador

    2005-01-01

    We addressed the hypothesis that word segmentation based on statistical regularities occurs without the need of attention. Participants were presented with a stream of artificial speech in which the only cue to extract the words was the presence of statistical regularities between syllables. Half of the participants were asked to passively listen…

  20. Segmentation-free statistical image reconstruction for polyenergetic x-ray computed tomography with experimental validation.

    PubMed

    Elbakri, Idris A; Fessler, Jeffrey A

    2003-08-07

    This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications.
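
    The polyenergetic forward model underlying such methods predicts each measurement as a spectrum-weighted sum of monoenergetic attenuations. The sketch below implements only this forward model with toy numbers; the penalized-likelihood density update itself is not shown and the values are hypothetical.

    ```python
    import numpy as np

    def polyenergetic_means(A, density, mass_atten, spectrum):
        """Expected measurements: sum_k b_k * exp(-sum_j a_ij * rho_j * m_j(E_k))."""
        line_int = A @ (density[:, None] * mass_atten)   # (rays, energies)
        return (spectrum * np.exp(-line_int)).sum(axis=1)

    # Tiny hypothetical system: 3 rays, 4 voxels, 2 energy bins.
    A = np.random.default_rng(6).random((3, 4))          # intersection lengths, cm
    rho = np.array([1.0, 1.05, 1.8, 0.95])               # voxel densities, g/cm^3
    mass_atten = np.array([[0.20, 0.17],                 # per-voxel mass attenuation
                           [0.20, 0.17],                 # coefficients at the two
                           [0.35, 0.25],                 # energies; row 3 bone-like
                           [0.20, 0.17]])                # (cm^2/g)
    spectrum = np.array([1e5, 8e4])                      # incident photons per bin
    y_bar = polyenergetic_means(A, rho, mass_atten, spectrum)
    ```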

  1. The change of adjacent segment after cervical disc arthroplasty compared with anterior cervical discectomy and fusion: a meta-analysis of randomized controlled trials.

    PubMed

    Dong, Liang; Xu, Zhengwei; Chen, Xiujin; Wang, Dongqi; Li, Dichen; Liu, Tuanjing; Hao, Dingjun

    2017-10-01

    Many meta-analyses have been performed to study the efficacy of cervical disc arthroplasty (CDA) compared with anterior cervical discectomy and fusion (ACDF); however, these meta-analyses contain few data on the adjacent segment, and the few that address it do not arrive at the same conclusion. With the increased concern surrounding adjacent segment degeneration (ASDeg) and adjacent segment disease (ASDis) after anterior cervical surgery, it is necessary to perform a comprehensive meta-analysis of adjacent segment parameters. To perform a comprehensive meta-analysis elaborating adjacent segment motion, degeneration, disease, and reoperation after CDA compared with ACDF. Meta-analysis of randomized controlled trials (RCTs). PubMed, Embase, and the Cochrane Library were searched for RCTs comparing CDA and ACDF before May 2016. The analysis parameters included follow-up time, operative segments, adjacent segment motion, ASDeg, ASDis, and adjacent segment reoperation. The risk-of-bias scale was used to assess the papers. Subgroup analysis and sensitivity analysis were used to analyze the reasons for high heterogeneity. Twenty-nine RCTs fulfilled the inclusion criteria. Compared with ACDF, the rate of adjacent segment reoperation in the CDA group was significantly lower (p<.01), and the advantage of CDA in reducing adjacent segment reoperation increased with increasing follow-up time by subgroup analysis. There was no statistically significant difference in ASDeg between CDA and ACDF within the 24-month follow-up period; however, the rate of ASDeg with CDA was significantly lower than that with ACDF as follow-up time increased (p<.01). There was no statistically significant difference in ASDis between CDA and ACDF (p>.05). Cervical disc arthroplasty provided a lower adjacent segment range of motion (ROM) than did ACDF, but the difference was not statistically significant. Compared with ACDF, the advantages of CDA were lower ASDeg and less adjacent segment reoperation. However, there was no statistically significant difference in ASDis or adjacent segment ROM. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Neutron Resonance Spin Determination Using Multi-Segmented Detector DANCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baramsai, B.; Mitchell, G. E.; Chyzh, A.

    2011-06-01

    A sensitive method to determine the spin of neutron resonances is introduced based on the statistical pattern recognition technique. The new method was used to assign the spins of s-wave resonances in ¹⁵⁵Gd. The experimental neutron capture data for these nuclei were measured with the DANCE (Detector for Advanced Neutron Capture Experiment) calorimeter at the Los Alamos Neutron Science Center. The highly segmented calorimeter provided detailed multiplicity distributions of the capture γ-rays. Using this information, the spins of the neutron capture resonances were determined. With these new spin assignments, level spacings are determined separately for s-wave resonances with J^π = 1⁻ and 2⁻.

  3. A Multiphase Validation of Atlas-Based Automatic and Semiautomatic Segmentation Strategies for Prostate MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Spencer; Rodrigues, George, E-mail: george.rodrigues@lhsc.on.ca; Department of Epidemiology/Biostatistics, University of Western Ontario, London

    2013-01-01

    Purpose: To perform a rigorous technological assessment and statistical validation of a software technology for anatomic delineations of the prostate on MRI datasets. Methods and Materials: A 3-phase validation strategy was used. Phase I consisted of anatomic atlas building using 100 prostate cancer MRI data sets to provide training data sets for the segmentation algorithms. In phase II, 2 experts contoured 15 new MRI prostate cancer cases using 3 approaches (manual, N points, and region of interest). In phase III, 5 new physicians with variable MRI prostate contouring experience segmented the same 15 phase II datasets using 3 approaches: manual, N points with no editing, and full autosegmentation with user editing allowed. Statistical analyses for time and accuracy (using Dice similarity coefficient) endpoints used traditional descriptive statistics, analysis of variance, analysis of covariance, and pooled Student t test. Results: In phase I, average (SD) total and per-slice contouring times for the 2 physicians were 228 (75), 17 (3.5), 209 (65), and 15 (3.9) seconds, respectively. In phase II, statistically significant differences in physician contouring time were observed based on physician, type of contouring, and case sequence. The N points strategy resulted in superior segmentation accuracy when initial autosegmented contours were compared with final contours. In phase III, statistically significant differences in contouring time were observed based on physician, type of contouring, and case sequence again. The average relative time savings for N points and autosegmentation were 49% and 27%, respectively, compared with manual contouring. The N points and autosegmentation strategies resulted in average Dice values of 0.89 and 0.88, respectively. Pre- and postedited autosegmented contours demonstrated a higher average Dice similarity coefficient of 0.94. Conclusion: The software provided robust contours with minimal editing required. Observed time savings were seen for all physicians irrespective of experience level and baseline manual contouring speed.

  4. A Method for the Evaluation of Thousands of Automated 3D Stem Cell Segmentations

    PubMed Central

    Bajcsy, Peter; Simon, Mylene; Florczyk, Stephen; Simon, Carl G.; Juba, Derek; Brady, Mary

    2016-01-01

    There is no segmentation method that performs perfectly with any data set in comparison to human segmentation. Evaluation procedures for segmentation algorithms become critical for their selection. The problems associated with segmentation performance evaluations and visual verification of segmentation results are exaggerated when dealing with thousands of 3D image volumes because of the amount of computation and manual input needed. We address the problem of evaluating 3D segmentation performance when segmentation is applied to thousands of confocal microscopy images (z-stacks). Our approach is to incorporate experimental imaging and geometrical criteria, and map them into computationally efficient segmentation algorithms that can be applied to a very large number of z-stacks. This is an alternative approach to considering existing segmentation methods and evaluating most state-of-the-art algorithms. We designed a methodology for 3D segmentation performance characterization that consists of design, evaluation and verification steps. The characterization integrates manual inputs from projected surrogate “ground truth” of statistically representative samples and from visual inspection into the evaluation. The novelty of the methodology lies in (1) designing candidate segmentation algorithms by mapping imaging and geometrical criteria into algorithmic steps, and constructing plausible segmentation algorithms with respect to the order of algorithmic steps and their parameters, (2) evaluating segmentation accuracy using samples drawn from probability distribution estimates of candidate segmentations, and (3) minimizing human labor needed to create surrogate “truth” by approximating z-stack segmentations with 2D contours from three orthogonal z-stack projections and by developing visual verification tools. We demonstrate the methodology by applying it to a dataset of 1253 mesenchymal stem cells. The cells reside on 10 different types of biomaterial scaffolds, and are stained for actin and nucleus, yielding 128,460 image frames (on average 125 cells/scaffold × 10 scaffold types × 2 stains × 51 frames/cell). After constructing and evaluating six candidate 3D segmentation algorithms, the most accurate 3D segmentation algorithm achieved an average precision of 0.82 and an accuracy of 0.84 as measured by the Dice similarity index, where values greater than 0.7 indicate a good spatial overlap. The probability of segmentation success was 0.85 based on visual verification, and the computation time to process all z-stacks was 42.3 h. While the most accurate segmentation technique was 4.2 times slower than the second most accurate algorithm, it consumed on average 9.65 times less memory per z-stack segmentation. PMID:26268699

  5. Method to Reduce Target Motion Through Needle-Tissue Interactions.

    PubMed

    Oldfield, Matthew J; Leibinger, Alexander; Seah, Tian En Timothy; Rodriguez Y Baena, Ferdinando

    2015-11-01

    During minimally invasive surgical procedures, it is often important to deliver needles to particular tissue volumes. Needles, when interacting with a substrate, cause deformation and target motion. To reduce reliance on compensatory intra-operative imaging, a needle design and novel delivery mechanism is proposed. Three-dimensional finite element simulations of a multi-segment needle inserted into a pre-existing crack are presented. The motion profiles of the needle segments are varied to identify methods that reduce target motion. Experiments are then performed by inserting a needle into a gelatine tissue phantom and measuring the internal target motion using digital image correlation. Simulations indicate that target motion is reduced when needle segments are stroked cyclically and utilise a small amount of retraction instead of being held stationary. Results are confirmed experimentally by statistically significant target motion reductions of more than 8% during cyclic strokes and 29% when also incorporating retraction, with the same net insertion speed. By using a multi-segment needle and taking advantage of frictional interactions on the needle surface, it is demonstrated that target motion ahead of an advancing needle can be substantially reduced.

  6. Anthropometric and biomechanical characteristics on body segments of Koreans.

    PubMed

    Park, S J; Kim, C B; Park, S C

    1999-05-01

    This paper documents the physical measurements of the Korean population in order to construct a data base for ergonomic design. The dimension, volume, density, mass, and center of mass of Koreans whose ages range from 7 to 49 were investigated. Sixty-five male subjects and sixty-nine female subjects participated. Eight body segments (head with neck, trunk, thigh, shank, foot, upper arm, forearm and hand) were directly measured with a Martin-type anthropometer, and the immersion method was adopted to measure the volume of body segments. Densities were then computed by the density equations in Drillis and Contini (1966). The reaction board method was employed for the measurement of the center of mass. The obtained data were compared with results in the literature and showed different body segment parameters. The constructed data base can be applied as a statistical guideline for product design, workspace design, clothing and tool design, furniture design, and the construction of biomechanical models for Koreans. It can also be extended to applications for Mongolians.

  7. Fuzzy Markov random fields versus chains for multispectral image segmentation.

    PubMed

    Salzenstein, Fabien; Collet, Christophe

    2006-11-01

    This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.

  8. Two-stage atlas subset selection in multi-atlas based image segmentation.

    PubMed

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs that arise when facing a large atlas collection of varied quality, so that high-accuracy segmentation can be achieved at low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
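
    The two-stage idea can be sketched generically: rank all atlases with a cheap relevance metric, keep an augmented subset, then re-rank only that subset with the expensive metric. The scoring lambdas below are crude stand-ins for the registration-based metrics, and all data are hypothetical.

    ```python
    import numpy as np

    def two_stage_selection(atlases, target, cheap_score, full_score,
                            augmented_size, fusion_size):
        """Two-stage atlas subset selection (a sketch of the idea).

        Stage 1 ranks all atlases with a low-cost relevance metric and keeps an
        augmented subset large enough that relevant atlases survive with high
        probability. Stage 2 re-ranks only that subset with the expensive,
        fully registered metric and returns the final fusion set.
        """
        prelim = np.array([cheap_score(a, target) for a in atlases])
        shortlist = np.argsort(prelim)[::-1][:augmented_size]
        refined = np.array([full_score(atlases[i], target) for i in shortlist])
        return shortlist[np.argsort(refined)[::-1][:fusion_size]]

    rng = np.random.default_rng(7)
    atlases = list(rng.random((50, 16, 16)))
    target = rng.random((16, 16))
    cheap = lambda a, t: -abs(a.mean() - t.mean())       # coarse similarity
    full = lambda a, t: -((a - t) ** 2).mean()           # refined similarity
    fusion_set = two_stage_selection(atlases, target, cheap, full, 15, 5)
    ```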

  9. Volumetric glioma quantification: comparison of manual and semi-automatic tumor segmentation for the quantification of tumor growth.

    PubMed

    Odland, Audun; Server, Andres; Saxhaug, Cathrine; Breivik, Birger; Groote, Rasmus; Vardal, Jonas; Larsson, Christopher; Bjørnerud, Atle

    2015-11-01

    Volumetric magnetic resonance imaging (MRI) is now widely available and routinely used in the evaluation of high-grade gliomas (HGGs). Ideally, volumetric measurements should be included in this evaluation. However, manual tumor segmentation is time-consuming and suffers from inter-observer variability. Thus, tools for semi-automatic tumor segmentation are needed. To present a semi-automatic method (SAM) for segmentation of HGGs and to compare this method with manual segmentation performed by experts. The inter-observer variability among experts manually segmenting HGGs using volumetric MRIs was also examined. Twenty patients with HGGs were included. All patients underwent surgical resection prior to inclusion. Each patient underwent several MRI examinations during and after adjuvant chemoradiation therapy. Three experts performed manual segmentation. The results of tumor segmentation by the experts and by the SAM were compared using Dice coefficients and kappa statistics. A relatively close agreement was seen among two of the experts and the SAM, while the third expert disagreed considerably with the other experts and the SAM. An important reason for this disagreement was a different interpretation of contrast enhancement as either surgically-induced or glioma-induced. The time required for manual tumor segmentation was an average of 16 min per scan. Editing of the tumor masks produced by the SAM required an average of less than 2 min per sample. Manual segmentation of HGG is very time-consuming and using the SAM could increase the efficiency of this process. However, the accuracy of the SAM ultimately depends on the expert doing the editing. Our study confirmed a considerable inter-observer variability among experts defining tumor volume from volumetric MRIs. © The Foundation Acta Radiologica 2014.

  10. Ground target recognition using rectangle estimation.

    PubMed

    Grönwall, Christina; Gustafsson, Fredrik; Millnert, Mille

    2006-11-01

    We propose a ground target recognition method based on 3-D laser radar data. The method handles general 3-D scattered data. It is based on the fact that man-made objects of complex shape can be decomposed into a set of rectangles. The ground target recognition method consists of four steps: 3-D size and orientation estimation, target segmentation into parts of approximately rectangular shape, identification of the segments that represent the target's functional/main parts, and target matching with CAD models. The core of this approach is rectangle estimation. The performance of the rectangle estimation method is evaluated statistically using Monte Carlo simulations. A case study on tank recognition is shown, where 3-D data from four fundamentally different types of laser radar systems are used. Although the approach is tested on rather few examples, we believe it is promising.
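
    One simple way to realize the core rectangle-estimation step is a PCA-style fit: the principal axes of a segment's points give the rectangle orientation, and the extents along those axes give its side lengths. This is an illustrative sketch under that assumption, not the authors' estimator.

    ```python
    import numpy as np

    def fit_rectangle(points):
        """Estimate centre, orientation and side lengths of a point cluster."""
        centre = points.mean(axis=0)
        centred = points - centre
        # Principal axes of the 2-D scatter give the rectangle orientation.
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        proj = centred @ vt.T                    # coordinates in the axis frame
        lengths = proj.max(axis=0) - proj.min(axis=0)
        return centre, vt, lengths

    # Hypothetical laser-radar segment projected to the ground plane.
    rng = np.random.default_rng(8)
    angle = 0.5
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle), np.cos(angle)]])
    pts = rng.uniform([-2, -1], [2, 1], (400, 2)) @ R.T
    centre, axes, (length, width) = fit_rectangle(pts)
    ```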

  11. Automated fibroglandular tissue segmentation and volumetric density estimation in breast MRI using an atlas-aided fuzzy C-means method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.

    2013-12-15

    Purpose: Breast magnetic resonance imaging (MRI) plays an important role in the clinical management of breast cancer. Studies suggest that the relative amount of fibroglandular (i.e., dense) tissue in the breast as quantified in MR images can be predictive of the risk for developing breast cancer, especially for high-risk women. Automated segmentation of the fibroglandular tissue and volumetric density estimation in breast MRI could therefore be useful for breast cancer risk assessment. Methods: In this work the authors develop and validate a fully automated segmentation algorithm, namely, an atlas-aided fuzzy C-means (FCM-Atlas) method, to estimate the volumetric amount of fibroglandular tissue in breast MRI. The FCM-Atlas is a 2D segmentation method working on a slice-by-slice basis. FCM clustering is first applied to the intensity space of each 2D MR slice to produce an initial voxelwise likelihood map of fibroglandular tissue. Then a prior learned fibroglandular tissue likelihood atlas is incorporated to refine the initial FCM likelihood map to achieve enhanced segmentation, from which the absolute volume of the fibroglandular tissue (|FGT|) and the relative amount (i.e., percentage) of the |FGT| relative to the whole breast volume (FGT%) are computed. The authors' method is evaluated by a representative dataset of 60 3D bilateral breast MRI scans (120 breasts) that span the full breast density range of the American College of Radiology Breast Imaging Reporting and Data System. The automated segmentation is compared to manual segmentation obtained by two experienced breast imaging radiologists. Segmentation performance is assessed by linear regression, Pearson's correlation coefficients, Student's paired t-test, and Dice's similarity coefficients (DSC). Results: The inter-reader correlation is 0.97 for FGT% and 0.95 for |FGT|. When compared to the average of the two readers’ manual segmentation, the proposed FCM-Atlas method achieves a correlation of r = 0.92 for FGT% and r = 0.93 for |FGT|, and the automated segmentation is not statistically significantly different (p = 0.46 for FGT% and p = 0.55 for |FGT|). The bilateral correlation between left breasts and right breasts for the FGT% is 0.94, 0.92, and 0.95 for reader 1, reader 2, and the FCM-Atlas, respectively; likewise, for the |FGT|, it is 0.92, 0.92, and 0.93, respectively. For the spatial segmentation agreement, the automated algorithm achieves a DSC of 0.69 ± 0.1 when compared to reader 1 and 0.61 ± 0.1 for reader 2, respectively, while the DSC between the two readers’ manual segmentation is 0.67 ± 0.15. Additional robustness analysis shows that the segmentation performance of the authors' method is stable both with respect to selecting different cases and to varying the number of cases needed to construct the prior probability atlas. The authors' results also show that the proposed FCM-Atlas method outperforms the commonly used two-cluster FCM-alone method. The authors' method runs at ∼5 min for each 3D bilateral MR scan (56 slices) for computing the FGT% and |FGT|, compared to ∼55 min needed for manual segmentation for the same purpose. Conclusions: The authors' method achieves robust segmentation and can serve as an efficient tool for processing large clinical datasets for quantifying the fibroglandular tissue content in breast MRI. It holds a great potential to support clinical applications in the future including breast cancer risk assessment.
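
    A minimal sketch of the atlas-aided idea on a flattened slice: plain FCM memberships are computed first, then the membership of the brighter cluster is modulated by a co-registered prior likelihood map. The FCM implementation, the stand-in atlas and the final threshold are all illustrative simplifications of the paper's 2D pipeline.

    ```python
    import numpy as np

    def fcm(intensities, n_clusters=2, m=2.0, n_iter=50):
        """Plain fuzzy C-means on a 1-D intensity vector (flattened MR slice)."""
        rng = np.random.default_rng(0)
        centres = rng.choice(intensities, n_clusters, replace=False)
        for _ in range(n_iter):
            d = np.abs(intensities[:, None] - centres[None, :]) + 1e-9
            u = 1.0 / (d ** (2 / (m - 1)))
            u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships
            um = u ** m
            centres = (um * intensities[:, None]).sum(0) / um.sum(0)
        return u, centres

    # Hypothetical slice intensities: darker background, brighter dense tissue.
    rng = np.random.default_rng(9)
    slice_int = np.concatenate([rng.normal(0.3, 0.05, 3000),
                                rng.normal(0.7, 0.05, 1000)])
    u, centres = fcm(slice_int)
    atlas_prior = rng.uniform(0.2, 0.9, slice_int.size)  # stand-in atlas map
    dense = int(np.argmax(centres))        # brighter cluster, for illustration
    posterior = u[:, dense] * atlas_prior  # atlas-refined likelihood
    fgt_mask = posterior > 0.5 * posterior.max()  # illustrative threshold
    ```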

  12. Reproducibility and Prognosis of Quantitative Features Extracted from CT Images

    PubMed Central

    Balagurunathan, Yoganand; Gu, Yuhua; Wang, Hua; Kumar, Virendra; Grove, Olya; Hawkins, Sam; Kim, Jongphil; Goldgof, Dmitry B; Hall, Lawrence O; Gatenby, Robert A; Gillies, Robert J

    2014-01-01

    We study the reproducibility of quantitative imaging features that are used to describe tumor shape, size, and texture from computed tomography (CT) scans of non-small cell lung cancer (NSCLC). CT images are dependent on various scanning factors. We focus on characterizing image features that are reproducible in the presence of variations due to patient factors and segmentation methods. Thirty-two NSCLC nonenhanced lung CT scans were obtained from the Reference Image Database to Evaluate Response data set. The tumors were segmented using both manual (radiologist expert) and ensemble (software-automated) methods. A set of features (219 three-dimensional and 110 two-dimensional) was computed, and quantitative image features were statistically filtered to identify a subset of reproducible and nonredundant features. The variability in the repeated experiment was measured by the test-retest concordance correlation coefficient (CCC_TreT). The natural range in the features, normalized to variance, was measured by the dynamic range (DR). In this study, there were 29 features across segmentation methods found with CCC_TreT and DR ≥ 0.9 and R²_Bet ≥ 0.95. These reproducible features were tested for predicting radiologist prognostic score; some texture features (run-length and Laws kernels) had an area under the curve of 0.9. The representative features were tested for their prognostic capabilities using an independent NSCLC data set (59 lung adenocarcinomas), where one of the texture features, run-length gray-level nonuniformity, was statistically significant in separating the samples into survival groups (P ≤ .046). PMID:24772210
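
    The test-retest filter above relies on Lin's concordance correlation coefficient, which penalizes both poor correlation and systematic shifts between the two measurements. A small sketch with hypothetical feature values:

    ```python
    import numpy as np

    def ccc(x, y):
        """Lin's concordance correlation coefficient between test and retest."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxy = np.mean((x - x.mean()) * (y - y.mean()))
        return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

    # Hypothetical test-retest values of one image feature over 32 scans.
    rng = np.random.default_rng(10)
    test = rng.random(32)
    retest = test + 0.02 * rng.standard_normal(32)
    print(round(ccc(test, retest), 3))
    ```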

  13. Improving vertebra segmentation through joint vertebra-rib atlases

    NASA Astrophysics Data System (ADS)

    Wang, Yinong; Yao, Jianhua; Roth, Holger R.; Burns, Joseph E.; Summers, Ronald M.

    2016-03-01

    Accurate spine segmentation allows for improved identification and quantitative characterization of abnormalities of the vertebra, such as vertebral fractures. However, in existing automated vertebra segmentation methods on computed tomography (CT) images, leakage into nearby bones such as ribs occurs due to the close proximity of these visibly intense structures in a 3D CT volume. To reduce this error, we propose the use of joint vertebra-rib atlases to improve the segmentation of vertebrae via multi-atlas joint label fusion. Segmentation was performed and evaluated on CTs containing 106 thoracic and lumbar vertebrae from 10 pathological and traumatic spine patients on an individual vertebra level basis. Vertebra atlases produced errors where the segmentation leaked into the ribs. The use of joint vertebra-rib atlases produced a statistically significant increase in the Dice coefficient from 92.5 ± 3.1% to 93.8 ± 2.1% for the left and right transverse processes and a decrease in the mean and max surface distances from 0.75 ± 0.60 mm and 8.63 ± 4.44 mm to 0.30 ± 0.27 mm and 3.65 ± 2.87 mm, respectively.

  14. Quantification and Segmentation of Brain Tissues from MR Images: A Probabilistic Neural Network Approach

    PubMed Central

    Wang, Yue; Adalý, Tülay; Kung, Sun-Yuan; Szabo, Zsolt

    2007-01-01

    This paper presents a probabilistic neural network based technique for unsupervised quantification and segmentation of brain tissues from magnetic resonance images. It is shown that this problem can be solved by distribution learning and relaxation labeling, resulting in an efficient method that may be particularly useful in quantifying and segmenting abnormal brain tissues where the number of tissue types is unknown and the distributions of tissue types heavily overlap. The new technique uses suitable statistical models for both the pixel and context images and formulates the problem in terms of model-histogram fitting and global consistency labeling. The quantification is achieved by probabilistic self-organizing mixtures and the segmentation by a probabilistic constraint relaxation network. The experimental results show the efficient and robust performance of the new algorithm and that it outperforms the conventional classification based approaches. PMID:18172510

  15. Simulation of brain tumors in MR images for evaluation of segmentation efficacy.

    PubMed

    Prastawa, Marcel; Bullitt, Elizabeth; Gerig, Guido

    2009-04-01

    Obtaining validation data and comparison metrics for segmentation of magnetic resonance images (MRI) are difficult tasks due to the lack of reliable ground truth. This problem is even more evident for images presenting pathology, which can both alter tissue appearance through infiltration and cause geometric distortions. Systems for generating synthetic images with user-defined degradation by noise and intensity inhomogeneity offer the possibility for testing and comparison of segmentation methods. Such systems do not yet offer simulation of sufficiently realistic-looking pathology. This paper presents a system that combines physical and statistical modeling to generate synthetic multi-modal 3D brain MRI with tumor and edema, along with the underlying anatomical ground truth. Main emphasis is placed on simulation of the major effects known for tumor MRI, such as contrast enhancement, local distortion of healthy tissue, infiltrating edema adjacent to tumors, destruction and deformation of fiber tracts, and multi-modal MRI contrast of healthy tissue and pathology. The new method synthesizes pathology in multi-modal MRI and diffusion tensor imaging (DTI) by simulating mass effect, warping and destruction of white matter fibers, and infiltration of brain tissues by tumor cells. We generate synthetic contrast-enhanced MR images by simulating the accumulation of contrast agent within the brain. The appearance of the brain tissue and tumor in MRI is simulated by synthesizing texture images from real MR images. The proposed method is able to generate synthetic ground truth and synthesized MR images with tumor and edema that exhibit comparable segmentation challenges to real tumor MRI. Such image data sets will find use in segmentation reliability studies, comparison and validation of different segmentation methods, training and teaching, or even in evaluating standards for tumor size like the RECIST criteria (response evaluation criteria in solid tumors).

  16. A marker-based watershed method for X-ray image segmentation.

    PubMed

    Zhang, Xiaodong; Jia, Fucang; Luo, Suhuai; Liu, Guiying; Hu, Qingmao

    2014-03-01

    Digital X-ray images are the most frequent modality for both screening and diagnosis in hospitals. To facilitate subsequent analysis such as quantification and computer-aided diagnosis (CAD), it is desirable to exclude the image background. A marker-based watershed segmentation method was proposed to segment the background of X-ray images. The method consisted of six modules: image preprocessing, gradient computation, marker extraction, watershed segmentation from markers, region merging and background extraction. One hundred clinical direct radiograph X-ray images were used to validate the method. Manual thresholding and a multiscale gradient based watershed method were implemented for comparison. The proposed method yielded a Dice coefficient of 0.964±0.069, which was better than that of manual thresholding (0.937±0.119) and that of the multiscale gradient based watershed method (0.942±0.098). Special means were adopted to decrease the computational cost, including discarding the few pixels with the highest grayscale values via a percentile, calculating the gradient magnitude through simple operations, decreasing the number of markers by appropriate thresholding, and merging regions based on simple grayscale statistics. As a result, the processing time was at most 6 s even for a 3072×3072 image on a Pentium 4 PC with a 2.4 GHz CPU (4 cores) and 2 GB RAM, more than twice as fast as the multiscale gradient based watershed method. The proposed method could be a potential tool for diagnosis and quantification of X-ray images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
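
    The marker-based principle can be sketched with scikit-image: conservative grey-level thresholds seed background and body markers, and the watershed floods a gradient image from those seeds. The synthetic image, thresholds and steps below are illustrative; the paper's six-module pipeline is not reproduced.

    ```python
    import numpy as np
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    # Hypothetical X-ray-like image: bright foreground disc, dark background.
    yy, xx = np.mgrid[:256, :256]
    image = np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / (2 * 50 ** 2))
    image += 0.05 * np.random.default_rng(11).standard_normal(image.shape)

    gradient = sobel(image)              # gradient magnitude drives the flooding

    # Markers from conservative grey-level thresholds: 1 = background, 2 = body.
    markers = np.zeros(image.shape, dtype=np.int32)
    markers[image < 0.1] = 1
    markers[image > 0.6] = 2

    labels = watershed(gradient, markers)
    foreground = labels == 2             # background excluded
    ```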

  17. A knowledge-guided active model method of cortical structure segmentation on pediatric MR images.

    PubMed

    Shan, Zuyao Y; Parra, Carlos; Ji, Qing; Jain, Jinesh; Reddick, Wilburn E

    2006-10-01

    To develop an automated method for quantification of cortical structures on pediatric MR images. A knowledge-guided active model (KAM) approach was proposed with a novel object function similar to the Gibbs free energy function. Triangular mesh models were transformed to images of a given subject by maximizing entropy, and then actively slithered to boundaries of structures by minimizing enthalpy. Volumetric results and image similarities of 10 different cortical structures segmented by KAM were compared with those traced manually. Furthermore, the segmentation performances of KAM and SPM2 (statistical parametric mapping, a MATLAB software package) were compared. The averaged volumetric agreements between KAM- and manually-defined structures (both 0.95 for structures in healthy children and children with medulloblastoma) were higher than the volumetric agreements for SPM2 (0.90 and 0.80, respectively). The similarity measurements (kappa) between KAM- and manually-defined structures (0.95 and 0.93, respectively) were higher than those for SPM2 (both 0.86). We have developed a novel automatic algorithm, KAM, for segmentation of cortical structures on MR images of pediatric patients. Our preliminary results indicated that when segmenting cortical structures, KAM was in better agreement with manually-delineated structures than SPM2. KAM can potentially be used to segment cortical structures for conformal radiation therapy planning and for quantitative evaluation of changes in disease or abnormality. Copyright (c) 2006 Wiley-Liss, Inc.

  18. A complete-pelvis segmentation framework for image-free total hip arthroplasty (THA): methodology and clinical study.

    PubMed

    Xie, Weiguo; Franke, Jochen; Chen, Cheng; Grützner, Paul A; Schumann, Steffen; Nolte, Lutz-P; Zheng, Guoyan

    2015-06-01

    Complete-pelvis segmentation in antero-posterior pelvic radiographs is required to create a patient-specific three-dimensional pelvis model for surgical planning and postoperative assessment in image-free navigation of total hip arthroplasty. A fast and robust framework for accurately segmenting the complete pelvis is presented, consisting of two consecutive modules. In the first module, a three-stage method was developed to delineate the left hemi-pelvis based on statistical appearance and shape models. To handle complex pelvic structures, anatomy-specific information processing techniques were employed. As the input to the second module, the delineated left hemi-pelvis was then reflected about an estimated symmetry line of the radiograph to initialize the right hemi-pelvis segmentation. The right hemi-pelvis was segmented by the same three-stage method. Two experiments, conducted on 143 and 40 AP radiographs, respectively, demonstrated a mean segmentation accuracy of 1.61±0.68 mm. A clinical study investigating the postoperative assessment of acetabular cup orientation based on the proposed framework revealed an average accuracy of 1.2°±0.9° and 1.6°±1.4° for anteversion and inclination, respectively. Delineation of each radiograph takes less than one minute. Although further validation is needed, the preliminary results imply the clinical applicability of the proposed framework for image-free THA. Copyright © 2014 John Wiley & Sons, Ltd.

  19. Characterisation of human non-proliferative diabetic retinopathy using fractal analysis

    PubMed Central

    Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina

    2015-01-01

    AIM To investigate and quantify changes in the branching patterns of the retina vascular network in diabetes using the fractal analysis method. METHODS This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina, were examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. RESULTS It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using the fractal geometry. The average of fractal dimensions D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values of mild non-proliferative DR (NPDR) images (segmented and skeletonized versions). The average of fractal dimensions D for the normal images (segmented and skeletonized versions) is higher than the corresponding values of moderate NPDR images (segmented and skeletonized versions). The lowest values were found for the corresponding values of severe NPDR images (segmented and skeletonized versions). CONCLUSION The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantitatively characterized by means of the fractal dimension. Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from healthy individuals. PMID:26309878
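
    The fractal dimension of a segmented, skeletonized vessel map is typically estimated by box counting, fitting log N(s) against log s; ImageJ's fractal box count tool does something broadly equivalent. A sketch on a hypothetical binary image:

    ```python
    import numpy as np

    def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
        """Box-counting estimate of the fractal dimension of a binary image.

        Counts occupied boxes N(s) at several box sizes s and fits
        log N(s) = -D log s + c, so the (negated) slope gives D.
        """
        counts = []
        for s in sizes:
            h = mask.shape[0] // s * s
            w = mask.shape[1] // s * s
            boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope

    # Hypothetical skeletonised vessel map on a 256 x 256 grid.
    rng = np.random.default_rng(12)
    vessels = rng.random((256, 256)) > 0.97
    print(round(box_counting_dimension(vessels), 2))
    ```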

  1. Development of quantitative analysis method for stereotactic brain image: assessment of reduced accumulation in extent and severity using anatomical segmentation.

    PubMed

    Mizumura, Sunao; Kumita, Shin-ichiro; Cho, Keiichi; Ishihara, Makiko; Nakajo, Hidenobu; Toba, Masahiro; Kumazaki, Tatsuo

    2003-06-01

    With visual assessment based on three-dimensional (3D) brain image analysis methods that use a stereotactic brain coordinate system, such as three-dimensional stereotactic surface projections and statistical parametric mapping, it is difficult to quantitatively assess anatomical information and the extent of an abnormal region. In this study, we devised a method to quantitatively assess local abnormal findings by segmenting a brain map according to anatomical structure. Using quantitative local abnormality assessment with this method, we studied the characteristics of the distribution of reduced blood flow in cases with dementia of the Alzheimer type (DAT). Using twenty-five cases with DAT (mean age, 68.9 years), all diagnosed with probable Alzheimer's disease based on NINCDS-ADRDA, we collected I-123 iodoamphetamine SPECT data. A 3D brain map created with the 3D-SSP program was compared with data from 20 age-matched control cases. To study local abnormalities in the 3D images, we divided the whole brain into 24 segments based on anatomical classification. For each segment we assessed the extent of the abnormal region (the proportion of coordinates within the segment whose Z-value exceeds the threshold value) and its severity (the average Z-value of the coordinates whose Z-value exceeds the threshold value). This method clarified the orientation and expansion of reduced accumulation by classifying stereotactic brain coordinates according to anatomical structure, and was considered useful for quantitatively assessing distribution abnormalities in the brain and changes in their distribution.
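
    The per-segment extent and severity measures defined above translate directly into code: extent is the fraction of a segment's coordinates whose Z-value exceeds the threshold, and severity is the mean of those suprathreshold Z-values. The arrays below are hypothetical stand-ins for a Z-score map and one anatomical segment.

    ```python
    import numpy as np

    def extent_and_severity(z_map, segment_mask, z_threshold=2.0):
        """Extent and severity of abnormality within one anatomical segment."""
        z = z_map[segment_mask]
        above = z > z_threshold
        extent = above.mean() if z.size else float("nan")
        severity = z[above].mean() if above.any() else 0.0
        return extent, severity

    # Hypothetical Z-score map on stereotactic coordinates, one of 24 segments.
    rng = np.random.default_rng(13)
    z_map = rng.standard_normal((64, 64, 64))
    segment = np.zeros_like(z_map, dtype=bool)
    segment[20:30, 20:30, 20:30] = True
    print(extent_and_severity(z_map, segment))
    ```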

  2. Brain extraction in partial volumes T2*@7T by using a quasi-anatomic segmentation with bias field correction.

    PubMed

    Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S

    2018-02-01

    Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences in several types of post-extraction brain processing, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes, being entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring a correct initialization by the user and knowledge of the software; they cannot deal with partial volumes and/or need atlas information that is not available for T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorates voxel intensity structures, making segmentation tasks difficult. The proposed method can overcome all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. The development of this work will lead to an improvement of automatic brain lesion segmentation in T2*FLASH@7T volumes, which becomes more important when lesions such as cortical Multiple Sclerosis need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Distribution of immunoglobulin G antibody secretory cells in small intestine of Bactrian camels (Camelus bactrianus).

    PubMed

    Zhang, Wang-Dong; Wang, Wen-Hui; Jia, Shuai

    2015-08-25

    To explore the morphological evidence of immunoglobulin G (IgG) participating in intestinal mucosal immunity, 8 healthy adult Bactrian camels were used. First, IgG was successfully isolated from their serum, and a rabbit antibody against Bactrian camel IgG was prepared. The IgG antibody secretory cells (ASCs) in the small intestine were observed through immunohistochemical staining and then analyzed with statistical methods. The results showed that the IgG ASCs were scattered in the lamina propria (LP), and some of them aggregated around the intestinal glands. The density of IgG ASCs was highest from the middle segment of the duodenum to the middle segment of the jejunum, intermediate in the terminal segment of the jejunum and the initial segment of the ileum, and lowest in the initial segment of the duodenum and in the middle and terminal segments of the ileum. This demonstrated that IgG ASCs are mainly scattered in the effector sites of mucosal immunity, although their density differs among segments of the small intestine. Moreover, this scattered distribution provides a morphological basis for investigating whether IgG affords full protection and immune surveillance in the mucosal immune homeostasis of the whole intestine.

  4. A multimodality segmentation framework for automatic target delineation in head and neck radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Jinzhong; Aristophanous, Michalis, E-mail: MAristophanous@mdanderson.org; Beadle, Beth M.

    2015-09-15

    Purpose: To develop an automatic segmentation algorithm integrating imaging information from computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) to delineate target volume in head and neck cancer radiotherapy. Methods: Eleven patients with unresectable disease at the tonsil or base of tongue who underwent MRI, CT, and PET/CT within two months before the start of radiotherapy or chemoradiotherapy were recruited for the study. For each patient, PET/CT and T1-weighted contrast MRI scans were first registered to the planning CT using deformable and rigid registration, respectively, to resample the PET and magnetic resonance (MR) images to the planning CT space. A binary mask was manually defined to identify the tumor area. The resampled PET and MR images, the planning CT image, and the binary mask were fed into the automatic segmentation algorithm for target delineation. The algorithm was based on a multichannel Gaussian mixture model and solved using an expectation–maximization algorithm with Markov random fields. To evaluate the algorithm, we compared the multichannel autosegmentation with an autosegmentation method using only PET images. The physician-defined gross tumor volume (GTV) was used as the “ground truth” for quantitative evaluation. Results: The median multichannel segmented GTV of the primary tumor was 15.7 cm³ (range, 6.6–44.3 cm³), while the PET segmented GTV was 10.2 cm³ (range, 2.8–45.1 cm³). The median physician-defined GTV was 22.1 cm³ (range, 4.2–38.4 cm³). The median difference between the multichannel segmented and physician-defined GTVs was −10.7%, not a statistically significant difference (p-value = 0.43). However, the median difference between the PET segmented and physician-defined GTVs was −19.2%, a statistically significant difference (p-value = 0.0037). The median Dice similarity coefficient between the multichannel segmented and physician-defined GTVs was 0.75 (range, 0.55–0.84), and the median sensitivity and positive predictive value between them were 0.76 and 0.81, respectively. Conclusions: The authors developed an automated multimodality segmentation algorithm for tumor volume delineation and validated this algorithm for head and neck cancer radiotherapy. The multichannel segmented GTV agreed well with the physician-defined GTV. The authors expect that their algorithm will improve the accuracy and consistency in target definition for radiotherapy.
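
    A minimal sketch of the multichannel Gaussian-mixture step, assuming co-registered CT, PET, and MR intensities stacked per voxel; scikit-learn's EM fitting stands in for the authors' implementation and omits their Markov-random-field coupling:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Synthetic stand-ins for co-registered intensities inside the tumor mask:
      # 2000 "tumor-like" voxels (PET-avid) and 3000 background voxels.
      rng = np.random.default_rng(1)
      ct = np.r_[rng.normal(40, 5, 2000), rng.normal(60, 5, 3000)]
      pet = np.r_[rng.normal(8, 1, 2000), rng.normal(2, 1, 3000)]
      mr = np.r_[rng.normal(300, 30, 2000), rng.normal(200, 30, 3000)]
      features = np.column_stack([ct, pet, mr])    # one 3-channel vector per voxel

      # Two-component multichannel Gaussian mixture fitted by EM. The paper also
      # couples neighboring voxels with a Markov random field, omitted here.
      gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
      labels = gmm.fit_predict(features)
      tumor_class = np.argmax(gmm.means_[:, 1])    # assume tumor is the PET-avid class
      print("tumor voxels:", np.sum(labels == tumor_class))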

  5. Cardiac magnetic resonance analysis of right ventricular function: comparison of quantification in the short-axis and 4-chamber planes.

    PubMed

    Souto Bayarri, M; Masip Capdevila, L; Remuiñan Pereira, C; Suárez-Cuenca, J J; Martínez Monzonís, A; Couto Pérez, M I; Carreira Villamor, J M

    2015-01-01

    To compare the methods of right ventricle segmentation in the short-axis and 4-chamber planes in cardiac magnetic resonance imaging and to correlate the findings with those of the tricuspid annular plane systolic excursion (TAPSE) method in echocardiography. We used a 1.5T MRI scanner to study 26 patients with diverse cardiovascular diseases. In all MRI studies, we obtained cine-mode images from the base to the apex in both the short-axis and 4-chamber planes using steady-state free precession sequences and 6 mm thick slices. In all patients, we quantified the end-diastolic volume, end-systolic volume, and the ejection fraction of the right ventricle. On the same day as the cardiac magnetic resonance imaging study, 14 patients also underwent echocardiography with TAPSE calculation of right ventricular function. No statistically significant differences were found in the volumes and function of the right ventricle calculated using the 2 segmentation methods. The correlation between the volume estimations by the two segmentation methods was excellent (r=0.95); the correlation for the ejection fraction was slightly lower (r=0.8). The correlation between the cardiac magnetic resonance imaging estimate of right ventricular ejection fraction and TAPSE was very low (r=0.2, P<.01). Both ventricular segmentation methods quantify right ventricular function adequately. The correlation with the echocardiographic method is low. Copyright © 2012 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  6. Automated segmentation of foveal avascular zone in fundus fluorescein angiography.

    PubMed

    Zheng, Yalin; Gandhi, Jagdeep Singh; Stangos, Alexandros N; Campa, Claudio; Broadbent, Deborah M; Harding, Simon P

    2010-07-01

    PURPOSE. To describe and evaluate the performance of a computerized automated segmentation technique for use in quantification of the foveal avascular zone (FAZ). METHODS. A computerized technique for automated segmentation of the FAZ using images from fundus fluorescein angiography (FFA) was applied to 26 transit-phase images obtained from patients with various grades of diabetic retinopathy. The area containing the FAZ was first extracted from the original image and smoothed by a Gaussian kernel (sigma = 1.5). An initializing contour was manually placed inside the FAZ of the smoothed image and iteratively moved by the segmentation program toward the FAZ boundary. Five tests with different initializing curves were run on each of the 26 images to assess reproducibility. The accuracy of the program was also validated by comparing the results obtained by the program with FAZ boundaries manually delineated by medical retina specialists. Interobserver performance was then evaluated by comparing delineations from two of the experts. RESULTS. One-way analysis of variance indicated that the disparities between different tests were not statistically significant, signifying excellent reproducibility for the computer program. There was a statistically significant linear correlation between the results obtained by automation and the manual delineations by experts. CONCLUSIONS. This automated segmentation program produces highly reproducible results that are comparable to those made by clinical experts. It has the potential to assist in the detection and management of foveal ischemia and to be integrated into automated grading systems.
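
    A rough sketch of this kind of contour evolution using scikit-image's classic snake; note that, unlike the paper's program, the basic snake has no balloon force, so here the contour is initialized outside the dark region and contracts onto its boundary (parameter values are illustrative):

      import numpy as np
      from skimage.draw import disk
      from skimage.filters import gaussian
      from skimage.segmentation import active_contour

      # Synthetic FFA-like image: bright background with a dark FAZ-like region.
      img = np.ones((200, 200))
      rr, cc = disk((100, 100), 30)
      img[rr, cc] = 0.2
      img = gaussian(img, sigma=1.5)               # smoothing as in the paper

      # Circular initializing contour placed outside the dark region; the snake
      # contracts onto its boundary (parameter names follow recent scikit-image
      # releases and may differ in older versions).
      s = np.linspace(0, 2 * np.pi, 200)
      init = np.column_stack([100 + 45 * np.sin(s), 100 + 45 * np.cos(s)])
      snake = active_contour(img, init, alpha=0.015, beta=10, gamma=0.001,
                             w_edge=1.0, w_line=0.0)
      print(snake.shape)                           # (200, 2) boundary points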

  7. Quasi-experimental Studies in the Fields of Infection Control and Antibiotic Resistance, Ten Years Later: A Systematic Review.

    PubMed

    Alsaggaf, Rotana; O'Hara, Lyndsay M; Stafford, Kristen A; Leekha, Surbhi; Harris, Anthony D

    2018-02-01

    OBJECTIVE A systematic review of quasi-experimental studies in the field of infectious diseases was published in 2005. The aim of this study was to assess improvements in the design and reporting of quasi-experiments 10 years after the initial review. We also aimed to report the statistical methods used to analyze quasi-experimental data. DESIGN Systematic review of articles published from January 1, 2013, to December 31, 2014, in 4 major infectious disease journals. METHODS Quasi-experimental studies focused on infection control and antibiotic resistance were identified and classified based on 4 criteria: (1) type of quasi-experimental design used, (2) justification of the use of the design, (3) use of correct nomenclature to describe the design, and (4) statistical methods used. RESULTS Of 2,600 articles, 173 (7%) featured a quasi-experimental design, compared to 73 of 2,320 articles (3%) in the previous review (P<.01). Moreover, 21 articles (12%) utilized a study design with a control group; 6 (3.5%) justified the use of a quasi-experimental design; and 68 (39%) identified their design using the correct nomenclature. In addition, 2-group statistical tests were used in 75 studies (43%); 58 studies (34%) used standard regression analysis; 18 (10%) used segmented regression analysis; 7 (4%) used standard time-series analysis; 5 (3%) used segmented time-series analysis; and 10 (6%) did not utilize statistical methods for comparisons. CONCLUSIONS While some progress occurred over the decade, it is crucial to continue improving the design and reporting of quasi-experimental studies in the fields of infection control and antibiotic resistance to better evaluate the effectiveness of important interventions. Infect Control Hosp Epidemiol 2018;39:170-176.
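
    Segmented regression, one of the analyses tallied above, models an interrupted time series with baseline trend, level-change, and trend-change terms. A minimal sketch on synthetic monthly rates (variable names are illustrative):

      import numpy as np
      import statsmodels.api as sm

      # Synthetic monthly infection rates: 24 months pre-, 24 post-intervention.
      rng = np.random.default_rng(2)
      t = np.arange(48)
      post = (t >= 24).astype(float)          # indicator for the post period
      t_post = np.where(t >= 24, t - 24, 0)   # months elapsed since intervention
      y = 10 + 0.05 * t - 2.0 * post - 0.10 * t_post + rng.normal(0, 0.5, 48)

      # Segmented regression: baseline trend, level change, and trend change.
      X = sm.add_constant(np.column_stack([t, post, t_post]))
      fit = sm.OLS(y, X).fit()
      print(fit.params)       # intercept, pre-trend, level change, trend change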

  8. Two-stage atlas subset selection in multi-atlas based image segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu

    2015-06-15

    Purpose: Fast growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, but also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs that arise when facing a large atlas collection of varied quality, so that high-accuracy segmentation can be achieved at low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
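
    The two-stage selection logic can be sketched as follows; the similarity functions here are hypothetical stand-ins (a cheap, noisy surrogate versus an expensive, accurate metric), not the paper's actual registration pipeline:

      import numpy as np

      rng = np.random.default_rng(3)
      true_relevance = rng.random(100)             # unknown "true" atlas relevance

      def cheap_similarity(i):                     # noisy low-cost surrogate
          return true_relevance[i] + rng.normal(0, 0.15)

      def refined_similarity(i):                   # accurate but expensive metric
          return true_relevance[i] + rng.normal(0, 0.02)

      def select_atlases(n_atlases=100, augmented_size=20, fusion_size=5):
          # Stage 1: rank all atlases with the cheap preliminary metric; keep an
          # augmented subset sized so relevant atlases survive with high probability.
          prelim = sorted(range(n_atlases), key=cheap_similarity, reverse=True)
          augmented = prelim[:augmented_size]
          # Stage 2: apply the expensive refined metric only to the augmented subset.
          refined = sorted(augmented, key=refined_similarity, reverse=True)
          return refined[:fusion_size]

      print("fusion set:", select_atlases())
      print("best possible:", list(np.argsort(true_relevance)[-5:][::-1]))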

  9. Global-to-local, shape-based, real and virtual landmarks for shape modeling by recursive boundary subdivision

    NASA Astrophysics Data System (ADS)

    Rueda, Sylvia; Udupa, Jayaram K.

    2011-03-01

    Landmark-based statistical object modeling techniques, such as the Active Shape Model (ASM), have proven useful in medical image analysis. Identification of the same homologous set of points in a training set of object shapes is the most crucial step in ASM, which has encountered challenges such as (C1) defining and characterizing landmarks; (C2) ensuring homology; (C3) generalizing to n > 2 dimensions; (C4) achieving practical computations. In this paper, we propose a novel global-to-local strategy that attempts to address C3 and C4 directly and works in R^n. The 2D version starts from two initial corresponding points determined in all training shapes via a method α, and subsequently subdivides the shapes into connected boundary segments by a line determined by these points. A shape analysis method β is applied to each segment to determine a landmark on the segment. This point introduces more pairs of points, and the lines they define are used to further subdivide the boundary segments. This recursive boundary subdivision (RBS) process continues simultaneously on all training shapes, maintaining synchrony of the level of recursion, and thereby automatically keeping correspondence among generated points through the correspondence of the homologous shape segments in all training shapes. The process terminates when no subdividing lines remain for which method β indicates that a point can be selected on the associated segment. Examples of α and β are presented based on (a) distance; (b) Principal Component Analysis (PCA); and (c) the novel concept of virtual landmarks.
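
    A simplified sketch of the RBS recursion on one 2D shape, taking method α as the two points farthest apart and method β as the point of maximum perpendicular distance from the subdividing chord (stand-ins for the distance- and PCA-based choices mentioned above); because the recursion order is deterministic, landmark indices correspond across training shapes processed the same way:

      import numpy as np

      def rbs_landmarks(contour, depth=3):
          # Recursive boundary subdivision on a closed 2-D contour (N x 2).
          d = np.linalg.norm(contour[:, None] - contour[None, :], axis=2)
          i, j = np.unravel_index(d.argmax(), d.shape)   # initial point pair (alpha)
          landmarks = []

          def beta(seg):
              # Landmark = point of maximum perpendicular distance to the chord.
              a, b = seg[0], seg[-1]
              n = np.array([-(b - a)[1], (b - a)[0]])
              n = n / (np.linalg.norm(n) + 1e-12)
              return seg[np.abs((seg - a) @ n).argmax()]

          def subdivide(seg, level):
              if level == 0 or len(seg) < 3:
                  return
              p = beta(seg)
              landmarks.append(p)
              k = np.where((seg == p).all(axis=1))[0][0]
              subdivide(seg[:k + 1], level - 1)          # recurse on both halves
              subdivide(seg[k:], level - 1)

          lo, hi = min(i, j), max(i, j)
          seg1 = contour[lo:hi + 1]                      # the two boundary segments
          seg2 = np.vstack([contour[hi:], contour[:lo + 1]])
          for s in (seg1, seg2):
              subdivide(s, depth)
          return np.array([contour[i], contour[j]] + landmarks)

      theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
      shape = np.column_stack([np.cos(theta) * (1 + 0.3 * np.sin(3 * theta)),
                               np.sin(theta)])
      print(rbs_landmarks(shape).shape)                  # landmarks in recursion order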

  10. Efficient multi-atlas abdominal segmentation on clinically acquired CT with SIMPLE context learning.

    PubMed

    Xu, Zhoubing; Burke, Ryan P; Lee, Christopher P; Baucom, Rebeccah B; Poulose, Benjamin K; Abramson, Richard G; Landman, Bennett A

    2015-08-01

    Abdominal segmentation on clinically acquired computed tomography (CT) has been a challenging problem given the inter-subject variance of human abdomens and complex 3-D relationships among organs. Multi-atlas segmentation (MAS) provides a potentially robust solution by leveraging label atlases via image registration and statistical fusion. We posit that the efficiency of atlas selection requires further exploration in the context of substantial registration errors. The selective and iterative method for performance level estimation (SIMPLE) method is a MAS technique integrating atlas selection and label fusion that has proven effective for prostate radiotherapy planning. Herein, we revisit atlas selection and fusion techniques for segmenting 12 abdominal structures using clinically acquired CT. Using a re-derived SIMPLE algorithm, we show that performance on multi-organ classification can be improved by accounting for exogenous information through Bayesian priors (so called context learning). These innovations are integrated with the joint label fusion (JLF) approach to reduce the impact of correlated errors among selected atlases for each organ, and a graph cut technique is used to regularize the combined segmentation. In a study of 100 subjects, the proposed method outperformed other comparable MAS approaches, including majority vote, SIMPLE, JLF, and the Wolz locally weighted vote technique. The proposed technique provides consistent improvement over state-of-the-art approaches (median improvement of 7.0% and 16.2% in DSC over JLF and Wolz, respectively) and moves toward efficient segmentation of large-scale clinically acquired CT data for biomarker screening, surgical navigation, and data mining. Copyright © 2015 Elsevier B.V. All rights reserved.
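
    Among the fusion baselines compared above, majority vote is the simplest; a minimal sketch over propagated atlas label maps:

      import numpy as np

      def majority_vote(label_maps):
          # Fuse propagated atlas label maps (stacked on axis 0) by per-voxel vote.
          stack = np.stack(label_maps)
          n_labels = stack.max() + 1
          votes = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
          return votes.argmax(axis=0)                # winning label per voxel

      rng = np.random.default_rng(4)
      atlases = [rng.integers(0, 3, size=(4, 4)) for _ in range(7)]
      print(majority_vote(atlases))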

  11. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action

    PubMed Central

    Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter

    2018-01-01

    Introduction Computer-assisted technologies based on algorithmic software segmentation are an increasing topic of interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and the industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach for use in clinical practice. Material and methods In this retrospective, randomized, controlled trial the accuracy and agreement of the open-source segmentation algorithm GrowCut was assessed through comparison to the manually generated ground truth of the same anatomy using 10 CT lower jaw data-sets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice Score and the Hausdorff distance. Results Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice Score values of over 85% and Hausdorff distances below 33.5 voxels were achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant (p>0.05) and correlation coefficients were close to one (r > 0.94) for all comparisons made between the two groups. Discussion Functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed by the presented interactive open-source approach. In the cranio-maxillofacial complex the method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and it offers several advantages. Because of its open-source basis the method could be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches and studies with larger amounts of data are areas of future work. PMID:29746490
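
    For orientation, a minimal sketch of the GrowCut cellular automaton on a toy 2D image: seed cells carry labels and strengths, and a cell is conquered by a neighbor whose intensity-weighted attack exceeds the cell's own strength (border handling is simplified):

      import numpy as np

      def growcut(image, seeds, n_iter=50):
          # seeds: 0 = unlabeled, 1 = object, 2 = background. A cell is conquered
          # by a neighbor whose attack g(|Ip - Iq|) * strength_q beats its own
          # strength. np.roll wraps at the borders (fine for this toy example).
          label = seeds.copy()
          strength = (seeds > 0).astype(float)     # seed cells start at strength 1
          max_diff = np.ptp(image) + 1e-12
          for _ in range(n_iter):
              changed = False
              for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                  nl = np.roll(label, (dy, dx), axis=(0, 1))
                  ns = np.roll(strength, (dy, dx), axis=(0, 1))
                  ni = np.roll(image, (dy, dx), axis=(0, 1))
                  g = 1.0 - np.abs(image - ni) / max_diff   # monotone attack weight
                  attack = g * ns
                  win = (attack > strength) & (nl > 0)
                  if win.any():
                      label[win], strength[win] = nl[win], attack[win]
                      changed = True
              if not changed:
                  break
          return label

      img = np.zeros((30, 30))
      img[8:22, 8:22] = 1.0                        # bright "bone" block
      seeds = np.zeros_like(img, dtype=int)
      seeds[15, 15] = 1                            # object seed inside the block
      seeds[0, 0] = 2                              # background seed
      print((growcut(img, seeds) == 1).sum(), "pixels labeled object")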

  12. Semi-automatic tracking, smoothing and segmentation of hyoid bone motion from videofluoroscopic swallowing study.

    PubMed

    Kim, Won-Seok; Zeng, Pengcheng; Shi, Jian Qing; Lee, Youngjo; Paik, Nam-Jong

    2017-01-01

    Motion analysis of the hyoid bone via videofluoroscopic study has been used in clinical research, but the classical manual tracking method is generally labor-intensive and time-consuming. Although some automatic tracking methods have been developed, masked points could not be tracked, and the smoothing and segmentation necessary for functional motion analysis prior to registration were not provided by previous software. We developed software to track hyoid bone motion semi-automatically. It works even when the hyoid bone is masked by the mandible and has been validated in dysphagia patients with stroke. In addition, we added functions for semi-automatic smoothing and segmentation. Data from a total of 30 patients were used to develop the software, and data collected from 17 patients were used for validation, of which the trajectories of 8 patients were partly masked. Pearson correlation coefficients between manual and automatic tracking are high and statistically significant (0.942 to 0.991, P-value<0.0001). Relative errors between automatic and manual tracking in terms of the x-axis, y-axis and 2D range of hyoid bone excursion range from 3.3% to 9.2%. We also developed an automatic method to segment each hyoid bone trajectory into four phases (elevation phase, anterior movement phase, descending phase and returning phase). The semi-automatic hyoid bone tracking from VFSS data by our software is valid compared to the conventional manual tracking method. In addition, the automatic indication to switch from automatic to manual mode in extreme cases, and calibration without attaching a radiopaque object, are convenient and useful for users. Semi-automatic smoothing and segmentation provide further information for functional motion analysis, which benefits further statistical analysis such as functional classification and prognostication for dysphagia. This software could therefore provide researchers in the field of dysphagia with a convenient, useful, all-in-one platform for analyzing hyoid bone motion. Further development of our method to track other swallowing-related structures or objects, such as the epiglottis and bolus, and to carry out 2D curve registration may be needed for a more comprehensive functional data analysis of dysphagia with big data.
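
    A simplified stand-in for the four-phase trajectory segmentation, splitting a smoothed trajectory by which velocity component dominates (the paper's actual criterion may differ):

      import numpy as np

      def segment_phases(traj):
          # traj: N x 2 array of smoothed (x, y) positions. Assign each sample to
          # elevation (0), anterior movement (1), descent (2), or return (3) by
          # the dominant velocity component.
          vx, vy = np.gradient(traj[:, 0]), np.gradient(traj[:, 1])
          return np.where(vy > np.abs(vx), 0,
                 np.where(vx > np.abs(vy), 1,
                 np.where(-vy > np.abs(vx), 2, 3)))

      t = np.linspace(0, 2 * np.pi, 100)
      traj = np.column_stack([1 - np.cos(t), np.sin(t)])     # loop-like toy motion
      print(np.bincount(segment_phases(traj), minlength=4))  # samples per phase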

  13. Automated object-based classification of topography from SRTM data

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens

    2012-01-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity, using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance, and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and the standard deviation of elevation, respectively. The results reasonably resemble patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and the standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with visualization and download functionality. PMID:22485060
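
    The two-parameter classification rule can be sketched directly: local variance from moving-window statistics, then thresholds at the mean elevation and mean local standard deviation (the window size is an assumption):

      import numpy as np
      from scipy.ndimage import uniform_filter

      def classify_topography(elevation, window=5):
          # Moving-window mean and standard deviation of elevation, then a
          # four-class partition by thresholds at the respective means.
          local_mean = uniform_filter(elevation, window)
          local_sq = uniform_filter(elevation ** 2, window)
          local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0))
          high = elevation > elevation.mean()      # high vs. low elevation
          rough = local_std > local_std.mean()     # rough vs. smooth relief
          return high.astype(int) * 2 + rough.astype(int)   # classes 0..3

      rng = np.random.default_rng(5)
      dem = np.cumsum(np.cumsum(rng.normal(size=(64, 64)), 0), 1)  # toy elevation tile
      print(np.bincount(classify_topography(dem).ravel()))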

  14. Automated object-based classification of topography from SRTM data

    NASA Astrophysics Data System (ADS)

    Drăguţ, Lucian; Eisank, Clemens

    2012-03-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity, using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance, and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and the standard deviation of elevation, respectively. The results reasonably resemble patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and the standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with visualization and download functionality.

  15. Statistical shape (ASM) and appearance (AAM) models for the segmentation of the cerebellum in fetal ultrasound

    NASA Astrophysics Data System (ADS)

    Reyes López, Misael; Arámbula Cosío, Fernando

    2017-11-01

    The cerebellum is an important structure for determining the gestational age of the fetus; moreover, most of the abnormalities it presents are related to growth disorders. In this work, we present the results of segmenting the fetal cerebellum with statistical shape and appearance models. Both models were tested on ultrasound images of the fetal brain taken from 23 pregnant women between 18 and 24 gestational weeks. The accuracy results obtained on 11 ultrasound images show a mean Hausdorff distance of 6.08 mm between the manual segmentation and the segmentation using the active shape model, and a mean Hausdorff distance of 7.54 mm between the manual segmentation and the segmentation using the active appearance model. The reported results demonstrate that the active shape model is more robust for segmentation of the fetal cerebellum in ultrasound images.
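
    The Hausdorff distance quoted above can be computed symmetrically from its two directed halves; a minimal sketch with synthetic contours:

      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      # Symmetric Hausdorff distance between a manual and an automatic contour,
      # each an N x 2 array of boundary points (synthetic here).
      theta = np.linspace(0, 2 * np.pi, 100)
      manual = np.column_stack([10 * np.cos(theta), 10 * np.sin(theta)])
      auto = np.column_stack([10.5 * np.cos(theta), 9.5 * np.sin(theta)])

      h = max(directed_hausdorff(manual, auto)[0],
              directed_hausdorff(auto, manual)[0])
      print(f"Hausdorff distance: {h:.2f} (same units as the coordinates)")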

  16. Deformable segmentation of 3D MR prostate images via distributed discriminative dictionary and ensemble learning

    PubMed Central

    Guo, Yanrong; Gao, Yaozong; Shao, Yeqin; Price, True; Oto, Aytekin; Shen, Dinggang

    2014-01-01

    Purpose: Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. Traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. Methods: To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the differential power of traditional dictionary-based classification methods, the authors' DDD learning approach takes three strategies. First, two dictionaries for prostate and nonprostate tissues are built, respectively, using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. These discriminative dictionaries are located on different patches of the prostate surface and trained to adaptively capture the appearance in different prostate zones, thus achieving better local tissue differentiation. For each local region, multiple classifiers are trained based on the randomly selected samples and finally assembled by a specific fusion method. In addition to this nonparametric appearance model, a prostate shape model is learned from the shape statistics using a novel approach, sparse shape composition, which can model nonGaussian distributions of shape variation and regularize the 3D mesh deformation by constraining it within the observed shape subspace. Results: The proposed method has been evaluated on two datasets consisting of T2-weighted MR prostate images. For the first (internal) dataset, the classification effectiveness of the authors' improved dictionary learning has been validated by comparing it with three other variants of traditional dictionary learning methods. The experimental results show that the authors' method yields a Dice Ratio of 89.1% compared to the manual segmentation, which is more accurate than the three state-of-the-art MR prostate segmentation methods under comparison. For the second dataset, the MICCAI 2012 challenge dataset, the authors' proposed method yields a Dice Ratio of 87.4%, which also achieves better segmentation accuracy than other methods under comparison. 
Conclusions: A new magnetic resonance image prostate segmentation method is proposed based on the combination of deformable model and dictionary learning methods, which achieves more accurate segmentation performance on prostate T2 MR images. PMID:24989402
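
    The residual-based classification rule at the heart of dictionary-based segmentation can be sketched as follows; the random dictionaries here are illustrative stand-ins for the learned discriminative dictionaries:

      import numpy as np
      from sklearn.linear_model import orthogonal_mp

      rng = np.random.default_rng(6)
      # Illustrative random dictionaries (columns = atoms) for the two tissue
      # classes; real dictionaries would be learned from discriminative features.
      D_prostate = rng.normal(0, 1, (64, 40)) + 1.0      # brighter atoms
      D_background = rng.normal(0, 1, (64, 40)) - 1.0    # darker atoms

      def classify_patch(patch, n_nonzero=5):
          # Label by whichever dictionary reconstructs the patch with lower residual.
          residuals = []
          for D in (D_prostate, D_background):
              coef = orthogonal_mp(D, patch, n_nonzero_coefs=n_nonzero)
              residuals.append(np.linalg.norm(patch - D @ coef))
          return int(np.argmin(residuals))               # 0 = prostate, 1 = background

      patch = rng.normal(1.0, 0.5, 64)                   # bright, prostate-like patch
      print("class:", classify_patch(patch))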

  17. Segmentation methods for breast vasculature in dual-energy contrast-enhanced digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lau, Kristen C.; Lee, Hyo Min; Singh, Tanushriya; Maidment, Andrew D. A.

    2015-03-01

    Dual-energy contrast-enhanced digital breast tomosynthesis (DE CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania has an ongoing DE CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). DE images are obtained by a weighted logarithmic subtraction of the high-energy (HE) and low-energy (LE) image pairs. Temporal subtraction of the post-contrast DE images from the pre-contrast DE image is performed to analyze iodine uptake. Our previous work investigated image registration methods to correct for patient motion, enhancing the evaluation of vascular kinetics. In this project we investigate a segmentation algorithm that identifies blood vessels in the breast from our temporal DE subtraction images. Anisotropic diffusion filtering, Gabor filtering, and morphological filtering are used to enhance vessel features. Vessel labeling methods are then used to successfully distinguish vessel and background features. Statistical and clinical evaluations of segmentation accuracy in DE CE-DBT images are ongoing.
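
    A rough sketch of vessel enhancement on a synthetic image; Gaussian smoothing plus the Frangi vesselness filter stands in here for the study's anisotropic-diffusion and Gabor filtering, followed by thresholding and small-object removal (threshold choice is an assumption):

      import numpy as np
      from skimage.filters import frangi, gaussian
      from skimage.morphology import remove_small_objects

      rng = np.random.default_rng(7)
      img = rng.normal(0.1, 0.02, (128, 128))
      img[60:63, 10:120] += 0.5                        # synthetic bright vessel

      smoothed = gaussian(img, sigma=1.0)
      vesselness = frangi(smoothed, sigmas=range(1, 5), black_ridges=False)
      mask = vesselness > 0.5 * vesselness.max()       # simple threshold (assumption)
      mask = remove_small_objects(mask, min_size=20)   # morphological cleanup
      print("vessel pixels:", mask.sum())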

  18. Brain tissues volume measurements from 2D MRI using parametric approach

    NASA Astrophysics Data System (ADS)

    L'vov, A. A.; Toropova, O. A.; Litovka, Yu. V.

    2018-04-01

    The purpose of this paper is to propose a fully automated method for assessing the volume of structures within the human brain. Our statistical approach uses the maximum interdependency principle in the decision-making process for measurement consistency and unequal observations. Outlier detection is performed using the maximum normalized residual test. We propose a statistical model that utilizes knowledge of tissue distribution in the human brain and applies partial data restoration to improve precision. The approach is computationally efficient and independent of the segmentation algorithm used in the application.
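
    The maximum normalized residual test is Grubbs' test; a minimal sketch for a single suspected outlier among repeated volume measurements:

      import numpy as np
      from scipy import stats

      def grubbs_outlier(x, alpha=0.05):
          # Maximum normalized residual (Grubbs') test for a single outlier.
          n = len(x)
          z = np.abs(x - x.mean()) / x.std(ddof=1)   # normalized residuals
          G = z.max()                                # test statistic
          t2 = stats.t.ppf(1 - alpha / (2 * n), n - 2) ** 2
          G_crit = (n - 1) / np.sqrt(n) * np.sqrt(t2 / (n - 2 + t2))
          return (int(z.argmax()), G, G_crit) if G > G_crit else None

      volumes = np.array([1402., 1396., 1410., 1388., 1405., 1250.])  # toy values
      print(grubbs_outlier(volumes))             # flags the inconsistent measurement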

  19. Deformable segmentation of 3D MR prostate images via distributed discriminative dictionary and ensemble learning.

    PubMed

    Guo, Yanrong; Gao, Yaozong; Shao, Yeqin; Price, True; Oto, Aytekin; Shen, Dinggang

    2014-07-01

    Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. Traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the differential power of traditional dictionary-based classification methods, the authors' DDD learning approach takes three strategies. First, two dictionaries for prostate and nonprostate tissues are built, respectively, using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. These discriminative dictionaries are located on different patches of the prostate surface and trained to adaptively capture the appearance in different prostate zones, thus achieving better local tissue differentiation. For each local region, multiple classifiers are trained based on the randomly selected samples and finally assembled by a specific fusion method. In addition to this nonparametric appearance model, a prostate shape model is learned from the shape statistics using a novel approach, sparse shape composition, which can model nonGaussian distributions of shape variation and regularize the 3D mesh deformation by constraining it within the observed shape subspace. The proposed method has been evaluated on two datasets consisting of T2-weighted MR prostate images. For the first (internal) dataset, the classification effectiveness of the authors' improved dictionary learning has been validated by comparing it with three other variants of traditional dictionary learning methods. The experimental results show that the authors' method yields a Dice Ratio of 89.1% compared to the manual segmentation, which is more accurate than the three state-of-the-art MR prostate segmentation methods under comparison. For the second dataset, the MICCAI 2012 challenge dataset, the authors' proposed method yields a Dice Ratio of 87.4%, which also achieves better segmentation accuracy than other methods under comparison. 
A new magnetic resonance image prostate segmentation method is proposed based on the combination of deformable model and dictionary learning methods, which achieves more accurate segmentation performance on prostate T2 MR images.

  20. Semi-automatic image analysis methodology for the segmentation of bubbles and drops in complex dispersions occurring in bioreactors

    NASA Astrophysics Data System (ADS)

    Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.

    2006-09-01

    Characterization of multiphase systems occurring in fermentation processes is a time-consuming and tedious process when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm which was tested in two, three and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no significant statistical differences. The method was able to reduce the total processing time for the measurements of bubbles and drops in different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.
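
    A sketch of circle detection with the standard circular Hough transform (the paper uses an improved variant able to reconstruct partially occluded contours):

      import numpy as np
      from skimage.draw import circle_perimeter
      from skimage.feature import canny
      from skimage.transform import hough_circle, hough_circle_peaks

      img = np.zeros((100, 100))
      rr, cc = circle_perimeter(40, 50, 18)
      img[rr, cc] = 1.0                            # one synthetic bubble outline

      edges = canny(img, sigma=1.0)
      radii = np.arange(10, 30, 2)
      hspaces = hough_circle(edges, radii)
      accums, cx, cy, found_r = hough_circle_peaks(hspaces, radii, total_num_peaks=1)
      print("center:", (cy[0], cx[0]), "radius:", found_r[0])   # diameter = 2 * r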

  1. Sequential Monte Carlo tracking of the marginal artery by multiple cue fusion and random forest regression.

    PubMed

    Cherry, Kevin M; Peplinski, Brandon; Kim, Lauren; Wang, Shijun; Lu, Le; Zhang, Weidong; Liu, Jianfei; Wei, Zhuoshi; Summers, Ronald M

    2015-01-01

    Given the potential importance of marginal artery localization in automated registration in computed tomography colonography (CTC), we have devised a semi-automated method of marginal vessel detection employing sequential Monte Carlo tracking (also known as particle filtering) by multiple cue fusion based on intensity, vesselness, organ detection, and minimum spanning tree information for poorly enhanced vessel segments. We then employed a random forest algorithm for intelligent cue fusion and decision making, which achieved high sensitivity and robustness. After applying a vessel pruning procedure to the tracking results, we achieved statistically significantly improved precision compared to a baseline Hessian detection method (75.2% versus 2.7% for the baseline, p<0.001). The method also showed a statistically significantly improved recall rate compared to a 2-cue baseline method using fewer vessel cues (67.7% versus 30.7%, p<0.001). These results demonstrate that marginal artery localization on CTC is feasible by combining a discriminative classifier (i.e., random forest) with a sequential Monte Carlo tracking mechanism. In so doing, we present the effective application of an anatomical probability map to vessel pruning as well as a supplementary spatial coordinate system for colonic segmentation and registration when this task has been confounded by colon lumen collapse. Published by Elsevier B.V.

  2. Alluvial substrate mapping by automated texture segmentation of recreational-grade side scan sonar imagery

    PubMed Central

    Buscombe, Daniel; Wheaton, Joseph M.

    2018-01-01

    Side scan sonar in low-cost ‘fishfinder’ systems has become popular in aquatic ecology and sedimentology for imaging submerged riverbed sediment at coverages and resolutions sufficient to relate bed texture to grain size. Traditional methods to map bed texture (i.e. physical samples) are relatively high-cost and provide low spatial coverage compared to sonar, which can continuously image several kilometers of channel in a few hours. Towards the goal of automating the classification of bed habitat features, we investigate relationships between substrates and statistical descriptors of bed textures in side scan sonar echograms of alluvial deposits. We develop a method for automated segmentation of bed textures into between two and five grain-size classes. Second-order texture statistics are used in conjunction with a Gaussian Mixture Model to classify the heterogeneous bed into small homogeneous patches of sand, gravel, and boulders with an average accuracy of 80%, 49%, and 61%, respectively. Reach-averaged proportions of these sediment types were within 3% of those in similar maps derived from multibeam sonar. PMID:29538449
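
    A minimal sketch of the texture-statistics-plus-mixture-model pipeline: gray-level co-occurrence (second-order) features per patch, clustered with a Gaussian mixture (patch sizes and feature choices are assumptions):

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.mixture import GaussianMixture

      def texture_features(patch):
          # Second-order (GLCM) texture statistics for one echogram patch. Older
          # scikit-image releases spell these greycomatrix / greycoprops.
          glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                              levels=32, symmetric=True, normed=True)
          return [graycoprops(glcm, p).mean()
                  for p in ('contrast', 'homogeneity', 'energy', 'correlation')]

      rng = np.random.default_rng(8)
      # Toy patches: fine-grained "sand" vs. coarse, blocky "boulder" texture.
      patches = [rng.integers(0, 32, (32, 32), dtype=np.uint8) for _ in range(20)]
      patches += [np.kron(rng.integers(0, 32, (4, 4), dtype=np.uint8),
                          np.ones((8, 8), dtype=np.uint8)) for _ in range(20)]
      X = np.array([texture_features(p) for p in patches])

      gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
      print(gmm.predict(X))                        # textural class per patch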

  3. Listening through Voices: Infant Statistical Word Segmentation across Multiple Speakers

    ERIC Educational Resources Information Center

    Graf Estes, Katharine; Lew-Williams, Casey

    2015-01-01

    To learn from their environments, infants must detect structure behind pervasive variation. This presents substantial and largely untested learning challenges in early language acquisition. The current experiments address whether infants can use statistical learning mechanisms to segment words when the speech signal contains acoustic variation…

  4. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action.

    PubMed

    Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter; Egger, Jan

    2018-01-01

    Computer-assisted technologies based on algorithmic software segmentation are an increasing topic of interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and the industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach for use in clinical practice. In this retrospective, randomized, controlled trial the accuracy and agreement of the open-source segmentation algorithm GrowCut was assessed through comparison to the manually generated ground truth of the same anatomy using 10 CT lower jaw data-sets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice Score and the Hausdorff distance. Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice Score values of over 85% and Hausdorff distances below 33.5 voxels were achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant (p>0.05) and correlation coefficients were close to one (r > 0.94) for all comparisons made between the two groups. Functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed by the presented interactive open-source approach. In the cranio-maxillofacial complex the method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and it offers several advantages. Because of its open-source basis the method could be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches and studies with larger amounts of data are areas of future work.

  5. 3D variational brain tumor segmentation on a clustered feature set

    NASA Astrophysics Data System (ADS)

    Popuri, Karteek; Cobzas, Dana; Jagersand, Martin; Shah, Sirish L.; Murtha, Albert

    2009-02-01

    Tumor segmentation from MRI data is a particularly challenging and time consuming task. Tumors have a large diversity in shape and appearance with intensities overlapping the normal brain tissues. In addition, an expanding tumor can also deflect and deform nearby tissue. Our work addresses these last two difficult problems. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multi-dimensional feature set. Further, we extract clusters which provide a compact representation of the essential information in these features. The main idea in this paper is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to the previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned inside and outside region voxel probabilities in the cluster space. We incorporate prior knowledge about the normal brain tissue appearance, during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages the clusters in the ventricles to be in the tumor and hence better disambiguate the tumor from brain tissue. We show the performance of our method on real MRI scans. The experimental dataset includes MRI scans, from patients with difficult instances, with tumors that are inhomogeneous in appearance, small in size and in proximity to the major structures in the brain. Our method shows good results on these test cases.

  6. Segment-Wise Genome-Wide Association Analysis Identifies a Candidate Region Associated with Schizophrenia in Three Independent Samples

    PubMed Central

    Rietschel, Marcella; Mattheisen, Manuel; Breuer, René; Schulze, Thomas G.; Nöthen, Markus M.; Levinson, Douglas; Shi, Jianxin; Gejman, Pablo V.; Cichon, Sven; Ophoff, Roel A.

    2012-01-01

    Recent studies suggest that variation in complex disorders (e.g., schizophrenia) is explained by a large number of genetic variants with small effect sizes (Odds Ratio ∼1.05–1.1). The statistical power to detect these genetic variants in Genome Wide Association (GWA) studies with large numbers of cases and controls (∼15,000) is still low. As it will be difficult to further increase sample size, we decided to explore an alternative method for analyzing GWA data in a study of schizophrenia, dramatically reducing the number of statistical tests. The underlying hypothesis was that at least some of the genetic variants related to a common outcome are collocated in segments of chromosomes at a wider scale than single genes. Our approach was therefore to study the association between relatively large segments of DNA and disease status. An association test was performed for each SNP, and the number of nominally significant tests in a segment was counted. We then performed a permutation-based binomial test to determine whether this region contained significantly more nominally significant SNPs than expected under the null hypothesis of no association, taking linkage into account. Genome Wide Association data from three independent schizophrenia case/control cohorts of European ancestry (Dutch, German, and US) were analyzed using segments of DNA of variable length (2 to 32 Mbp). Using this approach we identified a region at chromosome 5q23.3-q31.3 (128–160 Mbp) that was significantly enriched with nominally associated SNPs in the three independent case-control samples. We conclude that considering relatively wide segments of chromosomes may reveal reliable relationships between the genome and schizophrenia, suggesting novel methodological possibilities as well as raising theoretical questions. PMID:22723893
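
    Ignoring linkage, the per-segment enrichment test reduces to a binomial test on the count of nominally significant SNPs; the paper's permutation-based version corrects for correlated SNPs, which this sketch does not:

      import numpy as np
      from scipy.stats import binomtest

      def segment_enrichment(p_values, segment_ids, alpha=0.05):
          # For each segment, test whether it holds more nominally significant
          # SNPs than expected by chance (scipy >= 1.7; older versions expose
          # scipy.stats.binom_test instead).
          out = {}
          for seg in np.unique(segment_ids):
              p_seg = p_values[segment_ids == seg]
              k = int((p_seg < alpha).sum())       # nominally significant SNPs
              out[int(seg)] = binomtest(k, len(p_seg), alpha,
                                        alternative='greater').pvalue
          return out

      rng = np.random.default_rng(9)
      pvals = rng.random(10000)
      pvals[:500] = rng.random(500) * 0.1          # enrich the first segment
      segs = np.repeat(np.arange(10), 1000)        # ten 1000-SNP segments
      print(segment_enrichment(pvals, segs))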

  7. Quality of Radiomic Features in Glioblastoma Multiforme: Impact of Semi-Automated Tumor Segmentation Software

    PubMed Central

    Lee, Myungeun; Woo, Boyeong; Kuo, Michael D.; Jamshidi, Neema

    2017-01-01

    Objective The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. Materials and Methods MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive, in which post-contrast T1-weighted imaging and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to the contrast-enhancing lesion, necrotic portions, and non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients, cluster consensus, and the Rand statistic. Results Our results showed that most of the radiomic features in GBM were highly stable. Over 90% of the 180 features showed good stability (intra-class correlation coefficient [ICC] ≥ 0.8), whereas only 7 features were of poor stability (ICC < 0.5). Most first-order statistics and morphometric features showed moderate-to-high NDR (4 > NDR ≥ 1), while more than 35% of the texture features showed poor NDR (< 1). Features clustered into only 5 groups, indicating that they were highly redundant. Conclusion The use of semi-automated software tools provided sufficiently reliable tumor segmentation and feature stability, thus helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed for determination of representative signature features before further development of radiomics. PMID:28458602

  8. Anterior segment optical coherence tomography evaluation of corneal epithelium healing time after 2 different surface ablation methods

    PubMed Central

    Eliaçik, Mustafa; Bayramlar, Hüseyin; Erdur, Sevil K.; Karabela, Yunus; Demirci, Göktuğ; Gülkilik, İbrahim G.; Özsütçü, Mustafa

    2015-01-01

    Objectives: To compare epithelial healing time following laser epithelial keratomileusis (LASEK) and photorefractive keratectomy (PRK) with anterior segment optical coherence tomography (AS-OCT). Methods: This prospective interventional case series comprised 56 eyes of 28 patients who underwent laser refractive surgery in the Department of Ophthalmology, Medipol University Medical Faculty, Istanbul, Turkey, between March 2014 and May 2014. Each patient was randomized to have one eye operated on with PRK and the other with LASEK. Patients were examined daily for 5 days, and epithelial healing time was assessed using AS-OCT without removing the therapeutic contact lens (TCL). Average discomfort scores were calculated from ratings obtained from questions regarding pain, photophobia, and lacrimation on a scale of 0 (none) to 5. Results: The mean re-epithelialization time assessed with AS-OCT was 3.07±0.64 days in the PRK group and 3.55±0.54 days in the LASEK group, a statistically significant difference (p=0.03). The mean subjective discomfort score was 4.42±0.50 in the PRK eyes and 2.85±0.44 in the LASEK eyes on the first exam day (p=0.001). The differences in the scores obtained on the second (p=0.024) and third (p=0.03) days were also statistically significant, while the fourth (p=0.069) and fifth day (p=0.1) scores showed no statistically significant difference between groups. Conclusion: PRK showed a statistically significantly shorter epithelial healing time but a statistically significantly higher discomfort score until the fourth postoperative day compared with LASEK. PMID:25630007

  9. A statistical pixel intensity model for segmentation of confocal laser scanning microscopy images.

    PubMed

    Calapez, Alexandre; Rosa, Agostinho

    2010-09-01

    Confocal laser scanning microscopy (CLSM) has been widely used in the life sciences for the characterization of cell processes because it allows the recording of the distribution of fluorescence-tagged macromolecules on a section of the living cell. It is in fact the cornerstone of many molecular transport and interaction quantification techniques where the identification of regions of interest through image segmentation is usually a required step. In many situations, because of the complexity of the recorded cellular structures or because of the amounts of data involved, image segmentation either is too difficult or inefficient to be done by hand and automated segmentation procedures have to be considered. Given the nature of CLSM images, statistical segmentation methodologies appear as natural candidates. In this work we propose a model to be used for statistical unsupervised CLSM image segmentation. The model is derived from the CLSM image formation mechanics and its performance is compared to the existing alternatives. Results show that it provides a much better description of the data on classes characterized by their mean intensity, making it suitable not only for segmentation methodologies with known number of classes but also for use with schemes aiming at the estimation of the number of classes through the application of cluster selection criteria.

  10. Model-based segmentation in orbital volume measurement with cone beam computed tomography and evaluation against current concepts.

    PubMed

    Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald

    2016-01-01

    Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based (26.63 ± 3.15 cm³) nor model-based (26.87 ± 2.99 cm³) measurements were significantly different from manual volume measurements (26.65 ± 4.0 cm³). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although the atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.

  11. Automated Identification and Characterization of Secondary & Tertiary gamma’ Precipitates in Nickel-Based Superalloys (PREPRINT)

    DTIC Science & Technology

    2010-01-01

    […] were identified using a combination of visual inspection and intensity information from the EFTEM images. The microstructural statistics obtained from the segmented γ’ precipitates agreed with those of the […]. […] is its ability to automate segmentation of precipitates in a reproducible manner for acquiring microstructural statistics that relate to both […]

  12. Detection of Single Standing Dead Trees from Aerial Color Infrared Imagery by Segmentation with Shape and Intensity Priors

    NASA Astrophysics Data System (ADS)

    Polewski, P.; Yao, W.; Heurich, M.; Krzystek, P.; Stilla, U.

    2015-03-01

    Standing dead trees, known as snags, are an essential factor in maintaining biodiversity in forest ecosystems. Combined with their role as carbon sinks, this makes for a compelling reason to study their spatial distribution. This paper presents an integrated method to detect and delineate individual dead tree crowns from color infrared aerial imagery. Our approach consists of two steps which incorporate statistical information about prior distributions of both the image intensities and the shapes of the target objects. In the first step, we perform a Gaussian Mixture Model clustering in the pixel color space with priors on the cluster means, obtaining up to 3 components corresponding to dead trees, living trees, and shadows. We then refine the dead tree regions using a level set segmentation method enriched with a generative model of the dead trees' shape distribution as well as a discriminative model of their pixel intensity distribution. The iterative application of the statistical shape template yields the set of delineated dead crowns. The prior information enforces the consistency of the template's shape variation with the shape manifold defined by manually labeled training examples, which makes it possible to separate crowns located in close proximity and prevents the formation of large crown clusters. Also, the statistical information built into the segmentation gives rise to an implicit detection scheme, because the shape template evolves towards an empty contour if not enough evidence for the object is present in the image. We test our method on 3 sample plots from the Bavarian Forest National Park with reference data obtained by manually marking individual dead tree polygons in the images. Our results are scenario-dependent and range from a correctness/completeness of 0.71/0.81 up to 0.77/1, with an average center-of-gravity displacement of 3-5 pixels between the detected and reference polygons.

  13. The Metamorphosis of the Statistical Segmentation Output: Lexicalization during Artificial Language Learning

    ERIC Educational Resources Information Center

    Fernandes, Tania; Kolinsky, Regine; Ventura, Paulo

    2009-01-01

    This study combined artificial language learning (ALL) with conventional experimental techniques to test whether statistical speech segmentation outputs are integrated into adult listeners' mental lexicon. Lexicalization was assessed through inhibitory effects of novel neighbors (created by the parsing process) on auditory lexical decisions to…

  14. Hybrid Method of Transvertebral Foraminotomy Combined with Anterior Cervical Decompression and Fusion for Multilevel Cervical Disease.

    PubMed

    Yamamoto, Yu; Hara, Masahito; Nishimura, Yusuke; Haimoto, Shoichi; Wakabayashi, Toshihiko

    2018-03-15

    Transvertebral foraminotomy (TVF) combined with anterior cervical decompression and fusion (ACDF) can be used to treat multilevel cervical spondylotic myelopathy and radiculopathy; however, the radiological outcomes and effectiveness of this hybrid procedure are unknown. We retrospectively assessed 22 consecutive patients treated with combined TVF and ACDF between January 2007 and May 2016. The Japanese Orthopedic Association (JOA) score and Odom's criteria were analyzed. Radiological assessment included the C2-7 sagittal Cobb angle (CA) and range of motion (ROM). The tilting angle (TA), TA ROM, and disc height (DH) of segments adjacent to the ACDF were also measured. Adjacent segment degeneration, which includes disc degeneration, was evaluated. The mean postoperative follow-up was 41.7 months. All surgeries were performed at two adjacent segments, with ACDF at the upper segment and TVF at the lower segment. The JOA scores significantly improved. There were no significant differences in the C2-7 CA, C2-7 ROM, TA, and TA ROM, but there was a statistically significant decrease in the DH of the lower segment adjacent to the ACDF. Disc degeneration progressed in two patients, but no patient met the criterion for adjacent segment degeneration over the follow-up. TVF combined with ACDF produced excellent clinical results and maintained spinal alignment, albeit with a reduction in DH. TVF was safely performed at the lower segment adjacent to the ACDF, although this might result in earlier degeneration. In conclusion, this hybrid method is less invasive and helps reduce the number of fused levels.

  15. Design of Content Based Image Retrieval Scheme for Diabetic Retinopathy Images using Harmony Search Algorithm.

    PubMed

    Sivakamasundari, J; Natarajan, V

    2015-01-01

    Diabetic Retinopathy (DR) is a disorder that affects the structure of retinal blood vessels due to long-standing diabetes mellitus. Automated segmentation of blood vessels is vital for periodic screening and timely diagnosis. An attempt has been made to generate continuous retinal vasculature for the design of a Content Based Image Retrieval (CBIR) application. The typical normal and abnormal retinal images are preprocessed to improve the vessel contrast. The blood vessels are segmented using an evolutionary Harmony Search Algorithm (HSA) combined with the Otsu Multilevel Thresholding (MLT) method, with thresholds selected by the best objective function values. The segmentation results are validated against corresponding ground truth images using binary similarity measures. Statistical, textural and structural features are obtained from the segmented images of normal and DR-affected retinas and are analyzed. CBIR systems in medical image retrieval are used to assist physicians in clinical decision support and in research. A CBIR system is developed using the HSA-based Otsu MLT segmentation technique and the features obtained from the segmented images. Similarity matching is carried out between the features of query and database images using the Euclidean distance measure. Similar images are ranked and retrieved. The retrieval performance of the CBIR system is evaluated in terms of precision and recall. The CBIR systems developed using HSA-based Otsu MLT and conventional Otsu MLT methods are compared. Precision and recall are found to be 96% and 58%, respectively, for the CBIR system using HSA-based Otsu MLT segmentation. This automated CBIR system could be recommended for use in computer-assisted diagnosis for diabetic retinopathy screening.
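    The two computational steps named above (multilevel Otsu thresholding and Euclidean-distance retrieval) can be sketched as follows. This is a minimal stand-in, not the published system: scikit-image's exhaustive multi-Otsu search replaces the Harmony-Search-optimized thresholds, and the feature arrays are assumed inputs.

        import numpy as np
        from skimage.filters import threshold_multiotsu

        def segment_vessels(channel, classes=3):
            """Multilevel Otsu thresholding of a contrast-enhanced channel."""
            thresholds = threshold_multiotsu(channel, classes=classes)
            return np.digitize(channel, bins=thresholds)

        def retrieve(query_features, db_features, k=5):
            """Rank database images by Euclidean distance to the query."""
            dist = np.linalg.norm(db_features - query_features, axis=1)
            return np.argsort(dist)[:k]  # indices of the k most similar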

  16. Automated skin lesion segmentation with kernel density estimation

    NASA Astrophysics Data System (ADS)

    Pardo, A.; Real, E.; Fernandez-Barreras, G.; Madruga, F. J.; López-Higuera, J. M.; Conde, O. M.

    2017-07-01

    Skin lesion segmentation is a complex step in dermoscopic pathological diagnosis. Kernel density estimation is proposed as a segmentation technique based on the statistical distribution of color intensities in the lesion and non-lesion regions.
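    A minimal sketch of such a density-based pixel classifier, assuming grayscale intensities and training samples drawn from known lesion and non-lesion regions (both assumptions of this illustration):

        import numpy as np
        from scipy.stats import gaussian_kde

        def kde_segment(gray, lesion_samples, background_samples):
            """Label each pixel by a likelihood ratio of two kernel density
            estimates fitted to lesion and non-lesion intensities."""
            kde_lesion = gaussian_kde(lesion_samples)
            kde_background = gaussian_kde(background_samples)
            flat = gray.ravel().astype(float)
            lesion_mask = kde_lesion(flat) > kde_background(flat)
            return lesion_mask.reshape(gray.shape)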

  17. K-SPAN: A lexical database of Korean surface phonetic forms and phonological neighborhood density statistics.

    PubMed

    Holliday, Jeffrey J; Turnbull, Rory; Eychenne, Julien

    2017-10-01

    This article presents K-SPAN (Korean Surface Phonetics and Neighborhoods), a database of surface phonetic forms and several measures of phonological neighborhood density for 63,836 Korean words. Currently publicly available Korean corpora are limited by the fact that they only provide orthographic representations in Hangeul, which is problematic since phonetic forms in Korean cannot be reliably predicted from orthographic forms. We describe the method used to derive the surface phonetic forms from a publicly available orthographic corpus of Korean, and report on several statistics calculated using this database; namely, segment unigram frequencies, which are compared to previously reported results, along with segment-based and syllable-based neighborhood density statistics for three types of representation: an "orthographic" form, which is a quasi-phonological representation, a "conservative" form, which maintains all known contrasts, and a "modern" form, which represents the pronunciation of contemporary Seoul Korean. These representations are rendered in an ASCII-encoded scheme, which allows users to query the corpus without having to read Korean orthography, and permits the calculation of a wide range of phonological measures.
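    As an illustration of the kind of neighborhood statistic reported, the sketch below counts segment-based neighbors (words at edit distance 1); it is a generic definition, not the K-SPAN implementation.

        def is_neighbor(a, b):
            """True if two segment strings differ by exactly one
            substitution, insertion, or deletion (edit distance 1)."""
            if abs(len(a) - len(b)) > 1 or a == b:
                return False
            if len(a) == len(b):
                return sum(x != y for x, y in zip(a, b)) == 1
            if len(a) > len(b):
                a, b = b, a                    # make a the shorter string
            return any(b[:i] + b[i + 1:] == a for i in range(len(b)))

        def neighborhood_density(word, lexicon):
            """Count the phonological neighbors of `word` in `lexicon`."""
            return sum(is_neighbor(word, w) for w in lexicon)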

  18. Diurnal Alterations of Refraction, Anterior Segment Biometrics, and Intraocular Pressure in Long-Time Dehydration due to Religious Fasting.

    PubMed

    Baser, Gonen; Cengiz, Hakan; Uyar, Murat; Seker Un, Emine

    2016-01-01

    To investigate the effects of dehydration due to fasting on diurnal changes of intraocular pressure, anterior segment biometrics, and refraction. The intraocular pressures, anterior segment biometrics (axial length: AL; central corneal thickness: CCT; lens thickness: LT; anterior chamber depth: ACD), and refractive measurements of 30 eyes of 15 fasting healthy male volunteers were recorded at 8:00 in the morning and 17:00 in the evening in the Ramadan of 2013 and two months later. The results were compared, and the statistical analyses were performed using RStudio software (version 0.98.501). The variables were investigated using visual (histograms, probability plots) and analytical methods (Kolmogorov-Smirnov/Shapiro-Wilk test) to determine whether or not they were normally distributed. The refractive values remained stable in the fasting as well as in the control period (p = 0.384). The axial length was slightly shorter in the fasting period (p = 0.001). The corneal thickness presented a diurnal variation, in which the cornea was thinner in the evening; the difference between the fasting and control periods was not statistically significant (p = 0.359). The major differences were observed in the anterior chamber depth and IOP. The ACD was shallower in the evening during the fasting period, whereas it was deeper in the control period. The diurnal IOP difference was greater in the fasting period than in the control period. Both were statistically significant (p = 0.001). The LT remained unchanged in both periods. The main findings were thus the anterior chamber shallowing in the evening hours and the IOP changes. Our study supports the hypothesis that the posterior segment of the eye is largely responsible for the axial length alterations and that normovolemia has a dominant influence on diurnal IOP changes.

  19. An Efficient Pipeline for Abdomen Segmentation in CT Images.

    PubMed

    Koyuncu, Hasan; Ceylan, Rahime; Sivri, Mesut; Erdogan, Hasan

    2018-04-01

    Computed tomography (CT) scans usually include some disadvantages due to the nature of the imaging procedure, and these handicaps prevent accurate abdomen segmentation. Discontinuous abdomen edges, the bed section of the CT, patient information, closeness between the edges of the abdomen and the CT, poor contrast, and a narrow histogram can be regarded as the most important handicaps that occur in abdominal CT scans. Currently, one or more handicaps can arise and prevent technicians from obtaining abdomen images through simple segmentation techniques. In other words, a CT scan can include the bed section, a patient's diagnostic information, low-quality abdomen edges, low-level contrast, and a narrow histogram, all in one scan. These phenomena constitute a challenge, and an efficient pipeline that is unaffected by these handicaps is required. In addition, analyses such as segmentation, feature selection, and classification are only meaningful for a real-time diagnosis system if the abdomen section can be used directly at a specific size. A statistical pipeline is designed in this study that is unaffected by the handicaps mentioned above. Intensity-based approaches, morphological processes, and histogram-based procedures are utilized to design an efficient structure. Performance evaluation is realized in experiments on 58 CT images (16 training, 16 test, and 26 validation) that include the abdomen and one or more disadvantage(s). The first part of the data (16 training images) is used to detect the pipeline's optimum parameters, while the second and third parts are utilized to evaluate and confirm the segmentation performance. The segmentation results are presented as the means of six performance metrics. The proposed method achieves remarkable average rates for training/test/validation of 98.95/99.36/99.57% (Jaccard), 99.47/99.67/99.79% (Dice), 100/99.91/99.91% (sensitivity), 98.47/99.23/99.85% (specificity), 99.38/99.63/99.87% (classification accuracy), and 98.98/99.45/99.66% (precision). In summary, a statistical pipeline performing the task of abdomen segmentation is achieved that is not affected by these disadvantages, providing detailed abdomen segmentation for use before organ and tumor segmentation, feature extraction, and classification.
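    The intensity- and morphology-based core of such a pipeline can be sketched as below; the HU threshold, the opening depth, and the overall simplification are assumptions of this illustration, not the authors' tuned parameters.

        import numpy as np
        from scipy import ndimage as ndi

        def extract_abdomen(ct_slice, hu_threshold=-300):
            """Rough abdomen mask: threshold out air, detach the bed,
            keep the largest connected component, and fill holes."""
            body = ct_slice > hu_threshold
            body = ndi.binary_opening(body, iterations=3)
            labels, n = ndi.label(body)
            if n == 0:
                return np.zeros_like(body)
            sizes = ndi.sum(body, labels, range(1, n + 1))
            largest = labels == (np.argmax(sizes) + 1)
            return ndi.binary_fill_holes(largest)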

  20. Changes in electrocardiographic findings after closed thoracostomy in patients with spontaneous pneumothorax

    PubMed Central

    Lee, Wonjae; Lee, Yoonje; Kim, Changsun; Choi, Hyuk Joong; Kang, Bossng; Lim, Tae Ho; Oh, Jaehoon; Kang, Hyunggoo; Shin, Junghun

    2017-01-01

    Objective We aimed to describe electrocardiographic (ECG) findings in spontaneous pneumothorax patients before and after closed thoracostomy. Methods This retrospective study included patients with spontaneous pneumothorax who presented to the emergency department of a tertiary urban hospital from February 2005 to March 2015. The primary outcome was a difference in ECG findings between before and after closed thoracostomy. We specifically investigated the following ECG elements: PR, QRS, QTc, axis, ST segments, and R waves in each lead. The secondary outcomes were changes in the ST segment in any lead and changes in axis after closed thoracostomy. Results Two ECG elements showed statistically significant differences after thoracostomy. With right pneumothorax volumes greater than 80%, QTc and the R waves in aVF and V5 changed significantly after thoracostomy. With left pneumothorax volumes between 31% and 80%, the ST segment in V2 and the R wave in V1 changed significantly after thoracostomy. However, the majority of ECG elements did not show statistically significant alteration after thoracostomy. Conclusion We found only minor changes in ECG after closed thoracostomy in spontaneous pneumothorax patients. PMID:28435901

  1. The influence of distal-end heat treatment on deflection of nickel-titanium archwire

    PubMed Central

    da Silva, Marcelo Faria; Pinzan-Vercelino, Célia Regina Maia; Gurgel, Júlio de Araújo

    2016-01-01

    Objective: The aim of this in vitro study was to evaluate the deflection-force behavior of nickel-titanium (NiTi) orthodontic wires adjacent to the portion submitted to heat treatment. Material and Methods: A total of 106 segments of NiTi wires (0.019 x 0.025-in) and heat-activated NiTi wires (0.016 x 0.022-in) from four commercial brands were tested. The segments were obtained from 80 archwires. For the experimental group, the distal portion of each segmented archwire was subjected to heat treatment (n = 40), while the other distal portion of the same archwire was used as a heating-free control group (n = 40). Deflection tests were performed in a temperature-controlled universal testing machine. Unpaired Student's t-tests were applied to determine if there were differences between the experimental and control groups for each commercial brand and size of wire. Statistical significance was set at p < 0.05. Results: There were no statistically significant differences between the tested groups with the same size and brand of wire. Conclusions: Heat treatment applied to the distal ends of rectangular NiTi archwires does not permanently change the elastic properties of the adjacent portions. PMID:27007766

  2. Universal partitioning of the hierarchical fold network of 50-residue segments in proteins

    PubMed Central

    Ito, Jun-ichi; Sonobe, Yuki; Ikeda, Kazuyoshi; Tomii, Kentaro; Higo, Junichi

    2009-01-01

    Background Several studies have demonstrated that protein fold space is structured hierarchically and that power-law statistics are satisfied in the relation between the numbers of protein families and protein folds (or superfamilies). We examined the internal structure and statistics in the fold space of 50 amino-acid residue segments taken from various protein folds. We used inter-residue contact patterns to measure the tertiary structural similarity among segments. Using this similarity measure, the segments were classified into a number (Kc) of clusters. We examined various Kc values for the clustering. The resolution with which the segment tertiary structures are differentiated increases with increasing Kc. Furthermore, we constructed networks by linking structurally similar clusters. Results The network was partitioned persistently into four regions for Kc ≥ 1000. This main partitioning is consistent with results of earlier studies, where similar partitioning was reported in classifying protein domain structures. Furthermore, the network was partitioned naturally into several dozens of sub-networks (i.e., communities): intra-sub-network clusters were mutually connected by numerous links, whereas inter-sub-network clusters were connected by only a few. For Kc ≥ 1000, there were about 40 major sub-networks, and their contents were conserved. This sub-partitioning is a novel finding, suggesting that the network is structured hierarchically: segments construct a cluster, clusters form a sub-network, and sub-networks constitute a region. Additionally, the network was characterized by non-power-law statistics, which is also a novel finding. Conclusion The main findings are: (1) The universe of 50-residue segments found here was characterized by non-power-law statistics; therefore, this universe differs from those previously reported for protein domains. (2) The 50-residue segments were partitioned persistently and universally into some dozens (ca. 40) of major sub-networks, irrespective of the number of clusters. (3) These major sub-networks encompassed 90% of all segments. Consequently, the protein tertiary structure is constructed using these dozens of elements (sub-networks). PMID:19454039

  3. TU-A-9A-06: Semi-Automatic Segmentation of Skin Cancer in High-Frequency Ultrasound Images: Initial Comparison with Histology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Y; Li, X; Fishman, K

    Purpose: In skin-cancer radiotherapy, the assessment of skin lesions is challenging, particularly because important features such as depth and width are hard to determine. The aim of this study is to develop an interactive segmentation method to delineate the tumor boundary using high-frequency ultrasound images and to correlate the segmentation results with the histopathological tumor dimensions. Methods: We analyzed 6 patients with a total of 10 skin lesions involving the face, scalp, and hand. The patients' various skin lesions were scanned using a high-frequency ultrasound system (Episcan, LONGPORT, INC., PA, U.S.A), with a 30-MHz single-element transducer. The lateral resolution was 14.6 micron and the axial resolution was 3.85 micron for the ultrasound image. Semiautomatic image segmentation was performed to extract the cancer region, using a robust statistics driven active contour algorithm. The corresponding histology images were also obtained after tumor resection and served as the reference standards in this study. Results: Eight of the 10 lesions were successfully segmented. The ultrasound tumor delineation correlates well with the histology assessment in all the measurements, such as depth, size, and shape. The depths measured by ultrasound have an average difference of 9.3% compared with those in the histology images. The remaining 2 cases suffered from mismatches between the pathology and ultrasound images. Conclusion: High-frequency ultrasound is a noninvasive, accurate and easily accessible modality for imaging skin cancer. Our segmentation method, combined with high-frequency ultrasound technology, provides a promising tool to estimate the extent of the tumor to guide the radiotherapy procedure and monitor treatment response.

  4. Assessing the Effects of Software Platforms on Volumetric Segmentation of Glioblastoma

    PubMed Central

    Dunn, William D.; Aerts, Hugo J.W.L.; Cooper, Lee A.; Holder, Chad A.; Hwang, Scott N.; Jaffe, Carle C.; Brat, Daniel J.; Jain, Rajan; Flanders, Adam E.; Zinn, Pascal O.; Colen, Rivka R.; Gutman, David A.

    2017-01-01

    Background Radiological assessments of biologically relevant regions in glioblastoma have been associated with genotypic characteristics, implying a potential role in personalized medicine. Here, we assess the reproducibility and association with survival of two volumetric segmentation platforms and explore how methodology could impact subsequent interpretation and analysis. Methods Post-contrast T1- and T2-weighted FLAIR MR images of 67 TCGA patients were segmented into five distinct compartments (necrosis, contrast-enhancement, FLAIR, post contrast abnormal, and total abnormal tumor volumes) by two quantitative image segmentation platforms - 3D Slicer and a method based on Velocity AI and FSL. We investigated the internal consistency of each platform by correlation statistics, association with survival, and concordance with consensus neuroradiologist ratings using ordinal logistic regression. Results We found high correlations between the two platforms for FLAIR, post contrast abnormal, and total abnormal tumor volumes (Spearman's r(67) = 0.952, 0.959, and 0.969, respectively). Only modest agreement was observed for necrosis and contrast-enhancement volumes (r(67) = 0.693 and 0.773, respectively), likely arising from differences in the manual and automated segmentation methods applied to these regions by 3D Slicer and Velocity AI/FSL, respectively. Survival analysis based on AUC revealed significant predictive power of both platforms for the following volumes: contrast-enhancement, post contrast abnormal, and total abnormal tumor volumes. Finally, ordinal logistic regression demonstrated correspondence to manual ratings for several features. Conclusion Tumor volume measurements from both volumetric platforms produced highly concordant and reproducible estimates across platforms for general features. As automated or semi-automated volumetric measurements replace manual linear or area measurements, it will become increasingly important to keep in mind that measurement differences between segmentation platforms for more detailed features could influence downstream survival or radiogenomic analyses. PMID:29600296

  5. A self-adaptive algorithm for traffic sign detection in motion image based on color and shape features

    NASA Astrophysics Data System (ADS)

    Zhang, Ka; Sheng, Yehua; Gong, Zhijun; Ye, Chun; Li, Yongqiang; Liang, Cheng

    2007-06-01

    As an important sub-system of intelligent transportation systems (ITS), the detection and recognition of traffic signs from mobile images is becoming one of the hot spots in international ITS research. To address automatic traffic sign detection in motion images, a new self-adaptive algorithm for traffic sign detection based on color and shape features is proposed in this paper. Firstly, global statistical color features of different images are computed based on statistics theory. Secondly, self-adaptive thresholds and special segmentation rules for image segmentation are designed according to these global color features. Then, for red, yellow and blue traffic signs, the color image is segmented into three binary images using these thresholds and rules. Thirdly, if the number of white pixels in a segmented binary image exceeds the filtering threshold, the binary image is further filtered. Fourthly, a gray-value projection method is used to determine the top, bottom, left and right boundaries of candidate traffic sign regions in each segmented binary image. Lastly, if the shape features of a candidate region match those of a real traffic sign, the candidate is confirmed as a detected traffic sign region. The new algorithm was applied to actual motion images of natural scenes taken by a CCD camera of the mobile photogrammetry system in Nanjing at different times. The experimental results show that the algorithm is not only simple, robust and adaptive to natural scene images, but also reliable and fast in real traffic sign detection.
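    A toy version of such a self-adaptive color threshold, derived from the image's own global statistics (the red-dominance measure and the factor k are assumptions of this sketch, not the paper's rules):

        import numpy as np

        def red_sign_mask(rgb, k=1.5):
            """Adaptive segmentation of red regions: threshold red dominance
            relative to the image's own mean and standard deviation."""
            r, g, b = (rgb[..., i].astype(float) for i in range(3))
            redness = r - (g + b) / 2.0                 # red dominance per pixel
            thr = redness.mean() + k * redness.std()    # self-adaptive threshold
            return redness > thr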

  6. A novel image processing technique for 3D volumetric analysis of severely resorbed alveolar sockets with CBCT.

    PubMed

    Manavella, Valeria; Romano, Federica; Garrone, Federica; Terzini, Mara; Bignardi, Cristina; Aimetti, Mario

    2017-06-01

    The aim of this study was to present and validate a novel procedure for the quantitative volumetric assessment of extraction sockets that combines cone-beam computed tomography (CBCT) and image processing techniques. The CBCT datasets of 9 severely resorbed extraction sockets were analyzed by means of two image processing software packages, ImageJ and Mimics, using manual and automated segmentation techniques. To test accuracy, the techniques were also applied to 5-mm spherical aluminum markers of known volume and to a polyvinyl chloride model of one alveolar socket scanned with micro-CT. Statistical differences in alveolar socket volume were found between the different methods of volumetric analysis (P < 0.0001). The automated segmentation using Mimics was the most reliable and accurate method, with a relative error of 1.5%, considerably smaller than the errors of 7% and 10% introduced by the manual method using Mimics and the automated method using ImageJ, respectively. The proposed automated segmentation protocol for the three-dimensional rendering of alveolar sockets showed more accurate results, excellent inter-observer similarity, and increased user friendliness. The clinical application of this method enables a three-dimensional evaluation of extraction socket healing after reconstructive procedures and during follow-up visits.

  7. Statistical evaluation of manual segmentation of a diffuse low-grade glioma MRI dataset.

    PubMed

    Ben Abdallah, Meriem; Blonski, Marie; Wantz-Mezieres, Sophie; Gaudeau, Yann; Taillandier, Luc; Moureaux, Jean-Marie

    2016-08-01

    Software-based manual segmentation is critical to the supervision of diffuse low-grade glioma patients and to the choice of optimal treatment. However, because manual segmentation is time-consuming, it is difficult to include in the clinical routine. An alternative that circumvents the time cost of manual segmentation could be to share the task among different practitioners, provided it can be reproduced. The goal of our work is to assess the reproducibility of manual segmentation of diffuse low-grade gliomas on MRI scans with regard to the practitioners, their experience, and their field of expertise. A panel of 13 experts manually segmented 12 diffuse low-grade glioma clinical MRI datasets using the OsiriX software. A statistical analysis gave promising results, as the practitioner factor, the medical specialty, and the years of experience seemed to have no significant impact on the average values of the tumor volume variable.

  8. Semi-automated brain tumor segmentation on multi-parametric MRI using regularized non-negative matrix factorization.

    PubMed

    Sauwen, Nicolas; Acou, Marjan; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Huffel, Sabine Van

    2017-05-04

    Segmentation of gliomas in multi-parametric (MP-)MR images is challenging due to their heterogeneous nature in terms of size, appearance and location. Manual tumor segmentation is a time-consuming task, and clinical practice would benefit from (semi-)automated segmentation of the different tumor compartments. We present a semi-automated framework for brain tumor segmentation based on non-negative matrix factorization (NMF) that does not require prior training of the method. L1-regularization is incorporated into the NMF objective function to promote spatial consistency and sparseness of the tissue abundance maps. The pathological sources are initialized through user-defined voxel selection. Knowledge about the spatial location of the selected voxels is combined with tissue adjacency constraints in a post-processing step to enhance segmentation quality. The method is applied to an MP-MRI dataset of 21 high-grade glioma patients, including conventional, perfusion-weighted and diffusion-weighted MRI. To assess the effect of using MP-MRI data and the L1-regularization term, analyses are also run using only conventional MRI and without L1-regularization. Robustness against user input variability is verified by considering the statistical distribution of the segmentation results when repeatedly analyzing each patient's dataset with a different set of random seeding points. Using L1-regularized semi-automated NMF segmentation, mean Dice scores of 65%, 74%, and 80% are found for active tumor, the tumor core, and the whole tumor region, with mean Hausdorff distances of 6.1 mm, 7.4 mm, and 8.2 mm, respectively. Lower Dice scores and higher Hausdorff distances are found without L1-regularization and when only conventional MRI data are considered. Based on the mean Dice scores and Hausdorff distances, the segmentation results are competitive with the state of the art in the literature. Robust results were found for most patients, although careful voxel selection is mandatory to avoid sub-optimal segmentation.
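    For orientation, an L1-regularized NMF of this general kind can be written with scikit-learn as below; this is a schematic stand-in (the matrix layout, component count, and regularization weight are assumptions), not the authors' framework.

        from sklearn.decomposition import NMF

        def nmf_tissue_maps(features, n_tissues=4):
            """Factor a non-negative matrix (n_features x n_voxels) into
            tissue signatures W and sparse abundance maps H."""
            model = NMF(n_components=n_tissues, init='nndsvda',
                        alpha_H=0.1, l1_ratio=1.0, max_iter=500)  # L1 on H
            W = model.fit_transform(features)   # tissue signatures
            H = model.components_               # abundance map per voxel
            return W, H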

  9. Atlas selection for hippocampus segmentation: Relevance evaluation of three meta-information parameters.

    PubMed

    Dill, Vanderson; Klein, Pedro Costa; Franco, Alexandre Rosa; Pinho, Márcio Sarroglia

    2018-04-01

    Current state-of-the-art methods for whole and subfield hippocampus segmentation use pre-segmented templates, also known as atlases, in the pre-processing stages. Typically, the input image is registered to the template, which provides prior information for the segmentation process. Using a single standard atlas increases the difficulty in dealing with individuals who have a brain anatomy that is morphologically different from the atlas, especially in older brains. To increase the segmentation precision in these cases, without any manual intervention, multiple atlases can be used. However, registration to many templates leads to a high computational cost. Researchers have proposed to use an atlas pre-selection technique based on meta-information followed by the selection of an atlas based on image similarity. Unfortunately, this method also presents a high computational cost due to the image-similarity process. Thus, it is desirable to pre-select a smaller number of atlases as long as this does not impact the segmentation quality. To pick out an atlas that provides the best registration, we evaluate the use of three meta-information parameters (medical condition, age range, and gender) for choosing the atlas. In this work, 24 atlases were defined, each based on a combination of the three meta-information parameters. These atlases were used to segment 352 volumes from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Hippocampus segmentation with each of these atlases was evaluated and compared to reference segmentations of the hippocampus, which are available from ADNI. The use of atlas selection by meta-information led to a significant gain in the Dice similarity coefficient, which reached 0.68 ± 0.11, compared to 0.62 ± 0.12 when using only the standard MNI152 atlas. Statistical analysis showed that the three meta-information parameters provided a significant improvement in the segmentation accuracy.
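    The Dice similarity coefficient used for this evaluation is simple to compute from two binary masks; a minimal version:

        import numpy as np

        def dice(a, b):
            """Dice similarity coefficient between two binary masks."""
            a, b = a.astype(bool), b.astype(bool)
            inter = np.logical_and(a, b).sum()
            denom = a.sum() + b.sum()
            return 2.0 * inter / denom if denom else 1.0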

  10. The vision guidance and image processing of AGV

    NASA Astrophysics Data System (ADS)

    Feng, Tongqing; Jiao, Bin

    2017-08-01

    Firstly, the principle of AGV vision guidance is introduced, and the deviation and deflection angle are measured in the image coordinate system. The visual guidance image processing platform is then described. Because the AGV guidance image contains considerable noise, it is first smoothed with a statistical sorting filter. Since guidance images sampled by the AGV have different optimal threshold segmentation points, a two-dimensional maximum-entropy image segmentation method is used to solve this problem. We extract the foreground in the target band by a contour-area calculation and obtain the centre line with a least-squares fitting algorithm. With the help of image and physical coordinates, the guidance information can then be obtained.
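    The final centre-line step reduces to an ordinary least-squares line fit over the extracted foreground pixels; a minimal sketch (the mask layout is assumed):

        import numpy as np

        def fit_guide_line(mask):
            """Least-squares line through the guide-path pixels; the AGV
            deflection angle then follows as arctan(slope)."""
            rows, cols = np.nonzero(mask)
            slope, intercept = np.polyfit(cols, rows, deg=1)
            return slope, intercept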

  11. Deformable segmentation of 3D MR prostate images via distributed discriminative dictionary and ensemble learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yanrong; Shao, Yeqin; Gao, Yaozong

    Purpose: Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. Traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. Methods: To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the differential power of traditional dictionary-based classification methods, the authors' DDD learning approach takes three strategies. First, two dictionaries for prostate and nonprostate tissues are built, respectively, using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. These discriminative dictionaries are located on different patches of the prostate surface and trained to adaptively capture the appearance in different prostate zones, thus achieving better local tissue differentiation. For each local region, multiple classifiers are trained based on the randomly selected samples and finally assembled by a specific fusion method. In addition to this nonparametric appearance model, a prostate shape model is learned from the shape statistics using a novel approach, sparse shape composition, which can model non-Gaussian distributions of shape variation and regularize the 3D mesh deformation by constraining it within the observed shape subspace. Results: The proposed method has been evaluated on two datasets consisting of T2-weighted MR prostate images. For the first (internal) dataset, the classification effectiveness of the authors' improved dictionary learning has been validated by comparing it with three other variants of traditional dictionary learning methods. The experimental results show that the authors' method yields a Dice Ratio of 89.1% compared to the manual segmentation, which is more accurate than the three state-of-the-art MR prostate segmentation methods under comparison. For the second dataset, the MICCAI 2012 challenge dataset, the authors' proposed method yields a Dice Ratio of 87.4%, which also achieves better segmentation accuracy than other methods under comparison. Conclusions: A new magnetic resonance image prostate segmentation method is proposed based on the combination of deformable model and dictionary learning methods, which achieves more accurate segmentation performance on prostate T2 MR images.
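    The residual-based classification idea (label a patch by whichever tissue dictionary reconstructs it with smaller sparse-coding error) can be sketched as follows; the scikit-learn tools here are generic substitutes for the authors' DDD learning, and the atom and sparsity settings are assumptions.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

        def train_dictionary(patches, n_atoms=64):
            """Learn a per-tissue dictionary from (n_samples, n_features)."""
            dl = MiniBatchDictionaryLearning(n_components=n_atoms,
                                             transform_algorithm='omp',
                                             transform_n_nonzero_coefs=5)
            return dl.fit(patches).components_

        def classify_by_residual(x, dict_prostate, dict_other):
            """Return 0 (prostate) if that dictionary reconstructs x better."""
            def residual(D):
                coder = SparseCoder(dictionary=D, transform_algorithm='omp',
                                    transform_n_nonzero_coefs=5)
                code = coder.transform(x.reshape(1, -1))
                return np.linalg.norm(x - code @ D)
            return int(residual(dict_prostate) > residual(dict_other))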

  12. A Patch-Based Approach for the Segmentation of Pathologies: Application to Glioma Labelling.

    PubMed

    Cordier, Nicolas; Delingette, Herve; Ayache, Nicholas

    2016-04-01

    In this paper, we describe a novel and generic approach to address fully-automatic segmentation of brain tumors by using multi-atlas patch-based voting techniques. In addition to avoiding the local search window assumption, the conventional patch-based framework is enhanced through several simple procedures: an improvement of the training dataset in terms of both label purity and intensity statistics, augmented features to implicitly guide the nearest-neighbor search, multi-scale patches, invariance to cube isometries, and stratification of the votes with respect to cases and labels. A probabilistic model automatically delineates regions of interest enclosing high-probability tumor volumes, which allows the algorithm to achieve highly competitive running times despite minimal processing power and resources. This method was evaluated on the Multimodal Brain Tumor Image Segmentation challenge datasets. State-of-the-art results are achieved with a limited learning stage, thus restricting the risk of overfitting. Moreover, segmentation smoothness is achieved without any post-processing.

  13. The use of the Kalman filter in the automated segmentation of EIT lung images.

    PubMed

    Zifan, A; Liatsis, P; Chapman, B E

    2013-06-01

    In this paper, we present a new pipeline for the fast and accurate segmentation of impedance images of the lungs using electrical impedance tomography (EIT). EIT is an emerging, promising, non-invasive imaging modality that produces real-time images of impedance inside a body with low spatial but high temporal resolution. Recovering impedance itself constitutes a nonlinear ill-posed inverse problem, so the problem is usually linearized, which produces impedance-change images rather than static impedance ones. Such images are highly blurry and fuzzy along object boundaries. We provide mathematical reasoning for the suitability of the Kalman filter for segmenting and tracking conductivity changes in EIT lung images. We then use a two-fold approach to tackle the segmentation problem. First, we construct a global lung shape to restrict the search region of the Kalman filter. Next, we augment the Kalman filter by incorporating an adaptive foreground detection system that provides the boundary contours for the Kalman filter to track the conductivity changes as the lungs undergo deformation in a respiratory cycle. The proposed method has been validated using performance statistics such as misclassified area and false positive rate, and compared to previous approaches. The results show that the proposed automated method can be a fast and reliable segmentation tool for EIT imaging.
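    For concreteness, a constant-velocity Kalman filter of the kind used for such tracking might follow, say, a lung contour centroid across frames; the state layout and noise levels below are assumptions of this sketch, not the paper's settings.

        import numpy as np

        class Kalman2D:
            """Constant-velocity Kalman filter for a tracked 2-D point."""
            def __init__(self, q=1e-2, r=1.0):
                self.x = np.zeros(4)                 # state [px, py, vx, vy]
                self.P = np.eye(4) * 10.0
                self.F = np.eye(4)
                self.F[0, 2] = self.F[1, 3] = 1.0    # dt = 1 frame
                self.H = np.zeros((2, 4))
                self.H[0, 0] = self.H[1, 1] = 1.0    # we observe position only
                self.Q, self.R = np.eye(4) * q, np.eye(2) * r

            def step(self, z):
                # predict
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                # update with the measured point z = (px, py)
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
                self.P = (np.eye(4) - K @ self.H) @ self.P
                return self.x[:2]                    # filtered position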

  14. Segmentation of prostate biopsy needles in transrectal ultrasound images

    NASA Astrophysics Data System (ADS)

    Krefting, Dagmar; Haupt, Barbara; Tolxdorff, Thomas; Kempkensteffen, Carsten; Miller, Kurt

    2007-03-01

    Prostate cancer is the most common cancer in men. Tissue extraction at different locations (biopsy) is the gold standard for diagnosis of prostate cancer. These biopsies are commonly guided by transrectal ultrasound imaging (TRUS). The exact location of the extracted tissue within the gland is desired for more specific diagnosis and provides better therapy planning. While the orientation and position of the needle within a clinical TRUS image are constrained, the apparent length and visibility of the needle vary strongly. Marker lines are present, and tissue inhomogeneities and deflection artefacts may appear. Simple intensity-, gradient- or edge-detection-based segmentation methods fail. Therefore, a multivariate statistical classifier is implemented. The independent feature model is built by supervised learning using a set of manually segmented needles. The feature space is spanned by common binary object features, such as size and eccentricity, as well as imaging-system-dependent features, such as distance and orientation relative to the marker line. Object extraction is done by multi-step binarization of the region of interest. The ROI is automatically determined at the beginning of the segmentation, and marker lines are removed from the images. The segmentation itself is realized by scale-invariant classification using maximum likelihood estimation and the Mahalanobis distance as the discriminator. The technique presented here was successfully applied in 94% of 1835 TRUS images from 30 tissue extractions. It provides a robust method for biopsy needle localization in clinical prostate biopsy TRUS images.
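    The classification core (a maximum-likelihood Gaussian fit plus a Mahalanobis-distance decision) can be sketched as below; the acceptance threshold is an assumed value, not taken from the paper.

        import numpy as np

        def fit_needle_model(train_features):
            """Maximum-likelihood estimates of the needle-class Gaussian."""
            return train_features.mean(axis=0), np.cov(train_features, rowvar=False)

        def is_needle(features, mean, cov, threshold=3.0):
            """Accept a candidate whose Mahalanobis distance to the
            needle class falls below the threshold."""
            d = features - mean
            return np.sqrt(d @ np.linalg.inv(cov) @ d) < threshold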

  15. Why Segmentation Matters: Experience-Driven Segmentation Errors Impair "Morpheme" Learning

    ERIC Educational Resources Information Center

    Finn, Amy S.; Hudson Kam, Carla L.

    2015-01-01

    We ask whether an adult learner's knowledge of their native language impedes statistical learning in a new language beyond just word segmentation (as previously shown). In particular, we examine the impact of native-language word-form phonotactics on learners' ability to segment words into their component morphemes and learn phonologically…

  16. Evaluation of a Three-Dimensional Stereophotogrammetric Method to Identify and Measure the Palatal Surface Area in Children With Unilateral Cleft Lip and Palate.

    PubMed

    De Menezes, Marcio; Cerón-Zapata, Ana Maria; López-Palacio, Ana Maria; Mapelli, Andrea; Pisoni, Luca; Sforza, Chiarella

    2016-01-01

    To assess a three-dimensional (3D) stereophotogrammetric method for area delimitation and evaluation of the dental arches of children with unilateral cleft lip and palate (UCLP). The obtained data were also used to assess the 3D changes occurring in the maxillary arch with the use of orthopedic therapy prior to rhinocheiloplasty and before the first year of life. Within the collaboration between the Università degli Studi di Milano (Italy) and the University CES of Medellin (Colombia), 96 palatal cast models obtained from neonatal patients with UCLP were analyzed using a 3D stereophotogrammetric imaging system. The areas of the minor and greater cleft segments on the digital dental cast surface were delineated by the visualization tool of the stereophotogrammetric software and then examined. "Trueness" of the measurements, as well as systematic and random errors between operators' tracings ("precision"), were calculated. The method gave area measurements close to true values (errors lower than 2%), without systematic measurement errors for either interoperator or intraoperator tracings (P > .05). Statistically significant differences (P < .05) were noted for alveolar segment and time. Maxillary segments have the potential for growth during presurgical orthopedic treatment in the early neonatal period. The cleft segment delimitation on digital dental casts and area measurement by the 3D stereophotogrammetric system proved an accurate (true and precise) method for evaluating the stone casts of newborn patients with UCLP.

  17. Spatial context learning approach to automatic segmentation of pleural effusion in chest computed tomography images

    NASA Astrophysics Data System (ADS)

    Mansoor, Awais; Casas, Rafael; Linguraru, Marius G.

    2016-03-01

    Pleural effusion is an abnormal collection of fluid within the pleural cavity. Excessive accumulation of pleural fluid is an important biomarker for various illnesses, including congestive heart failure, pneumonia, metastatic cancer, and pulmonary embolism. Quantification of pleural effusion can be indicative of the progression of disease as well as the effectiveness of any treatment being administered. Quantification, however, is challenging due to unpredictable amounts and density of fluid, the complex topology of the pleural cavity, and the similarity in texture and intensity of pleural fluid to the surrounding tissues in computed tomography (CT) scans. Herein, we present an automated method for the segmentation of pleural effusion in CT scans based on spatial context information. The method consists of two stages: first, a probabilistic pleural effusion map is created using multi-atlas segmentation. The probabilistic map assigns a priori probabilities to the presence of pleural fluid at every location in the CT scan. Second, a statistical pattern classification approach is designed to annotate pleural regions using local descriptors based on the a priori probabilities, geometrical, and spatial features. Thirty-seven CT scans from a diverse patient population containing confirmed cases of minimal to severe amounts of pleural effusion were used to validate the proposed segmentation method. An average Dice coefficient of 0.82685 and a Hausdorff distance of 16.2155 mm were obtained.

  18. Ureter tracking and segmentation in CT urography (CTU) using COMPASS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadjiiski, Lubomir, E-mail: lhadjisk@umich.edu; Zick, David; Chan, Heang-Ping

    2014-12-15

    Purpose: The authors are developing a computerized system for automated segmentation of ureters in CTU, referred to as the combined model-guided path-finding analysis and segmentation system (COMPASS). Ureter segmentation is a critical component for computer-aided diagnosis of ureter cancer. Methods: COMPASS consists of three stages: (1) rule-based adaptive thresholding and region growing, (2) path-finding and propagation, and (3) edge profile extraction and feature analysis. With institutional review board approval, 79 CTU scans performed with intravenous (IV) contrast material enhancement were collected retrospectively from 79 patient files. One hundred twenty-four ureters were selected from the 79 CTU volumes. On average, the ureters spanned 283 computed tomography slices (range: 116–399, median: 301). More than half of the ureters contained malignant or benign lesions and some had ureter wall thickening due to malignancy. A starting point for each of the 124 ureters was identified manually to initialize the tracking by COMPASS. In addition, the centerline of each ureter was manually marked and used as the reference standard for evaluation of tracking performance. The performance of COMPASS was quantitatively assessed by estimating the percentage of the length that was successfully tracked and segmented for each ureter and by estimating the average distance and the average maximum distance between the computer and the manually tracked centerlines. Results: Of the 124 ureters, 120 (97%) were segmented completely (100%), 121 (98%) were segmented through at least 70%, and 123 (99%) were segmented through at least 50% of their length. In comparison, using our previous method, 85 (69%) ureters were segmented completely (100%), 100 (81%) were segmented through at least 70%, and 107 (86%) were segmented through at least 50% of their length. With COMPASS, the average distance between the computer and the manually generated centerlines is 0.54 mm, and the average maximum distance is 2.02 mm. With our previous method, the average distance between the centerlines was 0.80 mm, and the average maximum distance was 3.38 mm. The improvements in the ureteral tracking length and both distance measures were statistically significant (p < 0.0001). Conclusions: COMPASS significantly improved the ureter tracking, including regions across ureter lesions, wall thickening, and narrowing of the lumen.

  19. Estimation of stature from the foot and its segments in a sub-adult female population of North India

    PubMed Central

    2011-01-01

    Background Establishing personal identity is one of the main concerns in forensic investigations. Estimation of stature forms a basic domain of the investigation process for unknown and co-mingled human remains in forensic anthropology casework. The objective of the present study was to establish standards for estimation of stature from the foot and its segments in a sub-adult female population. Methods The sample for the study comprised 149 young females from the northern part of India. The participants were aged between 13 and 18 years. Besides stature, seven anthropometric measurements, namely the length of the foot from each toe (T1, T2, T3, T4, and T5, respectively), foot breadth at ball (BBAL) and foot breadth at heel (BHEL), were taken on both feet of each participant using standard methods and techniques. Results The results indicated that statistically significant differences (p < 0.05) between left and right feet occur in both foot breadth measurements (BBAL and BHEL). Foot length measurements (T1 to T5 lengths) did not show any statistically significant bilateral asymmetry. The correlation between stature and all the foot measurements was found to be positive and statistically significant (p-value < 0.001). Linear regression models and multiple regression models were derived for estimation of stature from the measurements of the foot. The present study indicates that anthropometric measurements of the foot and its segments are valuable in the estimation of stature. Foot length measurements estimate stature with greater accuracy than foot breadth measurements. Conclusions The present study concluded that foot measurements have a strong relationship with stature in the sub-adult female population of North India. Hence, the stature of an individual can be successfully estimated from the foot and its segments using the different regression models derived in the study. The regression models derived in the study may be applied successfully for the estimation of stature in sub-adult females whenever foot remains are brought for forensic examination. Stepwise multiple regression models tend to estimate stature more accurately than linear regression models in female sub-adults. PMID:22104433
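    A linear model of the kind derived in the study can be re-estimated from data in a few lines; the coefficients below are computed from whatever sample is supplied, not the published ones, and the array names are hypothetical.

        import numpy as np

        def fit_stature_model(foot_length_cm, stature_cm):
            """Least-squares fit of stature = a * foot_length + b."""
            a, b = np.polyfit(foot_length_cm, stature_cm, deg=1)
            return a, b

        # usage: a, b = fit_stature_model(t1_lengths, statures)
        #        estimate = a * 24.1 + b   # stature for a 24.1 cm foot length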

  1. Radiographic response to yttrium-90 radioembolization in anterior versus posterior liver segments.

    PubMed

    Ibrahim, Saad M; Lewandowski, Robert J; Ryu, Robert K; Sato, Kent T; Gates, Vanessa L; Mulcahy, Mary F; Kulik, Laura; Larson, Andrew C; Omary, Reed A; Salem, Riad

    2008-01-01

    The purpose of our study was to determine if preferential radiographic tumor response occurs in tumors located in posterior versus anterior liver segments following radioembolization with yttrium-90 glass microspheres. One hundred thirty-seven patients with chemorefractory liver metastases of various primaries were treated with yttrium-90 glass microspheres. Of these, a subset analysis was performed on 89 patients who underwent 101 whole-right-lobe infusions to liver segments V, VI, VII, and VIII. Pre- and posttreatment imaging included either triphasic contrast material-enhanced CT or gadolinium-enhanced MRI. Responses to treatment were compared in anterior versus posterior right lobe lesions using both RECIST and WHO criteria. Statistical comparative studies were conducted in 42 patients with both anterior and posterior segment lesions using the paired-sample t-test. Pearson correlation was used to determine the relationship between pretreatment tumor size and posttreatment tumor response. Median administered activity, delivered radiation dose, and treatment volume were 2.3 GBq, 118.2 Gy, and 1,072 cm(3), respectively. Differences between the pretreatment tumor size of anterior and posterior liver segments were not statistically significant (p = 0.7981). Differences in tumor response between anterior and posterior liver segments were not statistically significant using WHO criteria (p = 0.8557). A statistically significant correlation did not exist between pretreatment tumor size and posttreatment tumor response (r = 0.0554, p = 0.4434). On imaging follow-up using WHO criteria, for anterior and posterior regions of the liver, (1) response rates were 50% (PR = 50%) and 45% (CR = 9%, PR = 36%), and (2) mean changes in tumor size were -41% and -40%. In conclusion, this study did not find evidence of preferential radiographic tumor response in posterior versus anterior liver segments treated with yttrium-90 glass microspheres.

  2. Automated tissue classification of intracardiac optical coherence tomography images (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Gan, Yu; Tsay, David; Amir, Syed B.; Marboe, Charles C.; Hendon, Christine P.

    2016-03-01

    Remodeling of the myocardium is associated with increased risk of arrhythmia and heart failure. Our objective is to automatically identify regions of fibrotic myocardium, dense collagen, and adipose tissue, which can serve as a way to guide radiofrequency ablation therapy or endomyocardial biopsies. Using computer vision and machine learning, we present an automated algorithm to classify tissue compositions from cardiac optical coherence tomography (OCT) images. Three-dimensional OCT volumes were obtained from 15 human hearts ex vivo within 48 hours of donor death (source: NDRI). We first segmented B-scans using a graph searching method, estimating the boundary of each region by minimizing a cost function that consisted of intensity, gradient, and contour smoothness terms. Then, features including texture analysis, optical properties, and statistics of high moments were extracted. We used a statistical model, the relevance vector machine, and trained this model with the abovementioned features to classify tissue compositions. To validate our method, we applied our algorithm to 77 volumes. The validation datasets were manually segmented and classified by two investigators who were blinded to our algorithm results and identified the tissues based on trichrome histology and pathology. The difference between automated and manual segmentation was 51.78 +/- 50.96 μm. Experiments showed that the attenuation coefficients of dense collagen were significantly different from those of other tissue types (P < 0.05, ANOVA). Importantly, myocardial fibrosis tissue differed from normal myocardium in entropy and kurtosis. The tissue types were classified with an accuracy of 84%. The results show good agreement with histology.

  3. Automatic 2D and 3D segmentation of liver from Computerised Tomography

    NASA Astrophysics Data System (ADS)

    Evans, Alun

    As part of the diagnosis of liver disease, a Computerised Tomography (CT) scan is taken of the patient, which the clinician then uses for assistance in determining the presence and extent of the disease. This thesis presents the background, methodology, results and future work of a project that employs automated methods to segment liver tissue. The clinical motivation behind this work is the desire to facilitate the diagnosis of liver disease such as cirrhosis or cancer, assist in volume determination for liver transplantation, and possibly assist in measuring the effect of any treatment given to the liver. Previous attempts at automatic segmentation of liver tissue have relied on 2D, low-level segmentation techniques, such as thresholding and mathematical morphology, to obtain the basic liver structure. The derived boundary can then be smoothed or refined using more advanced methods. The 2D results presented in this thesis improve greatly on this previous work by using a topology adaptive active contour model to accurately segment liver tissue from CT images. The use of conventional snakes for liver segmentation is difficult due to the presence of other organs closely surrounding the liver; this new technique avoids the problem by adding an inflationary force to the basic snake equation and initialising the snake inside the liver. The concepts underlying the 2D technique are extended to 3D, and results of full 3D segmentation of the liver are presented. The 3D technique makes use of an inflationary active surface model which is adaptively reparameterised, according to its size and local curvature, so that it may more accurately segment the organ. Statistical analysis of the accuracy of the segmentation is presented for 18 healthy liver datasets, and results of the segmentation of unhealthy livers are also shown. The novel work developed during the course of this project has possibilities for use in other areas of medical imaging research, for example the segmentation of internal liver structures, and the segmentation and classification of unhealthy tissue. The possibilities for this future work are discussed towards the end of the report.
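    An inflationary ("balloon") active contour initialised inside the organ can be sketched with scikit-image as below; this is a generic morphological variant standing in for the thesis's snake formulation, and the seed point, radius, and iteration count are assumptions of this sketch.

        import numpy as np
        from skimage.segmentation import (inverse_gaussian_gradient,
                                          morphological_geodesic_active_contour)

        def segment_liver_slice(ct_slice, seed_rc, radius=10):
            """Balloon-driven active contour grown from a seed inside
            the liver (balloon=1 adds the inflationary force)."""
            gimage = inverse_gaussian_gradient(ct_slice)   # edge-stopping map
            init = np.zeros(ct_slice.shape, dtype=np.int8)
            rr, cc = np.ogrid[:ct_slice.shape[0], :ct_slice.shape[1]]
            init[(rr - seed_rc[0]) ** 2 + (cc - seed_rc[1]) ** 2 <= radius ** 2] = 1
            return morphological_geodesic_active_contour(
                gimage, 300, init_level_set=init, balloon=1)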

  4. Determination Of The Activity Space By The Stereometric Method

    NASA Astrophysics Data System (ADS)

    Deloison, Y.; Crete, N.; Mollard, R.

    1980-07-01

    To determine the activity space of a sitting subject, it is necessary to go beyond the mere statistical description of morphology and the knowledge of the displacement volume. An analysis of the positions, or variations of the positions, of the diverse segmental elements (arms, hands, lower limbs, etc.) in the course of a given activity is required. Of the various methods used to locate the spatial positions of anatomical points quickly and accurately, stereometry makes it possible to plot the three-dimensional coordinates of any point in space in relation to a fixed trirectangular frame of reference determined by the stereometric measuring device. Thus, regardless of the orientation and posture of the subject, his segmental elements can be easily pinpointed, throughout the experiment, within the space they occupy. Using this method, for a sample of operators seated at an operation station, applying either manual controls or pedals, and belonging to a population statistically defined from the data collected and the analyses produced by the anthropometric study, it is possible to determine a contour line of reach capability marking out the usable working space and, within this working space, a contour line of preferential activity bounded by the range of optimal reach capability common to all the subjects.

  5. ASM Based Synthesis of Handwritten Arabic Text Pages

    PubMed Central

    Al-Hamadi, Ayoub; Elzobi, Moftah; El-etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks such as text recognition, word spotting, and segmentation are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-Spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever no sufficient natural ground-truthed data are available. PMID:26295059
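
    A minimal sketch of the B-Spline smoothing step, a guess at the general idea rather than the paper's pipeline: a smoothing spline is fitted through a jagged synthetic stroke with SciPy and resampled densely. The stroke data and smoothing factor are illustrative.

        import numpy as np
        from scipy.interpolate import splprep, splev

        rng = np.random.default_rng(0)
        t = np.linspace(0, 2 * np.pi, 30)
        x = np.cos(t) + rng.normal(0, 0.03, t.size)   # noisy synthetic stroke
        y = np.sin(2 * t) + rng.normal(0, 0.03, t.size)

        tck, _ = splprep([x, y], s=0.05)              # cubic smoothing spline
        xs, ys = splev(np.linspace(0, 1, 300), tck)   # dense, smooth resampling
        print(len(xs), float(xs[0]), float(ys[0]))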

  6. An effective approach of lesion segmentation within the breast ultrasound image based on the cellular automata principle.

    PubMed

    Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong

    2012-10-01

    In this paper, a novel lesion segmentation method for breast ultrasound (BUS) images based on the cellular automata principle is proposed. Its energy transition function is formulated from global and local image information differences, using different energy transfer strategies. First, an energy-decrease strategy is used for modeling the spatial relation information of pixels. For modeling the global image information difference, a seed information comparison function is developed using an energy-preserving strategy. Then, a texture information comparison function is proposed to account for local image differences between regions, which is helpful for handling blurry boundaries. Moreover, two neighborhood systems (the von Neumann and Moore neighborhood systems) are integrated as the evolution environment, and a similarity-based criterion is used for suppressing noise and reducing computational complexity. The proposed method was applied to 205 clinical BUS images to study its characteristics and functionality, and several overlapping-area error metrics and statistical evaluation methods were utilized to evaluate its performance. The experimental results demonstrate that the proposed method handles BUS images with blurry boundaries and low contrast well and can segment breast lesions accurately and effectively.
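
    A minimal sketch of the cellular-automata idea in its simplest "grow-cut" form: labelled seed cells attack their neighbours, and a cell is conquered when an attacker's strength, attenuated by intensity similarity, exceeds its own. The paper's energy transition functions are more elaborate; this only illustrates the evolution principle, with illustrative names throughout.

        import numpy as np

        def grow_cut(image, labels, strength, n_iter=50):
            g_max = image.max() - image.min() + 1e-12
            for _ in range(n_iter):
                for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:  # von Neumann
                    nb_img = np.roll(image, (dy, dx), (0, 1))
                    nb_lab = np.roll(labels, (dy, dx), (0, 1))
                    nb_str = np.roll(strength, (dy, dx), (0, 1))
                    g = 1.0 - np.abs(image - nb_img) / g_max       # similarity in [0,1]
                    attack = g * nb_str
                    win = attack > strength                        # conquered cells
                    labels[win], strength[win] = nb_lab[win], attack[win]
            return labels

        img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0           # bright square "lesion"
        lab = np.zeros_like(img, int); stg = np.zeros_like(img)
        lab[16, 16], stg[16, 16] = 1, 1.0                          # lesion seed
        lab[2, 2], stg[2, 2] = 2, 1.0                              # background seed
        print((grow_cut(img, lab, stg) == 1).sum())                # 256 lesion pixels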

  7. ASM Based Synthesis of Handwritten Arabic Text Pages.

    PubMed

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks, as text recognition, word spotting, or segmentation, are highly dependent on comprehensive and suitable databases for training and validation. However their generation is expensive in sense of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for the case of Arabic handwriting recognition, that involves different preprocessing, segmentation, and recognition methods, which have individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents and detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step ASM based representations are composed to words and text pages, smoothed by B-Spline interpolation and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages to train and test document analysis related methods on synthetic samples, whenever no sufficient natural ground truthed data is available.

  8. The Surprising Power of Statistical Learning: When Fragment Knowledge Leads to False Memories of Unheard Words

    ERIC Educational Resources Information Center

    Endress, Ansgar D.; Mehler, Jacques

    2009-01-01

    Word-segmentation, that is, the extraction of words from fluent speech, is one of the first problems language learners have to master. It is generally believed that statistical processes, in particular those tracking "transitional probabilities" (TPs), are important to word-segmentation. However, there is evidence that word forms are stored in…

  9. A multimodality segmentation framework for automatic target delineation in head and neck radiotherapy.

    PubMed

    Yang, Jinzhong; Beadle, Beth M; Garden, Adam S; Schwartz, David L; Aristophanous, Michalis

    2015-09-01

    To develop an automatic segmentation algorithm integrating imaging information from computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) to delineate target volume in head and neck cancer radiotherapy. Eleven patients with unresectable disease at the tonsil or base of tongue who underwent MRI, CT, and PET/CT within two months before the start of radiotherapy or chemoradiotherapy were recruited for the study. For each patient, PET/CT and T1-weighted contrast MRI scans were first registered to the planning CT using deformable and rigid registration, respectively, to resample the PET and magnetic resonance (MR) images to the planning CT space. A binary mask was manually defined to identify the tumor area. The resampled PET and MR images, the planning CT image, and the binary mask were fed into the automatic segmentation algorithm for target delineation. The algorithm was based on a multichannel Gaussian mixture model and solved using an expectation-maximization algorithm with Markov random fields. To evaluate the algorithm, we compared the multichannel autosegmentation with an autosegmentation method using only PET images. The physician-defined gross tumor volume (GTV) was used as the "ground truth" for quantitative evaluation. The median multichannel segmented GTV of the primary tumor was 15.7 cm³ (range, 6.6-44.3 cm³), while the PET segmented GTV was 10.2 cm³ (range, 2.8-45.1 cm³). The median physician-defined GTV was 22.1 cm³ (range, 4.2-38.4 cm³). The median difference between the multichannel segmented and physician-defined GTVs was -10.7%, not showing a statistically significant difference (p-value = 0.43). However, the median difference between the PET segmented and physician-defined GTVs was -19.2%, showing a statistically significant difference (p-value = 0.0037). The median Dice similarity coefficient between the multichannel segmented and physician-defined GTVs was 0.75 (range, 0.55-0.84), and the median sensitivity and positive predictive value between them were 0.76 and 0.81, respectively. The authors developed an automated multimodality segmentation algorithm for tumor volume delineation and validated this algorithm for head and neck cancer radiotherapy. The multichannel segmented GTV agreed well with the physician-defined GTV. The authors expect that their algorithm will improve the accuracy and consistency in target definition for radiotherapy.
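
    A minimal sketch of the core of such a method, a multichannel Gaussian mixture fit by expectation-maximization (the Markov random field regularisation is omitted): voxels inside a mask are stacked as CT/PET/MR feature vectors and clustered. Data, names, and the class count are illustrative assumptions.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def multichannel_gmm(ct, pet, mr, mask, n_classes=2, seed=0):
            """Label masked voxels by a GMM over three registered channels."""
            feats = np.stack([ct[mask], pet[mask], mr[mask]], axis=1)
            gmm = GaussianMixture(n_components=n_classes, random_state=seed).fit(feats)
            labels = np.full(mask.shape, -1)
            labels[mask] = gmm.predict(feats)
            return labels

        rng = np.random.default_rng(1)
        shape = (16, 16, 16)
        ct, pet, mr = (rng.normal(0, 1, shape) for _ in range(3))
        pet[4:9, 4:9, 4:9] += 4.0                 # synthetic avid "tumour"
        mask = np.ones(shape, bool)
        print(np.bincount(multichannel_gmm(ct, pet, mr, mask).ravel() + 1))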

  10. Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis.

    PubMed

    Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina

    2015-01-01

    To investigate and quantify changes in the branching patterns of the retina vascular network in diabetes using the fractal analysis method. This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina, were examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using fractal geometry. The average fractal dimension D of the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for mild non-proliferative DR (NPDR) images, higher than the corresponding values for moderate NPDR images, and the lowest values were found for severe NPDR images. The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantitatively characterized by means of the fractal dimension. Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from healthy individuals.
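
    A minimal sketch of the box-counting estimate of the fractal dimension D for a binarised, skeletonised vessel image: count occupied boxes N(s) at several box sizes s and fit log N against log(1/s). This is illustrative only; ImageJ's fractal analysis offers many more options.

        import numpy as np

        def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
            counts = []
            for s in sizes:
                h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
                blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
            return slope

        img = np.zeros((256, 256), bool)
        img[128, :] = True                      # a straight line
        print(box_counting_dimension(img))      # ~1.0, as expected for a line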

  11. [Study of beta-turns in globular proteins].

    PubMed

    Amirova, S R; Milchevskiĭ, Iu V; Filatov, I V; Esipova, N G; Tumanian, V G

    2005-01-01

    The formation of beta-turns in globular proteins has been studied by the method of molecular mechanics. The statistical method of discriminant analysis was applied to the calculated energy components and the sequences of oligopeptide segments, after which a prediction of type I beta-turns was made. The accuracy of true-positive prediction is 65%. Components of conformational energy considerably affecting beta-turn formation were delineated: torsional energy, the energy of hydrogen bonds, and van der Waals energy.

  12. Comparison of contact conditions obtained by direct simulation with statistical analysis for normally distributed isotropic surfaces

    NASA Astrophysics Data System (ADS)

    Uchidate, M.

    2018-09-01

    In this study, with the aim of establishing systematic knowledge of the impact of summit extraction methods and stochastic model selection in rough contact analysis, the contact area ratio (Ar/Aa) obtained by statistical contact models with different summit extraction methods was compared with a direct simulation using the boundary element method (BEM). Fifty areal topography datasets with different autocorrelation functions, in terms of the power index and correlation length, were used for the investigation. The non-causal 2D auto-regressive model, which can generate datasets with specified parameters, was employed in this research. Three summit extraction methods, Nayak's theory, 8-point analysis, and watershed segmentation, were examined. With regard to the stochastic model, Bhushan's model and the BGT (Bush-Gibson-Thomas) model were applied. The values of Ar/Aa from the stochastic models tended to be smaller than those from BEM. The discrepancy between Bhushan's model with the 8-point analysis and BEM was slightly smaller than with Nayak's theory. The results with watershed segmentation were similar to those with the 8-point analysis. The impact of Wolf pruning on the discrepancy between the stochastic analysis and BEM was not very clear. In the case of the BGT model, which employs surface gradients, good quantitative agreement with BEM was obtained when Nayak's bandwidth parameter was large.
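
    A minimal sketch of the 8-point summit extraction mentioned above: a grid point counts as a summit when it is higher than all eight of its neighbours. Names and the test surface are illustrative.

        import numpy as np

        def summits_8point(z):
            """Boolean map of summits for an areal topography array z."""
            core = z[1:-1, 1:-1]
            higher = np.ones_like(core, dtype=bool)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == dx == 0:
                        continue
                    higher &= core > z[1 + dy:z.shape[0] - 1 + dy,
                                       1 + dx:z.shape[1] - 1 + dx]
            out = np.zeros_like(z, dtype=bool)
            out[1:-1, 1:-1] = higher
            return out

        rng = np.random.default_rng(2)
        surface = rng.normal(size=(64, 64))
        print(summits_8point(surface).sum())   # roughly 1/9 of points for white noise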

  13. Classification of CT examinations for COPD visual severity analysis

    NASA Astrophysics Data System (ADS)

    Tan, Jun; Zheng, Bin; Wang, Xingwei; Pu, Jiantao; Gur, David; Sciurba, Frank C.; Leader, J. Ken

    2012-03-01

    In this study we present a computational method for classifying CT examinations by visually assessed emphysema severity. The visual severity categories, rated by an experienced radiologist, ranged from 0 to 5: none, trace, mild, moderate, severe, and very severe. Lung segmentation was performed for every input image, and all image features were extracted from the segmented lung only. We adopted a two-level feature representation for the classification. Five gray-level distribution statistics, six gray-level co-occurrence matrix (GLCM) features, and eleven gray-level run-length (GLRL) features were computed from the segmented lung of each CT image. We then used wavelet decomposition to obtain the low- and high-frequency components of the input image, and again extracted six GLCM features and eleven GLRL features from the lung region in each component. The resulting feature vector length was 56. The CT examinations were classified using support vector machine (SVM) and k-nearest neighbor (KNN) classifiers and the traditional threshold (density mask) approach. The SVM classifier had the highest classification performance of all the methods, with an overall sensitivity of 54.4% and a 69.6% sensitivity for discriminating "no" and "trace" visually assessed emphysema. We believe this work may lead to an automated, objective method to categorically classify emphysema severity on CT exams.
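
    A minimal sketch of the GLCM feature step using scikit-image (in older versions the functions are spelled greycomatrix/greycoprops). The stand-in image, distances, angles, and property list are illustrative assumptions, not the paper's configuration.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        rng = np.random.default_rng(3)
        lung = rng.integers(0, 64, size=(128, 128), dtype=np.uint8)   # stand-in ROI

        glcm = graycomatrix(lung, distances=[1], angles=[0, np.pi / 2],
                            levels=64, symmetric=True, normed=True)
        features = {prop: graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "homogeneity", "energy", "correlation")}
        print(features)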

  14. ARCOCT: Automatic detection of lumen border in intravascular OCT images.

    PubMed

    Cheimariotis, Grigorios-Aris; Chatzizisis, Yiannis S; Koutkias, Vassilis G; Toutouzas, Konstantinos; Giannopoulos, Andreas; Riga, Maria; Chouvarda, Ioanna; Antoniadis, Antonios P; Doulaverakis, Charalambos; Tsamboulatidis, Ioannis; Kompatsiaris, Ioannis; Giannoglou, George D; Maglaveras, Nicos

    2017-11-01

    Intravascular optical coherence tomography (OCT) is an invaluable tool for the detection of pathological features on the arterial wall and the investigation of post-stenting complications. Computational lumen border detection in OCT images is highly advantageous, since it may support rapid morphometric analysis. However, automatic detection is very challenging, since OCT images typically include various artifacts that impact image clarity, including features such as side branches and intraluminal blood presence. This paper presents ARCOCT, a segmentation method for fully-automatic detection of the lumen border in OCT images. ARCOCT relies on multiple, consecutive processing steps accounting for image preparation, contour extraction, and refinement. In particular, for contour extraction ARCOCT employs a transformation of OCT images based on physical characteristics such as reflectivity and absorption of the tissue, and, for contour refinement, local regression using weighted linear least squares and a 2nd-degree polynomial model is employed to achieve artifact and small-branch correction as well as smoothness of the artery mesh. Our major focus was to achieve accurate contour delineation in the various types of OCT images, i.e., even in challenging cases with branches and artifacts. ARCOCT has been assessed in a dataset of 1812 images (308 from stented and 1504 from native segments) obtained from 20 patients. ARCOCT was compared against ground-truth manual segmentation performed by experts on the basis of various geometric features (e.g., area, perimeter, radius, diameter, centroid) and closed-contour matching indicators (the Dice index, the Hausdorff distance and the undirected average distance), using standard statistical analysis methods. The proposed method proved very efficient and close to the ground truth, exhibiting no statistically significant differences for most of the examined metrics. ARCOCT allows accurate and fully-automated lumen border detection in OCT images. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Robust demarcation of basal cell carcinoma by dependent component analysis-based segmentation of multi-spectral fluorescence images.

    PubMed

    Kopriva, Ivica; Persin, Antun; Puizina-Ivić, Neira; Mirić, Lina

    2010-07-02

    This study was designed to demonstrate robust performance of a novel dependent component analysis (DCA)-based approach to demarcation of the basal cell carcinoma (BCC) through unsupervised decomposition of the red-green-blue (RGB) fluorescent image of the BCC. Robustness to intensity fluctuation is due to the scale invariance property of DCA algorithms, which exploit spectral and spatial diversities between the BCC and the surrounding tissue. The filtering-based DCA approach used here represents an extension of independent component analysis (ICA) and is necessary in order to account for the statistical dependence induced by spectral similarity between the BCC and surrounding tissue. This similarity also generates weak edges, which pose a challenge for other segmentation methods. By comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, non-negative matrix factorization, ICA and ratio imaging, we experimentally demonstrate good performance of DCA-based BCC demarcation in two demanding scenarios where the intensity of the fluorescent image was varied over almost two orders of magnitude. Copyright 2010 Elsevier B.V. All rights reserved.

  16. Multifractal modeling, segmentation, prediction, and statistical validation of posterior fossa tumors

    NASA Astrophysics Data System (ADS)

    Islam, Atiq; Iftekharuddin, Khan M.; Ogg, Robert J.; Laningham, Fred H.; Sivakumar, Bhuvaneswari

    2008-03-01

    In this paper, we characterize tumor texture in pediatric brain magnetic resonance images (MRIs) and exploit these features for automatic segmentation of posterior fossa (PF) tumors. We focus on PF tumors because of their prevalence in pediatric patients. Due to their varying appearance in MRI, we propose to model tumor texture with a multi-fractal process, such as multi-fractional Brownian motion (mBm), in which the time-varying Holder exponent provides flexibility in modeling irregular tumor texture. We develop a detailed mathematical framework for mBm in two dimensions and propose a novel algorithm to estimate the multi-fractal structure of tissue texture in brain MRI based on wavelet coefficients. This wavelet-based multi-fractal feature, along with MR image intensity and a regular fractal feature obtained using our existing piecewise-triangular-prism-surface-area (PTPSA) method, is fused in segmenting PF tumor and non-tumor regions in brain T1, T2, and FLAIR MR images, respectively. We also demonstrate a non-patient-specific automated tumor prediction scheme based on these image features. We experimentally show the discriminating power of our novel multi-fractal texture feature, along with intensity and fractal features, in automated tumor segmentation and statistical prediction. To evaluate the performance of our tumor prediction scheme, we obtain ROCs and demonstrate how sharply the curves reach a specificity of 1.0 while sacrificing minimal sensitivity. Experimental results show the effectiveness of the proposed techniques for automatic detection of PF tumors in pediatric MRIs.

  17. Gradient-based reliability maps for ACM-based segmentation of hippocampus.

    PubMed

    Zarpalas, Dimitrios; Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-04-01

    Automatic segmentation of deep brain structures, such as the hippocampus (HC), in MR images has attracted considerable scientific attention due to the widespread use of MRI and to the principal role of some structures in various mental disorders. In the literature, there exists a substantial amount of work relying on deformable models incorporating prior knowledge about structures' anatomy and shape information. However, shape priors capture global shape characteristics and thus fail to model boundaries of varying properties; HC boundaries present rich, poor, and missing gradient regions. Moreover, shape prior knowledge is blended with image information in the evolution process through global weighting of the two terms, again neglecting the spatially varying boundary properties and causing segmentation errors. An innovative method is hereby presented that aims to achieve highly accurate HC segmentation in MR images, based on the modeling of boundary properties at each anatomical location and the inclusion of appropriate image information for each of those, within an active contour model framework. Hence, the blending of image information and prior knowledge is based on a local weighting map, which mixes gradient information and regional and whole-brain statistical information with a multi-atlas-based spatial distribution map of the structure's labels. Experimental results on three different datasets demonstrate the efficacy and accuracy of the proposed method.

  18. Color Image Segmentation Based on Statistics of Location and Feature Similarity

    NASA Astrophysics Data System (ADS)

    Mori, Fumihiko; Yamada, Hiromitsu; Mizuno, Makoto; Sugano, Naotoshi

    The process of “image segmentation and extracting remarkable regions” is an important research subject for image understanding. However, algorithms based on global features are rarely found. The requisite for such an image segmentation algorithm is to reduce over-segmentation and over-unification as much as possible. We developed an algorithm using the multidimensional convex hull based on density as the global feature. Concretely, we propose a new algorithm in which regions are expanded according to region statistics, such as the mean, standard deviation, maximum, and minimum of pixel location, brightness, and color elements, with the statistics updated as the regions grow. We also introduced a new concept of conspicuity degree and applied the method to 21 varied images to examine its effectiveness. The remarkable object regions extracted by the presented system coincided highly with those pointed out by the sixty-four subjects who participated in the psychological experiment.
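
    A minimal sketch of statistics-driven region expansion: a region grows from a seed, its mean and variance are updated (Welford-style) as pixels join, and the admission test adapts accordingly. The paper also tracks location, colour, and min/max statistics; this reduced version, with hypothetical names and parameters, is only illustrative.

        import numpy as np
        from collections import deque

        def grow_region(img, seed, k=2.5):
            """Region growing with running mean/variance updates."""
            h, w = img.shape
            in_region = np.zeros((h, w), bool)
            in_region[seed] = True
            mean, var, n = float(img[seed]), 1.0, 1
            q = deque([seed])
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not in_region[ny, nx]:
                        v = float(img[ny, nx])
                        if abs(v - mean) <= k * np.sqrt(var):
                            in_region[ny, nx] = True
                            n += 1
                            delta = v - mean
                            mean += delta / n                      # update mean
                            var += (delta * (v - mean) - var) / n  # update variance
                            q.append((ny, nx))
            return in_region

        img = np.zeros((32, 32)); img[8:24, 8:24] = 100.0   # uniform bright region
        print(grow_region(img, (16, 16)).sum())             # 256 pixels recovered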

  19. When Mommy Comes to the Rescue of Statistics: Infants Combine Top-Down and Bottom-Up Cues to Segment Speech

    ERIC Educational Resources Information Center

    Mersad, Karima; Nazzi, Thierry

    2012-01-01

    Transitional Probability (TP) computations are regarded as a powerful learning mechanism that is functional early in development and has been proposed as an initial bootstrapping device for speech segmentation. However, a recent study casts doubt on the robustness of early statistical word-learning. Johnson and Tyler (2010) showed that when…

  20. Linguistic Constraints on Statistical Word Segmentation: The Role of Consonants in Arabic and English

    ERIC Educational Resources Information Center

    Kastner, Itamar; Adriaans, Frans

    2018-01-01

    Statistical learning is often taken to lie at the heart of many cognitive tasks, including the acquisition of language. One particular task in which probabilistic models have achieved considerable success is the segmentation of speech into words. However, these models have mostly been tested against English data, and as a result little is known…

  1. Advances in segmentation modeling for health communication and social marketing campaigns.

    PubMed

    Albrecht, T L; Bryant, C

    1996-01-01

    Large-scale communication campaigns for health promotion and disease prevention involve analysis of audience demographic and psychographic factors for effective message targeting. A variety of segmentation modeling techniques, including tree-based methods such as Chi-squared Automatic Interaction Detection and logistic regression, are used to identify meaningful target groups within a large sample or population (N = 750-1,000+). Such groups are based on statistically significant combinations of factors (e.g., gender, marital status, and personality predispositions). The identification of groups or clusters facilitates message design in order to address the particular needs, attention patterns, and concerns of audience members within each group. We review current segmentation techniques, their contributions to conceptual development, and cost-effective decision making. Examples from a major study in which these strategies were used are provided from the Texas Women, Infants and Children Program's Comprehensive Social Marketing Program.

  2. Assessing Variability in Brain Tumor Segmentation to Improve Volumetric Accuracy and Characterization of Change.

    PubMed

    Rios Piedra, Edgar A; Taira, Ricky K; El-Saden, Suzie; Ellingson, Benjamin M; Bui, Alex A T; Hsu, William

    2016-02-01

    Brain tumor analysis is moving towards volumetric assessment of magnetic resonance imaging (MRI), providing a more precise description of disease progression to better inform clinical decision-making and treatment planning. While a multitude of segmentation approaches exist, inherent variability in the results of these algorithms may incorrectly indicate changes in tumor volume. In this work, we present a systematic approach to characterize variability in tumor boundaries that utilizes equivalence tests as a means to determine whether a tumor volume has significantly changed over time. To demonstrate these concepts, 32 MRI studies from 8 patients were segmented using four different approaches (statistical classifier, region-based, edge-based, knowledge-based) to generate different regions of interest representing tumor extent. We showed that across all studies, the average Dice coefficient for the superset of the different methods was 0.754 (95% confidence interval 0.701-0.808) when compared to a reference standard. We illustrate how variability obtained by different segmentations can be used to identify significant changes in tumor volume between sequential time points. Our study demonstrates that variability is an inherent part of interpreting tumor segmentation results and should be considered as part of the interpretation process.
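
    A minimal sketch of the overlap metric this and several neighbouring studies rely on, the Dice coefficient between a candidate segmentation and a reference standard; masks and values are illustrative.

        import numpy as np

        def dice(a, b):
            """Dice similarity coefficient, 2|A∩B| / (|A| + |B|)."""
            a, b = np.asarray(a, bool), np.asarray(b, bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        auto = np.zeros((64, 64), bool); auto[10:40, 10:40] = True
        ref = np.zeros((64, 64), bool); ref[15:45, 15:45] = True
        print(round(dice(auto, ref), 3))   # 0.694 for these offset squares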

  3. A Scalable Framework For Segmenting Magnetic Resonance Images

    PubMed Central

    Hore, Prodip; Goldgof, Dmitry B.; Gu, Yuhua; Maudsley, Andrew A.; Darkazanli, Ammar

    2009-01-01

    A fast, accurate and fully automatic method of segmenting magnetic resonance images of the human brain is introduced. The approach scales well allowing fast segmentations of fine resolution images. The approach is based on modifications of the soft clustering algorithm, fuzzy c-means, that enable it to scale to large data sets. Two types of modifications to create incremental versions of fuzzy c-means are discussed. They are much faster when compared to fuzzy c-means for medium to extremely large data sets because they work on successive subsets of the data. They are comparable in quality to application of fuzzy c-means to all of the data. The clustering algorithms coupled with inhomogeneity correction and smoothing are used to create a framework for automatically segmenting magnetic resonance images of the human brain. The framework is applied to a set of normal human brain volumes acquired from different magnetic resonance scanners using different head coils, acquisition parameters and field strengths. Results are compared to those from two widely used magnetic resonance image segmentation programs, Statistical Parametric Mapping and the FMRIB Software Library (FSL). The results are comparable to FSL while providing significant speed-up and better scalability to larger volumes of data. PMID:20046893
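
    A minimal sketch in the spirit of the scaled-up approach: run standard fuzzy c-means on a manageable sample of intensities, then the learned cluster centres can be extended to the remaining data by computing memberships only. This simplifies the paper's incremental variants; all names and parameters are illustrative.

        import numpy as np

        def fcm(x, c=3, m=2.0, n_iter=50, seed=0):
            """Basic fuzzy c-means on a 1-D intensity sample x; returns centres."""
            rng = np.random.default_rng(seed)
            centres = rng.choice(x, c, replace=False).astype(float)
            for _ in range(n_iter):
                d = np.abs(x[:, None] - centres[None, :]) + 1e-12
                u = 1.0 / (d ** (2 / (m - 1)))
                u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1
                centres = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)
            return np.sort(centres)

        rng = np.random.default_rng(4)
        volume = np.concatenate([rng.normal(mu, 5, 200_000) for mu in (30, 90, 150)])
        sample = rng.choice(volume, 10_000, replace=False)  # cluster a subset only
        print(fcm(sample))                                  # ≈ [30, 90, 150]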

  4. Cross-visit tumor sub-segmentation and registration with outlier rejection for dynamic contrast-enhanced MRI time series data.

    PubMed

    Buonaccorsi, G A; Rose, C J; O'Connor, J P B; Roberts, C; Watson, Y; Jackson, A; Jayson, G C; Parker, G J M

    2010-01-01

    Clinical trials of anti-angiogenic and vascular-disrupting agents often use biomarkers derived from DCE-MRI, typically reporting whole-tumor summary statistics and so overlooking spatial parameter variations caused by tissue heterogeneity. We present a data-driven segmentation method comprising tracer-kinetic model-driven registration for motion correction, conversion from MR signal intensity to contrast agent concentration for cross-visit normalization, iterative principal components analysis for imputation of missing data and dimensionality reduction, and statistical outlier detection using the minimum covariance determinant to obtain a robust Mahalanobis distance. After applying these techniques we cluster in the principal components space using k-means. We present results from a clinical trial of a VEGF inhibitor, using time-series data selected because of problems due to motion and outlier time series. We obtained spatially-contiguous clusters that map to regions with distinct microvascular characteristics. This methodology has the potential to uncover localized effects in trials using DCE-MRI-based biomarkers.
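
    A minimal sketch of the outlier-rejection step: a minimum covariance determinant fit yields robust Mahalanobis distances, and points with extreme distances are flagged before clustering. The data, cut-off, and dimensionality are illustrative assumptions.

        import numpy as np
        from sklearn.covariance import MinCovDet
        from scipy.stats import chi2

        rng = np.random.default_rng(5)
        pcs = rng.normal(size=(1000, 3))                 # principal-component scores
        pcs[:20] += 8.0                                  # a few corrupted time series
        d2 = MinCovDet(random_state=0).fit(pcs).mahalanobis(pcs)
        outliers = d2 > chi2.ppf(0.999, df=3)            # robust distance cut-off
        print(outliers.sum())                            # ≈ the 20 planted outliers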

  5. Subcellular object quantification with Squassh3C and SquasshAnalyst.

    PubMed

    Rizk, Aurélien; Mansouri, Maysam; Ballmer-Hofer, Kurt; Berger, Philipp

    2015-11-01

    Quantitative image analysis plays an important role in contemporary biomedical research. Squassh is a method for automatic detection, segmentation, and quantification of subcellular structures and analysis of their colocalization. Here we present the applications Squassh3C and SquasshAnalyst. Squassh3C extends the functionality of Squassh to three fluorescence channels and live-cell movie analysis. SquasshAnalyst is an interactive web interface for the analysis of Squassh3C object data. It provides segmentation image overview and data exploration, figure generation, object and image filtering, and a statistical significance test in an easy-to-use interface. The overall procedure combines the Squassh3C plug-in for the free biological image processing program ImageJ and a web application working in conjunction with the free statistical environment R, and it is compatible with Linux, MacOS X, or Microsoft Windows. Squassh3C and SquasshAnalyst are available for download at www.psi.ch/lbr/SquasshAnalystEN/SquasshAnalyst.zip.

  6. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    NASA Astrophysics Data System (ADS)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92) to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.

  7. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging.

    PubMed

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L; Beauchemin, Steven S; Rodrigues, George; Gaede, Stewart

    2015-02-21

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92) to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
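
    STAPLE itself iteratively estimates rater sensitivities and specificities by expectation-maximization; the simplest consensus estimate from several physicians' contours, shown in the sketch below as a baseline and not as STAPLE, is a voxel-wise majority vote. Data and names are illustrative.

        import numpy as np

        def majority_vote(masks):
            """Consensus of a list of boolean masks: True where more than half agree."""
            stack = np.stack([np.asarray(m, bool) for m in masks])
            return stack.sum(axis=0) > stack.shape[0] / 2.0

        rng = np.random.default_rng(6)
        truth = np.zeros((64, 64), bool); truth[20:44, 20:44] = True
        raters = [np.logical_xor(truth, rng.random(truth.shape) < 0.05)
                  for _ in range(6)]                     # 6 noisy "physicians"
        consensus = majority_vote(raters)
        print((consensus ^ truth).mean())                # far below the 5% noise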

  8. Practical no-gold-standard evaluation framework for quantitative imaging methods: application to lesion segmentation in positron emission tomography

    PubMed Central

    Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.

    2017-01-01

    Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883
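
    The bootstrap layer of such a framework is easy to illustrate in isolation. The sketch below, a generic example rather than the authors' code, resamples a set of per-lesion measurements with replacement to attach a confidence interval to a summary statistic.

        import numpy as np

        rng = np.random.default_rng(9)
        values = rng.normal(25.0, 6.0, 85)          # e.g. 85 per-lesion volumes
        boot = np.array([rng.choice(values, values.size, replace=True).std()
                         for _ in range(2000)])     # bootstrap the spread estimate
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(round(values.std(), 2), (round(lo, 2), round(hi, 2)))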

  9. Robust Skull-Stripping Segmentation Based on Irrational Mask for Magnetic Resonance Brain Images.

    PubMed

    Moldovanu, Simona; Moraru, Luminița; Biswas, Anjan

    2015-12-01

    This paper proposes a new method for simple, efficient, and robust removal of non-brain tissues in MR images, based on an irrational mask for filtration within a binary morphological operation framework. The proposed skull-stripping segmentation is based on two irrational 3 × 3 and 5 × 5 masks, having the sum of their weights equal to the transcendental number π, as approximated by the Gregory-Leibniz infinite series. This keeps the rate of useful-pixel loss low. The proposed method has been tested in two ways. First, it has been validated as a binary method by comparing and contrasting it with Otsu's, Sauvola's, Niblack's, and Bernsen's binary methods. Secondly, its accuracy has been verified against three state-of-the-art skull-stripping methods: the graph cuts method, the method based on the Chan-Vese active contour model, and the simplex mesh and histogram analysis skull stripping. The performance of the proposed method has been assessed using Dice scores, overlap and extra fractions, and sensitivity and specificity as statistical measures. The gold standard was provided by two expert neurologists. The proposed method has been tested and validated on 26 image series containing 216 images from two publicly available databases, the Whole Brain Atlas and the Internet Brain Segmentation Repository, which include a highly variable sample population (with reference to age, sex, and healthy/diseased status). The approach performs accurately on both standardized databases. The main advantages of the proposed method are its robustness and speed.
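
    A minimal sketch of the Gregory-Leibniz series mentioned above, π = 4(1 − 1/3 + 1/5 − 1/7 + …); the mask layout itself is not reproduced here, only the series the weight sum is drawn from.

        import numpy as np

        def gregory_leibniz(n_terms):
            k = np.arange(n_terms)
            return 4.0 * np.sum((-1.0) ** k / (2 * k + 1))

        for n in (10, 1_000, 100_000):
            print(n, gregory_leibniz(n))   # converges slowly toward 3.14159...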

  10. A Learning-Based Wrapper Method to Correct Systematic Errors in Automatic Image Segmentation: Consistently Improved Performance in Hippocampus, Cortex and Brain Segmentation

    PubMed Central

    Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.

    2011-01-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively. PMID:21237273

  11. Quantitative analysis of retina layer elasticity based on automatic 3D segmentation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    He, Youmin; Qu, Yueqiao; Zhang, Yi; Ma, Teng; Zhu, Jiang; Miao, Yusi; Humayun, Mark; Zhou, Qifa; Chen, Zhongping

    2017-02-01

    Age-related macular degeneration (AMD) is an eye condition that is considered one of the leading causes of blindness among people over 50. Recent studies suggest that the mechanical properties of retinal layers are affected during the early onset of disease. It is therefore necessary to identify such changes in the individual layers of the retina so as to provide useful information for disease diagnosis. In this study, we propose using an acoustic radiation force optical coherence elastography (ARF-OCE) system to dynamically excite the porcine retina and detect the vibrational displacement with phase-resolved Doppler optical coherence tomography. Due to the vibrational mechanism of the tissue response, the image quality is compromised during elastogram acquisition. In order to properly analyze the images, all signals, including the trigger and control signals for excitation as well as the detection and scanning signals, are synchronized within the OCE software and kept consistent between frames, enabling straightforward phase unwrapping and elasticity analysis. In addition, a combination of segmentation algorithms is used to accommodate the compromised image quality. An automatic 3D segmentation method has been developed to isolate and measure the relative elasticity of every individual retinal layer. Two different segmentation schemes, based on random walker and dynamic programming approaches, are implemented. The algorithm has been validated using a 3D region of the porcine retina, where individual layers have been isolated and analyzed using statistical methods. The errors compared to manual segmentation will be calculated.

  12. Salient target detection based on pseudo-Wigner-Ville distribution and Rényi entropy.

    PubMed

    Xu, Yuannan; Zhao, Yuan; Jin, Chenfei; Qu, Zengfeng; Liu, Liping; Sun, Xiudong

    2010-02-15

    We present what we believe to be a novel method based on the pseudo-Wigner-Ville distribution (PWVD) and Rényi entropy for salient target detection. Building on a study of the statistical properties of Rényi entropy computed via the PWVD, a residual-entropy-based saliency map of an input image can be obtained. From the saliency map, target detection is completed by simple and convenient threshold segmentation. Experimental results demonstrate that the proposed method can detect targets effectively in complex ground scenes.
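
    A minimal sketch of the Rényi entropy of order alpha for a normalised distribution, H_alpha(p) = log2(Σ p_i^alpha) / (1 − alpha); the order and test distributions are illustrative choices, not taken from the paper.

        import numpy as np

        def renyi_entropy(p, alpha=3.0):
            p = np.asarray(p, float)
            p = p[p > 0] / p.sum()
            return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

        uniform = np.ones(16) / 16
        peaked = np.array([0.9] + [0.1 / 15] * 15)
        print(renyi_entropy(uniform), renyi_entropy(peaked))   # 4.0 vs ~0.23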

  13. Method for stationarity-segmentation of spike train data with application to the Pearson cross-correlation.

    PubMed

    Quiroga-Lombard, Claudio S; Hass, Joachim; Durstewitz, Daniel

    2013-07-01

    Correlations among neurons are supposed to play an important role in computation and information coding in the nervous system. Empirically, functional interactions between neurons are most commonly assessed by cross-correlation functions. Recent studies have suggested that pairwise correlations may indeed be sufficient to capture most of the information present in neural interactions. Many applications of correlation functions, however, implicitly tend to assume that the underlying processes are stationary. This assumption will usually fail for real neurons recorded in vivo, since their activity during behavioral tasks is heavily influenced by stimulus-, movement-, or cognition-related processes as well as by more general processes like slow oscillations or changes in state of alertness. To address the problem of nonstationarity, we introduce a method for assessing stationarity empirically and then "slicing" spike trains into stationary segments according to the statistical definition of weak-sense stationarity. We examine pairwise Pearson cross-correlations (PCCs) under both stationary and nonstationary conditions and identify another source of covariance that can be differentiated from the covariance of the spike times and emerges as a consequence of residual nonstationarities after the slicing process: the covariance of the firing rates defined on each segment. Based on this, a correction of the PCC is introduced that accounts for the effect of segmentation. We probe these methods both on simulated data sets and on in vivo recordings from the prefrontal cortex of behaving rats. Besides removing nonstationarities, the present method may also be used for detecting significant events in spike trains.
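
    A minimal sketch of why slicing matters: two independent Poisson spike trains that share a slow rate modulation show a strong spurious whole-train correlation, which largely disappears when the Pearson correlation is computed within (approximately stationary) segments and averaged. Rates, bin counts, and segment length are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)
        rate = np.repeat([5.0, 20.0, 10.0, 40.0], 500)     # four "states"
        x = rng.poisson(rate)                              # binned counts, train 1
        y = rng.poisson(rate)                              # independent train 2

        whole = np.corrcoef(x, y)[0, 1]
        segs = [np.corrcoef(x[i:i + 500], y[i:i + 500])[0, 1]
                for i in range(0, 2000, 500)]
        print(round(whole, 3), round(float(np.mean(segs)), 3))  # ≈ 0.9 vs ≈ 0.0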

  14. A semiautomatic segmentation method for prostate in CT images using local texture classification and statistical shape modeling.

    PubMed

    Shahedi, Maysam; Halicek, Martin; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei

    2018-06-01

    Prostate segmentation in computed tomography (CT) images is useful for treatment planning and procedure guidance such as external beam radiotherapy and brachytherapy. However, because of the low soft-tissue contrast of CT images, manual segmentation of the prostate is a time-consuming task with high interobserver variation. In this study, we proposed a semiautomated, three-dimensional (3D) segmentation method for prostate CT images using shape and texture analysis, and we evaluated the method against manual reference segmentations. The prostate gland usually has a globular shape with a smoothly curved surface, and its shape can be accurately modeled or reconstructed from a limited number of well-distributed surface points. In a training dataset, using the prostate gland centroid as the origin of a coordinate system, we defined an intersubject correspondence between prostate surface points based on spherical coordinates. We applied this correspondence to generate a point distribution model of prostate shape using principal component analysis and to study the local texture difference between prostate and nonprostate tissue close to different prostate surface subregions. We used the learned shape and texture characteristics of the prostate in CT images and combined them with user inputs to segment a new image. We trained our segmentation algorithm using 23 CT images and tested it on two sets of images: 10 nonbrachytherapy and 37 post-low-dose-rate brachytherapy CT images. We used a set of error metrics to evaluate the segmentation results against two experts' manual reference segmentations. For both the nonbrachytherapy and post-brachytherapy image sets, the average measured Dice similarity coefficient (DSC) was 88% and the average mean absolute distance (MAD) was 1.9 mm. The average measured differences between the two experts on both datasets were 92% (DSC) and 1.1 mm (MAD). The proposed semiautomatic segmentation algorithm showed fast, robust, and accurate performance for 3D prostate segmentation of CT images, specifically when no previous intrapatient information, that is, previously segmented images, was available. The accuracy of the algorithm is comparable to the best performance results reported in the literature and approaches the interexpert variability observed in manual segmentation. © 2018 American Association of Physicists in Medicine.
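
    A minimal sketch of a point distribution model: corresponded surface points from training shapes are stacked, PCA yields a mean shape plus modes of variation, and new shapes are generated as x = mean + P·b. The data here are synthetic stand-ins for corresponded prostate surface points; dimensions and names are illustrative.

        import numpy as np

        rng = np.random.default_rng(8)
        n_shapes, n_points = 23, 200
        base = rng.normal(size=3 * n_points)
        shapes = base + 0.1 * rng.normal(size=(n_shapes, 3 * n_points))

        mean = shapes.mean(axis=0)
        u, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
        modes = vt[:5]                                  # first five eigen-shapes
        variances = s[:5] ** 2 / (n_shapes - 1)

        b = np.array([2.0, 0, 0, 0, 0]) * np.sqrt(variances)  # +2 sd of mode 1
        new_shape = (mean + modes.T @ b).reshape(n_points, 3)
        print(new_shape.shape)                          # (200, 3) surface points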

  15. Machine learning methods can replace 3D profile method in classification of amyloidogenic hexapeptides.

    PubMed

    Stanislawski, Jerzy; Kotulska, Malgorzata; Unold, Olgierd

    2013-01-17

    Amyloids are proteins capable of forming fibrils. Many of them underlie serious diseases, such as Alzheimer's disease, and the number of amyloid-associated diseases is constantly increasing. Recent studies indicate that amyloidogenic properties can be associated with short segments of amino acids, which transform the structure when exposed. A few hundred such peptides have been found experimentally. Experimental testing of all possible amino acid combinations is currently not feasible; instead, they can be predicted by computational methods. The 3D profile is a physicochemical method that has generated the most numerous dataset, ZipperDB; however, it is computationally very demanding. Here, we show that dataset generation can be accelerated. Two methods to increase the classification efficiency of amyloidogenic candidates are presented and tested: simplified 3D profile generation and machine learning methods. We generated a new dataset of hexapeptides using a more economical 3D profile algorithm, which showed very good classification overlap with ZipperDB (93.5%). The new part of our dataset contains 1779 segments, with 204 classified as amyloidogenic. The dataset of 6-residue sequences with their binary classification, based on the energy of the segment, was applied to train machine learning methods. A separate set of sequences from ZipperDB was used as a test set. The most effective methods were the Alternating Decision Tree and the Multilayer Perceptron. Both methods obtained an area under the ROC curve of 0.96, accuracy 91%, true positive rate ca. 78%, and true negative rate 95%. A few other machine learning methods also achieved good performance. The computational time was reduced from 18-20 CPU-hours (full 3D profile) to 0.5 CPU-hours (simplified 3D profile) to seconds (machine learning). We showed that the simplified profile generation method does not introduce an error with regard to the original method, while increasing the computational efficiency. Our new dataset proved representative enough to use simple statistical methods for testing amyloidogenicity based only on six-letter sequences. Statistical machine learning methods such as the Alternating Decision Tree and the Multilayer Perceptron can replace the energy-based classifier, with the advantage of greatly reduced computational time and simplicity of analysis. Additionally, a decision tree provides a set of easily interpretable rules.

  16. Tumor or abnormality identification from magnetic resonance images using statistical region fusion based segmentation.

    PubMed

    Subudhi, Badri Narayan; Thangaraj, Veerakumar; Sankaralingam, Esakkirajan; Ghosh, Ashish

    2016-11-01

    In this article, a statistical fusion based segmentation technique is proposed to identify different abnormalities in magnetic resonance images (MRI). The proposed scheme follows seed selection, region growing-merging, and fusion of multiple image segments. In this process, an image is initially divided into a number of blocks, and for each block we compute the phase component of the Fourier transform. The phase component of each block reflects the gray-level variation within the block but retains a large correlation across blocks. Hence a singular value decomposition (SVD) technique is applied to generate a singular value for each block. A thresholding procedure is then applied to these singular values to identify edgy and smooth regions, and seed points are selected for segmentation. Considering each seed point, we perform a binary segmentation of the complete MRI, and hence with all seed points we obtain an equal number of binary images. A parcel-based statistical fusion process is used to fuse all the binary images into multiple segments. The effectiveness of the proposed scheme is tested on identifying different abnormalities: prostatic carcinoma detection, tuberculous granuloma identification, and intracranial neoplasm or brain tumor detection. The proposed technique is validated by comparing its results against seven state-of-the-art techniques with six performance evaluation measures. Copyright © 2016 Elsevier Inc. All rights reserved.
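
    A minimal sketch of the block analysis step as described: take the Fourier phase of each image block and use its largest singular value as an edginess measure, which a threshold could then split into edgy and smooth blocks. Block size and the test image are illustrative choices.

        import numpy as np

        def block_singular_values(img, bs=8):
            """Largest singular value of each block's Fourier-phase matrix."""
            h, w = (img.shape[0] // bs) * bs, (img.shape[1] // bs) * bs
            vals = np.empty((h // bs, w // bs))
            for i in range(0, h, bs):
                for j in range(0, w, bs):
                    phase = np.angle(np.fft.fft2(img[i:i + bs, j:j + bs]))
                    vals[i // bs, j // bs] = np.linalg.svd(phase, compute_uv=False)[0]
            return vals

        img = np.zeros((64, 64)); img[:, 30:] = 1.0    # one vertical edge
        print(block_singular_values(img)[0].round(1))  # the edgy block stands out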

  17. A method of 2D/3D registration of a statistical mouse atlas with a planar X-ray projection and an optical photo.

    PubMed

    Wang, Hongkai; Stout, David B; Chatziioannou, Arion F

    2013-05-01

    The development of sophisticated and high throughput whole body small animal imaging technologies has created a need for improved image analysis and increased automation. The registration of a digital mouse atlas to individual images is a prerequisite for automated organ segmentation and uptake quantification. This paper presents a fully-automatic method for registering a statistical mouse atlas with individual subjects based on an anterior-posterior X-ray projection and a lateral optical photo of the mouse silhouette. The mouse atlas was trained as a statistical shape model based on 83 organ-segmented micro-CT images. For registration, a hierarchical approach is applied which first registers high contrast organs, and then estimates low contrast organs based on the registered high contrast organs. To register the high contrast organs, a 2D-registration-back-projection strategy is used that deforms the 3D atlas based on the 2D registrations of the atlas projections. For validation, this method was evaluated using 55 subjects of preclinical mouse studies. The results showed that this method can compensate for moderate variations of animal postures and organ anatomy. Two different metrics, the Dice coefficient and the average surface distance, were used to assess the registration accuracy of major organs. The Dice coefficients vary from 0.31 ± 0.16 for the spleen to 0.88 ± 0.03 for the whole body, and the average surface distance varies from 0.54 ± 0.06 mm for the lungs to 0.85 ± 0.10 mm for the skin. The method was compared with a direct 3D deformation optimization (without 2D-registration-back-projection) and a single-subject atlas registration (instead of using the statistical atlas). The comparison revealed that the 2D-registration-back-projection strategy significantly improved the registration accuracy, and the use of the statistical mouse atlas led to more plausible organ shapes than the single-subject atlas. This method was also tested with shoulder xenograft tumor-bearing mice, and the results showed that the registration accuracy of most organs was not significantly affected by the presence of shoulder tumors, except for the lungs and the spleen. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Novel multiresolution mammographic density segmentation using pseudo 3D features and adaptive cluster merging

    NASA Astrophysics Data System (ADS)

    He, Wenda; Juette, Arne; Denton, Erica R. E.; Zwiggelaar, Reyer

    2015-03-01

    Breast cancer is the most frequently diagnosed cancer in women. Early detection, precise identification of women at risk, and application of appropriate disease prevention measures are by far the most effective ways to overcome the disease. Successful mammographic density segmentation is a key aspect in deriving correct tissue composition, ensuring an accurate mammographic risk assessment. However, mammographic densities have not yet been fully incorporated into non-image-based risk prediction models (e.g. the Gail and Tyrer-Cuzick models) because of unreliable segmentation consistency and accuracy. This paper presents a novel multiresolution mammographic density segmentation: a concept of stack representation is proposed, and 3D texture features are extracted by adapting techniques based on classic 2D first-order statistics. An unsupervised clustering technique was employed to achieve mammographic segmentation, with two improvements: 1) consistent segmentation through an optimal centroid initialisation step, and 2) a significantly reduced number of missegmentations through an adaptive cluster merging technique. A set of full-field digital mammograms was used in the evaluation. Visual assessment indicated substantial improvement in segmented anatomical structures and tissue-specific areas, especially in low mammographic density categories. The developed method demonstrated an ability to improve the quality of mammographic segmentation via clustering, with results indicating a 26% improvement in the proportion of segmented images of good quality compared with the standard clustering approach. This in turn can be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in decision making prior to surgery and/or treatment.
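
    The two clustering improvements can be illustrated schematically. The sketch below pairs a deterministic percentile-based centroid initialisation with a simple merge of near-coincident centroids; the paper's actual initialisation and merging criteria are not given in the abstract, so both rules here are stand-ins.

        import numpy as np
        from sklearn.cluster import KMeans

        def segment_with_merging(features, k=10, merge_tol=0.05):
            # features: (n_pixels, n_features). Deterministic initialisation
            # at evenly spaced percentiles gives repeatable clustering;
            # clusters whose centroids nearly coincide are then fused.
            init = np.percentile(features, np.linspace(5, 95, k), axis=0)
            km = KMeans(n_clusters=k, init=init, n_init=1).fit(features)
            centres, remap = km.cluster_centers_, np.arange(k)
            order = np.argsort(centres[:, 0])       # sweep by first feature
            for a, b in zip(order[:-1], order[1:]):
                if np.linalg.norm(centres[a] - centres[b]) < merge_tol:
                    remap[remap == remap[b]] = remap[a]
            return remap[km.labels_]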

  19. Novel multimodality segmentation using level sets and Jensen-Rényi divergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markel, Daniel, E-mail: daniel.markel@mail.mcgill.ca; Zaidi, Habib; Geneva Neuroscience Center, Geneva University, CH-1205 Geneva

    2013-12-15

    Purpose: Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially for extension to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance, which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. Methods: A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. Results: The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with an R² value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. Conclusions: The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.
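
    The divergence at the heart of this method is straightforward to compute for discrete intensity histograms: the Rényi entropy of the weighted mixture minus the weighted mean of the individual entropies. A sketch follows; the order alpha = 2 and the equal weights are illustrative choices, not values taken from the paper. In the level set setting, the contour would be evolved to maximize this divergence between the inside and outside histograms.

        import numpy as np

        def renyi_entropy(p, alpha=2.0):
            # Rényi entropy of a discrete distribution (alpha != 1).
            p = np.asarray(p, float)
            p = p / p.sum()
            return np.log((p ** alpha).sum()) / (1.0 - alpha)

        def jensen_renyi_divergence(hists, weights=None, alpha=2.0):
            # Entropy of the weighted mixture minus the weighted mean of
            # the individual histogram entropies.
            hists = [np.asarray(h, float) / np.asarray(h, float).sum()
                     for h in hists]
            n = len(hists)
            w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights)
            mixture = sum(wi * hi for wi, hi in zip(w, hists))
            return renyi_entropy(mixture, alpha) - sum(
                wi * renyi_entropy(hi, alpha) for wi, hi in zip(w, hists))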

  20. Development of a model of the coronary arterial tree for the 4D XCAT phantom

    NASA Astrophysics Data System (ADS)

    Fung, George S. K.; Segars, W. Paul; Gullberg, Grant T.; Tsui, Benjamin M. W.

    2011-09-01

    A detailed three-dimensional (3D) model of the coronary artery tree with cardiac motion has great potential for applications in a wide variety of medical imaging research areas. In this work, we first developed a computer-generated 3D model of the coronary arterial tree for the heart in the extended cardiac-torso (XCAT) phantom, thereby creating a realistic computer model of the human anatomy. The coronary arterial tree model was based on two datasets: (1) a gated cardiac dual-source computed tomography (CT) angiographic dataset obtained from a normal human subject and (2) statistical morphometric data of porcine hearts. The initial proximal segments of the vasculature and the anatomical details of the boundaries of the ventricles were defined by segmenting the CT data. An iterative rule-based generation method was developed and applied to extend the coronary arterial tree beyond the initial proximal segments. The algorithm was governed by three factors: (1) statistical morphometric measurements of the connectivity, lengths and diameters of the arterial segments; (2) avoidance forces from other vessel segments and the boundaries of the myocardium, and (3) optimality principles which minimize the drag force at the bifurcations of the generated tree. Using this algorithm, the 3D computational model of the largest six orders of the coronary arterial tree was generated, which spread across the myocardium of the left and right ventricles. The 3D coronary arterial tree model was then extended to 4D to simulate different cardiac phases by deforming the original 3D model according to the motion vector map of the 4D cardiac model of the XCAT phantom at the corresponding phases. As a result, a detailed and realistic 4D model of the coronary arterial tree was developed for the XCAT phantom by imposing constraints of anatomical and physiological characteristics of the coronary vasculature. This new 4D coronary artery tree model provides a unique simulation tool that can be used in the development and evaluation of instrumentation and methods for imaging normal and pathological hearts with myocardial perfusion defects.
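
    As a concrete example of a bifurcation optimality principle of the kind mentioned above, a Murray-type rule relates parent and daughter radii through r_p^m = r_1^m + r_2^m, with flow carried in proportion to r^m. The sketch below is offered only as an illustration; whether the phantom's drag-minimisation criterion reduces to this particular rule is not stated in the abstract.

        def murray_daughter_radii(parent_radius, flow_fraction=0.5, m=3.0):
            # Murray-type rule: flow scales as r**m, so a daughter carrying
            # a fraction f of the parent flow gets radius r_parent*f**(1/m);
            # the radii then satisfy r_p**m = r_1**m + r_2**m.
            r1 = parent_radius * flow_fraction ** (1.0 / m)
            r2 = parent_radius * (1.0 - flow_fraction) ** (1.0 / m)
            return r1, r2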

  1. An operational definition of a statistically meaningful trend.

    PubMed

    Bryhn, Andreas C; Dimberg, Peter H

    2011-04-28

    Linear trend analysis of time series is standard procedure in many scientific disciplines. If the number of data is large, a trend may be statistically significant even if data are scattered far from the trend line. This study introduces and tests a quality criterion for time trends referred to as statistical meaningfulness, which is a stricter quality criterion for trends than high statistical significance. The time series is divided into intervals and interval mean values are calculated. Thereafter, r² and p values are calculated from regressions concerning time and interval mean values. If r² ≥ 0.65 at p ≤ 0.05 in any of these regressions, then the trend is regarded as statistically meaningful. Out of ten investigated time series from different scientific disciplines, five displayed statistically meaningful trends. A Microsoft Excel application (add-in) was developed which can perform statistical meaningfulness tests and which may increase the operationality of the test. The presented method for distinguishing statistically meaningful trends should be reasonably uncomplicated for researchers with basic statistics skills and may thus be useful for determining which trends are worth analysing further, for instance with respect to causal factors. The method can also be used for determining which segments of a time trend may be particularly worthwhile to focus on.
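
    The test is simple enough to restate in code. The sketch below follows the recipe in the abstract, trying several interval divisions and accepting the trend if any interval-mean regression reaches r² ≥ 0.65 at p ≤ 0.05; the candidate interval counts are a user choice, not prescribed by the paper.

        import numpy as np
        from scipy import stats

        def statistically_meaningful(t, y, interval_counts=(3, 4, 5),
                                     r2_min=0.65, p_max=0.05):
            # Divide the series into n intervals, regress interval means on
            # interval mid-times, accept if any regression passes the test.
            t, y = np.asarray(t, float), np.asarray(y, float)
            for n in interval_counts:
                edges = np.linspace(t.min(), t.max(), n + 1)
                mids, means = [], []
                for lo, hi in zip(edges[:-1], edges[1:]):
                    m = (t >= lo) & (t <= hi)
                    if m.any():
                        mids.append(t[m].mean())
                        means.append(y[m].mean())
                if len(mids) < 3:
                    continue   # too few interval means for a regression
                res = stats.linregress(mids, means)
                if res.rvalue ** 2 >= r2_min and res.pvalue <= p_max:
                    return True
            return False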

  2. Dental measurements and Bolton index reliability and accuracy obtained from 2D digital, 3D segmented CBCT, and 3D intraoral laser scanner

    PubMed Central

    San José, Verónica; Bellot-Arcís, Carlos; Tarazona, Beatriz; Zamora, Natalia; O Lagravère, Manuel

    2017-01-01

    Background To compare the reliability and accuracy of direct and indirect dental measurements derived from two types of 3D virtual models, generated by intraoral laser scanning (ILS) and by segmentation of cone beam computed tomography (CBCT), comparing these with a 2D digital model. Material and Methods One hundred patients were selected. All patients' records included initial plaster models, an intraoral scan and a CBCT. Patients' dental arches were scanned with the iTero® intraoral scanner while the CBCTs were segmented to create three-dimensional models. To obtain 2D digital models, plaster models were scanned using a conventional 2D scanner. When digital models had been obtained using these three methods, direct dental measurements were measured and indirect measurements were calculated. Differences between methods were assessed by means of paired t-tests and regression models. Intra- and inter-observer error were analyzed using Dahlberg's d and coefficients of variation. Results Intraobserver and interobserver error for the ILS model was less than 0.44 mm, while for segmented CBCT models the error was less than 0.97 mm. ILS models provided statistically and clinically acceptable accuracy for all dental measurements, while CBCT models showed a tendency to underestimate measurements in the lower arch, although within the limits of clinical acceptability. Conclusions ILS and CBCT segmented models are both reliable and accurate for dental measurements. Integration of ILS with CBCT scans would combine dental and skeletal information. Key words: CBCT, intraoral laser scanner, 2D digital models, 3D models, dental measurements, reliability. PMID:29410764

  3. Deformable segmentation via sparse representation and dictionary learning.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Metaxas, Dimitris N

    2012-10-01

    "Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, while the low-level appearance information is weak or mis-leading, shape priors play a more important role to guide a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently a novel shape prior modeling method has been proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through the sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics is not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as shape priors, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, making a robust, accurate and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. Our method is applied on a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly-proposed modules not only significant reduce the computational complexity, but also improve the overall accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Automated posterior cranial fossa volumetry by MRI: applications to Chiari malformation type I.

    PubMed

    Bagci, A M; Lee, S H; Nagornaya, N; Green, B A; Alperin, N

    2013-09-01

    Quantification of PCF volume and the degree of PCF crowdedness were found beneficial for differential diagnosis of tonsillar herniation and prediction of surgical outcome in CMI. However, lack of automated methods limits the clinical use of PCF volumetry. An atlas-based method for automated PCF segmentation tailored for CMI is presented. The method performance is assessed in terms of accuracy and spatial overlap with manual segmentation. The degree of association between PCF volumes and the lengths of previously proposed linear landmarks is reported. T1-weighted volumetric MR imaging data with 1-mm isotropic resolution obtained with the use of a 3T scanner from 14 patients with CMI and 3 healthy subjects were used for the study. Manually delineated PCF from 9 patients was used to establish a CMI-specific reference for an atlas-based automated PCF parcellation approach. Agreement between manual and automated segmentation of 5 different CMI datasets was verified by means of the t test. Measurement reproducibility was established through the use of 2 repeated scans from 3 healthy subjects. Degree of linear association between PCF volume and 6 linear landmarks was determined by means of Pearson correlation. PCF volumes measured by use of the automated method and with manual delineation were similar, 196.2 ± 8.7 mL versus 196.9 ± 11.0 mL, respectively. The mean relative difference of -0.3 ± 1.9% was not statistically significant. Low measurement variability, with a mean absolute percentage value of 0.6 ± 0.2%, was achieved. None of the PCF linear landmarks were significantly associated with PCF volume. PCF and tissue content volumes can be reliably measured in patients with CMI by use of an atlas-based automated segmentation method.

  5. Techniques to derive geometries for image-based Eulerian computations

    PubMed Central

    Dillard, Seth; Buchholz, James; Vigmostad, Sarah; Kim, Hyunggun; Udaykumar, H.S.

    2014-01-01

    Purpose The performance of three frequently used level set-based segmentation methods is examined for the purpose of defining features and boundary conditions for image-based Eulerian fluid and solid mechanics models. The focus of the evaluation is to identify an approach that produces the best geometric representation from a computational fluid/solid modeling point of view. In particular, extraction of geometries from a wide variety of imaging modalities and noise intensities, to supply to an immersed boundary approach, is targeted. Design/methodology/approach Two- and three-dimensional images, acquired from optical, X-ray CT, and ultrasound imaging modalities, are segmented with active contours, k-means, and adaptive clustering methods. Segmentation contours are converted to level sets and smoothed as necessary for use in fluid/solid simulations. Results produced by the three approaches are compared visually and with contrast ratio, signal-to-noise ratio, and contrast-to-noise ratio measures. Findings While the active contours method possesses built-in smoothing and regularization and produces continuous contours, the clustering methods (k-means and adaptive clustering) produce discrete (pixelated) contours that require smoothing using speckle-reducing anisotropic diffusion (SRAD). Thus, for images with high contrast and low to moderate noise, active contours are generally preferable. However, adaptive clustering is found to be far superior to the other two methods for images possessing high levels of noise and global intensity variations, due to its more sophisticated use of local pixel/voxel intensity statistics. Originality/value It is often difficult to know a priori which segmentation will perform best for a given image type, particularly when geometric modeling is the ultimate goal. This work offers insight to the algorithm selection process, as well as outlining a practical framework for generating useful geometric surfaces in an Eulerian setting. PMID:25750470

  6. Detection of infarct lesions from single MRI modality using inconsistency between voxel intensity and spatial location--a 3-D automatic approach.

    PubMed

    Shen, Shan; Szameitat, André J; Sterr, Annette

    2008-07-01

    Detection of infarct lesions using traditional segmentation methods is always problematic due to the intensity similarity between lesions and normal tissues, which is why multispectral MRI modalities have often been employed for this purpose. However, the high cost of MRI scans and the severity of patient conditions restrict the collection of multiple images. Therefore, in this paper, a new 3-D automatic lesion detection approach was proposed which requires only a single type of anatomical MRI scan. It was developed on the premise that, when lesions are present, the voxel-intensity-based segmentation and the spatial-location-based tissue distribution should be inconsistent in the regions of lesions. The degree of this inconsistency was calculated, indicating the likelihood of tissue abnormality. Lesions were identified when the inconsistency exceeded a defined threshold. In this approach, the intensity-based segmentation was implemented by the conventional fuzzy c-means (FCM) algorithm, while the spatial location of tissues was provided by prior tissue probability maps. The use of simulated MRI lesions allowed us to quantitatively evaluate the performance of the proposed method, as the size and location of lesions were prespecified. The results showed that our method effectively detected lesions with 40-80% signal reduction compared to normal tissues (similarity index > 0.7). The capability of the proposed method in practice was also demonstrated on real infarct lesions from 15 stroke patients, where the lesions detected were in broad agreement with the true lesions. Furthermore, a comparison to a statistical segmentation approach presented in the literature suggested that our 3-D lesion detection approach was more reliable. Future work will focus on adapting the current method to multiple sclerosis lesion detection.
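
    The inconsistency idea can be sketched compactly: normalise the intensity-based memberships and the spatial priors per voxel and measure their disagreement. The L1-type measure below is one plausible formalisation, not necessarily the paper's exact definition, and the lesion threshold is an assumption.

        import numpy as np

        def inconsistency_map(memberships, priors, eps=1e-12):
            # memberships: (n_tissues, ...) intensity-based (e.g. FCM) values;
            # priors: (n_tissues, ...) registered tissue probability maps.
            # Returns a voxel-wise disagreement score in [0, 1].
            u = memberships / (memberships.sum(axis=0, keepdims=True) + eps)
            p = priors / (priors.sum(axis=0, keepdims=True) + eps)
            return 0.5 * np.abs(u - p).sum(axis=0)

        # candidate lesion mask (threshold is an assumption, not the paper's):
        # lesions = inconsistency_map(u, p) > 0.7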

  7. SU-D-202-03: Statistical Segmentation On Quantitative CT for Assessing Spatial Tumor Response During Radiation Therapy Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schott, D; Chen, X; Klawikowski, S

    2016-06-15

    Purpose: To develop a method to segment regions of interest (ROIs) in tumor with statistically similar Hounsfield unit (HU) values and/or HU changes during chemoradiation therapy (CRT) delivery, in order to assess spatial tumor treatment response based on daily CTs acquired during CRT delivery. Methods: A three-region map of ROIs with differential HUs is generated by sampling neighboring voxels around a selected voxel and comparing them to the mean of the entire ROI using a t-test. The cumulative distribution function, P, is calculated from the t-test, and its value is assigned to the selected voxel; this is repeated over all voxels in the initial ROI. Three regions are defined: (1-P) < 0.00001 (mid region), and 0.00001 < (1-P) with the local mean either greater than or lower than the baseline. The test is then expanded to compare daily CT sets acquired during routine CT-guided RT delivery using a CT-on-rails, with the first-fraction CT used as the baseline for comparison. We tested 15 pancreatic head tumor cases undergoing CRT to identify the ROIs and changes corresponding to normal, fibrotic, and tumor tissue. The obtained ROIs were compared with MRI-ADC maps acquired pre- and post-CRT. Results: The ROIs in 13 out of 15 patients' first-fraction CTs and pre-CRT MRIs matched the general region and slices covered, as did those in 6 out of the 9 patients with post-CRT MRIs. The high-HU region designated by the t-test was seen to correlate with the tumor region in MR, and these ROIs remained positioned within the same region over the course of treatment. In patients with poorly delineated tumors in MR, the t-test was inconclusive. Conclusion: The proposed statistical segmentation technique shows the potential to identify regions in tumor with differential HUs and HU changes during CRT delivery for patients with pancreatic head cancer.
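
    A schematic version of the voxel-wise t-test map might look as follows: for each ROI voxel, a t statistic of its neighbourhood against the whole-ROI mean is converted to a CDF value P, and the 1e-5 cutoff from the abstract splits the ROI into three regions. The neighbourhood radius and the exact region logic are assumptions, since the abstract leaves them underspecified.

        import numpy as np
        from scipy import stats

        def t_test_region_map(volume, roi_mask, radius=2, cut=1e-5):
            # volume, roi_mask: 3-D arrays; returns the P map and the three
            # regions (neighbourhood mean above, below, or near baseline).
            baseline = volume[roi_mask].mean()
            pmap = np.full(volume.shape, np.nan)
            for z, y, x in np.argwhere(roi_mask):
                nb = volume[max(z - radius, 0):z + radius + 1,
                            max(y - radius, 0):y + radius + 1,
                            max(x - radius, 0):x + radius + 1].ravel()
                tstat = stats.ttest_1samp(nb, baseline).statistic
                pmap[z, y, x] = stats.t.cdf(tstat, df=nb.size - 1)
            higher = (1.0 - pmap) < cut        # mean above baseline
            lower = pmap < cut                 # mean below baseline
            mid = roi_mask & ~higher & ~lower  # similar to baseline
            return pmap, higher, lower, mid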

  8. Development of techniques for producing static strata maps and development of photointerpretive methods based on multitemporal LANDSAT data

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator); Hay, C. M.; Thomas, R. W.; Benson, A. S.

    1977-01-01

    Progress in the evaluation of the static stratification procedure and the development of alternative photointerpretive techniques to the present LACIE procedure for the identification of training fields is reported. Statistically significant signature controlling variables were defined for use in refining the stratification procedure. A subset of the 1973-74 Kansas LACIE segments for wheat was analyzed.

  9. Segmentation-based L-filtering of speckle noise in ultrasonic images

    NASA Astrophysics Data System (ADS)

    Kofidis, Eleftherios; Theodoridis, Sergios; Kotropoulos, Constantine L.; Pitas, Ioannis

    1994-05-01

    We introduce segmentation-based L-filters, that is, filtering processes combining segmentation and (nonadaptive) optimum L-filtering, and use them for the suppression of speckle noise in ultrasonic (US) images. With the aid of a suitable modification of the learning vector quantizer self-organizing neural network, the image is segmented into regions of approximately homogeneous first-order statistics. For each such region, a minimum mean-squared error L-filter is designed on the basis of a multiplicative noise model, using the histogram of grey values as an estimate of the parent distribution of the noisy observations and a suitable estimate of the original signal in the corresponding region. Thus, we obtain a bank of L-filters corresponding to and operating on different image regions. Simulation results on a simulated US B-mode image of a tissue-mimicking phantom are presented which verify the superiority of the proposed method over a number of conventional filtering strategies in terms of a suitably defined signal-to-noise ratio measure and detection-theoretic performance measures.
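
    An L-filter is simply a fixed weighted sum of the order statistics within a sliding window, with the weights designed from the local noise model. The sketch below applies one coefficient vector globally for illustration; in the paper a separate minimum mean-squared error design would be used per segmented region.

        import numpy as np

        def l_filter(image, coeffs, size=3):
            # Output pixel = fixed weighted sum of the sorted samples in its
            # window; coeffs has length size*size and should sum to 1.
            pad = size // 2
            padded = np.pad(image.astype(float), pad, mode="reflect")
            out = np.empty(image.shape, dtype=float)
            for i in range(image.shape[0]):
                for j in range(image.shape[1]):
                    window = np.sort(padded[i:i + size, j:j + size], axis=None)
                    out[i, j] = window @ coeffs
            return out

        # The median filter is the special case with an indicator coefficient
        # vector: coeffs = np.zeros(9); coeffs[4] = 1.0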

  10. Enhanced statistical tests for GWAS in admixed populations: assessment using African Americans from CARe and a Breast Cancer Consortium.

    PubMed

    Pasaniuc, Bogdan; Zaitlen, Noah; Lettre, Guillaume; Chen, Gary K; Tandon, Arti; Kao, W H Linda; Ruczinski, Ingo; Fornage, Myriam; Siscovick, David S; Zhu, Xiaofeng; Larkin, Emma; Lange, Leslie A; Cupples, L Adrienne; Yang, Qiong; Akylbekova, Ermeg L; Musani, Solomon K; Divers, Jasmin; Mychaleckyj, Joe; Li, Mingyao; Papanicolaou, George J; Millikan, Robert C; Ambrosone, Christine B; John, Esther M; Bernstein, Leslie; Zheng, Wei; Hu, Jennifer J; Ziegler, Regina G; Nyante, Sarah J; Bandera, Elisa V; Ingles, Sue A; Press, Michael F; Chanock, Stephen J; Deming, Sandra L; Rodriguez-Gil, Jorge L; Palmer, Cameron D; Buxbaum, Sarah; Ekunwe, Lynette; Hirschhorn, Joel N; Henderson, Brian E; Myers, Simon; Haiman, Christopher A; Reich, David; Patterson, Nick; Wilson, James G; Price, Alkes L

    2011-04-01

    While genome-wide association studies (GWAS) have primarily examined populations of European ancestry, more recent studies often involve additional populations, including admixed populations such as African Americans and Latinos. In admixed populations, linkage disequilibrium (LD) exists both at a fine scale in ancestral populations and at a coarse scale (admixture-LD) due to chromosomal segments of distinct ancestry. Disease association statistics in admixed populations have previously considered SNP association (LD mapping) or admixture association (mapping by admixture-LD), but not both. Here, we introduce a new statistical framework for combining SNP and admixture association in case-control studies, as well as methods for local ancestry-aware imputation. We illustrate the gain in statistical power achieved by these methods by analyzing data of 6,209 unrelated African Americans from the CARe project genotyped on the Affymetrix 6.0 chip, in conjunction with both simulated and real phenotypes, as well as by analyzing the FGFR2 locus using breast cancer GWAS data from 5,761 African-American women. We show that, at typed SNPs, our method yields an 8% increase in statistical power for finding disease risk loci compared to the power achieved by standard methods in case-control studies. At imputed SNPs, we observe an 11% increase in statistical power for mapping disease loci when our local ancestry-aware imputation framework and the new scoring statistic are jointly employed. Finally, we show that our method increases statistical power in regions harboring the causal SNP in the case when the causal SNP is untyped and cannot be imputed. Our methods and our publicly available software are broadly applicable to GWAS in admixed populations.

  11. Discriminative confidence estimation for probabilistic multi-atlas label fusion.

    PubMed

    Benkarim, Oualid M; Piella, Gemma; González Ballester, Miguel Angel; Sanroma, Gerard

    2017-12-01

    Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently been shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists of propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion rely on local patch similarity, probabilistic statistical frameworks, or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearances and segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance features based on atlas labelmaps that are used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches in the majority of the evaluated brain structures and shows more robustness to registration errors. Copyright © 2017 Elsevier B.V. All rights reserved.
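
    The fusion step itself reduces to confidence-weighted voting over the warped labelmaps, of which majority voting is the uniform-weight special case. The sketch below takes per-atlas, per-voxel confidences as given; estimating them is the paper's actual contribution and is not reproduced here.

        import numpy as np

        def weighted_label_fusion(warped_labelmaps, confidences):
            # warped_labelmaps: (n_atlases, ...) integer label arrays on the
            # target grid; confidences: matching per-atlas, per-voxel weights.
            labels = np.unique(warped_labelmaps)
            scores = np.stack([((warped_labelmaps == lab) * confidences).sum(axis=0)
                               for lab in labels])
            return labels[np.argmax(scores, axis=0)]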

  12. Evaluation of the Transverse Displacement of the Proximal Segment After Bilateral Sagittal Split Ramus Osteotomy With Different Lingual Split Patterns and Advancement Amounts Using the Finite Element Method.

    PubMed

    Dai, Zhi; Hou, Min; Ma, Wen; Song, Da-Li; Zhang, Chun-Xiang; Zhou, Wei-Yuan

    2016-11-01

    To evaluate transverse displacement of the proximal segment after bilateral sagittal split ramus osteotomy (BSSO) advancement with different lingual split patterns and advancement amounts and to determine the influential factors related to mandibular width. A 3-dimensional finite element model of the mandible including the temporomandibular joint was created for a presurgical simulation and for BSSO with lingual split patterns I (T1; Hunsuck split) and II (T2; Obwegeser split). The mandible was advanced 3 mm (A3) and 8 mm (A8) and fixated with a conventional titanium plate. Ansys software was used to measure the linear distances of the interproximal segments and to analyze the transverse displacement distribution of proximal segments after applying the load of masticatory muscle force groups. After surgical simulation, T1A3, T1A8, T2A3, and T2A8 showed increased transverse widths (mean, 2.99, 4.70, 2.36, and 4.42 mm, respectively). For transverse augmentation, there was a statistically significant difference between the 2 different mandibular advancement amounts in T1 and in T2 (P ≤ .000), but no significant difference was observed between T1 and T2 (P ≥ .058). The maximum transverse displacement distribution in the proximal segment was measured around the gonial area, and the early contact area was found near the border between the horizontal and sagittal osteotomy lines. Transverse displacements of proximal segments occur after BSSO advancement with T1 and T2, and transverse augmentation has statistically meaningful effects depending on the amount of advancement; however, no differences in transverse augmentation between T1 and T2 were identified. The fulcrum caused by the early contact between the proximal and distal segments could be an influential factor related to mandibular width. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  13. Watershed Regressions for Pesticides (WARP) models for predicting stream concentrations of multiple pesticides

    USGS Publications Warehouse

    Stone, Wesley W.; Crawford, Charles G.; Gilliom, Robert J.

    2013-01-01

    Watershed Regressions for Pesticides for multiple pesticides (WARP-MP) are statistical models developed to predict concentration statistics for a wide range of pesticides in unmonitored streams. The WARP-MP models use the national atrazine WARP models in conjunction with an adjustment factor for each additional pesticide. The WARP-MP models perform best for pesticides with application timing and methods similar to those used with atrazine. For other pesticides, WARP-MP models tend to overpredict concentration statistics for the model development sites. For WARP and WARP-MP, the less-than-ideal sampling frequency for the model development sites leads to underestimation of the shorter-duration concentrations; hence, the WARP models tend to underpredict 4- and 21-d maximum moving-average concentrations, with median errors ranging from 9 to 38%. As a result of this sampling bias, pesticides that performed well with the model development sites are expected to have predictions that are biased low for these shorter-duration concentration statistics. The overprediction by WARP-MP apparent for some of the pesticides is variably offset by underestimation of the model development concentration statistics. Of the 112 pesticides used in the WARP-MP application to stream segments nationwide, 25 were predicted to have concentration statistics with a 50% or greater probability of exceeding one or more aquatic life benchmarks in one or more stream segments. Geographically, many of the modeled streams in the Corn Belt Region were predicted to have one or more pesticides that exceeded an aquatic life benchmark during 2009, indicating the potential vulnerability of streams in this region.

  14. A random forest model based classification scheme for neonatal amplitude-integrated EEG.

    PubMed

    Chen, Weiting; Wang, Yu; Cao, Guitao; Chen, Guoqiang; Gu, Qiufang

    2014-01-01

    Modern medical advances have greatly increased the survival rate of infants, while they remain in the higher-risk group for neurological problems later in life. For infants with encephalopathy or seizures, identification of the extent of brain injury is clinically challenging. Continuous amplitude-integrated electroencephalography (aEEG) monitoring offers a possibility to directly monitor the brain functional state of newborns over hours, and has seen increasing application in neonatal intensive care units (NICUs). This paper presents a novel combined feature set for aEEG and applies the random forest (RF) method to classify aEEG tracings. To that end, a series of experiments was conducted on 282 aEEG tracing cases (209 normal and 73 abnormal ones). Basic features, statistical features, and segmentation features were extracted from both the tracing as a whole and the segmented recordings to form a combined feature set, which was then sent to a classifier. The significance of features, the data segmentation, the optimization of RF parameters, and the problem of imbalanced datasets were examined through experiments. Experiments were also conducted to evaluate the performance of RF on aEEG signal classification, compared with several other widely used classifiers including SVM-Linear, SVM-RBF, ANN, Decision Tree (DT), Logistic Regression (LR), ML, and LDA. The combined feature set characterizes aEEG signals better than the basic, statistical, and segmentation features individually. With the combined feature set, the proposed RF-based aEEG classification system achieved a correct rate of 92.52% and a high F1-score of 95.26%. Among the seven classifiers examined in our work, the RF method achieved the highest correct rate, sensitivity, specificity, and F1-score, meaning that RF outperforms all the other classifiers considered here. The results show that the proposed RF-based aEEG classification system with the combined feature set is efficient and helpful for better detecting brain disorders in newborns.
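
    A minimal end-to-end sketch of such an RF classification experiment with scikit-learn is given below. The feature matrix is a random placeholder standing in for the combined feature set (the real feature extraction is study-specific), and the class balancing option is one common way to address the 209-versus-73 imbalance discussed above, not necessarily the paper's.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # Placeholder data: 282 tracings (209 normal, 73 abnormal) with,
        # say, 40 basic + statistical + segmentation features per tracing.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(282, 40))
        y = np.array([0] * 209 + [1] * 73)

        rf = RandomForestClassifier(n_estimators=500,
                                    class_weight="balanced",  # counters the
                                    random_state=0)           # class imbalance
        print(cross_val_score(rf, X, y, cv=5, scoring="f1").mean())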

  15. Prostate segmentation in MR images using discriminant boundary features.

    PubMed

    Yang, Meijuan; Li, Xuelong; Turkbey, Baris; Choyke, Peter L; Yan, Pingkun

    2013-02-01

    Segmentation of the prostate in magnetic resonance images is increasingly needed to assist the diagnosis and surgical planning of prostate carcinoma. Due to the natural variability of anatomical structures, statistical shape models have been widely applied in medical image segmentation. Robust and distinctive local features are critical for a statistical shape model to achieve accurate segmentation results. The scale-invariant feature transform (SIFT) has been employed to capture the information of the local patch surrounding the boundary. However, when SIFT features are used for segmentation, the scale and variance are not adapted to the location of the point of interest. To deal with this, discriminant analysis from machine learning is introduced to measure the distinctiveness of the learned SIFT features for each landmark directly and to make the scale and variance adaptive to location. As the gray values and gradients vary significantly over the boundary of the prostate, separate appearance descriptors are built for each landmark and then optimized. After that, a two-stage coarse-to-fine segmentation approach is carried out by incorporating the local shape variations. Finally, experiments on prostate segmentation from MR images are conducted to verify the efficiency of the proposed algorithms.

  16. Semi-Tomographic Gamma Scanning Technique for Non-Destructive Assay of Radioactive Waste Drums

    NASA Astrophysics Data System (ADS)

    Gu, Weiguo; Rao, Kaiyuan; Wang, Dezhong; Xiong, Jiemei

    2016-12-01

    Segmented gamma scanning (SGS) and tomographic gamma scanning (TGS) are two traditional detection techniques for low- and intermediate-level radioactive waste drums. This paper proposes a detection method named semi-tomographic gamma scanning (STGS) that avoids the poor detection accuracy of SGS while shortening the long detection time of TGS. The method and its algorithm synthesize the principles of SGS and TGS: each segment is divided into annular voxels, and tomography is used in the radiation reconstruction. The accuracy of STGS is verified by experiments and simulations for 208-liter standard waste drums containing three types of nuclides. Cases of point or multi-point sources and uniform or nonuniform materials are employed for comparison. The results show that STGS exhibits a large improvement in detection performance; the reconstruction error and statistical bias are reduced by one quarter to one third or less in most cases compared with SGS.

  17. Reliability of semiautomated computational methods for estimating tibiofemoral contact stress in the Multicenter Osteoarthritis Study.

    PubMed

    Anderson, Donald D; Segal, Neil A; Kern, Andrew M; Nevitt, Michael C; Torner, James C; Lynch, John A

    2012-01-01

    Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semi-automated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and interrater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93-0.99) and good inter-rater reliability (0.84-0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.

  18. [A clinical study on different decompression methods in cervical spondylosis].

    PubMed

    Ma, Xun; Zhao, Xiao-fei; Zhao, Yi-bo

    2009-04-15

    To analyze different decompression methods for treating cervical spondylosis based on imageological evaluation. Two hundred and sixty-three consecutive patients with cervical spondylosis between Nov. 2004 and Oct. 2007 were involved in this study. Patients were assigned to different operation groups based on preoperative imageological evaluation, covering anterior and posterior decompression methods. The anterior methods were discectomy of one to three segments with autogenous iliac graft, titanium mesh, or cage fusion and titanium plate fixation; subtotal vertebrectomy of one to two segments with autogenous iliac graft or titanium mesh fusion and titanium plate fixation; or discectomy plus subtotal vertebrectomy. The posterior methods included expansive single open-door laminoplasty and other operation types. All the patients were divided into groups by preoperative imageological evaluation, age, sex, and course of disease. We then collected each group's preoperative and postoperative JOA scores and mean improvement rate to evaluate the postoperative effect of the different decompression methods. Two hundred and thirty-five patients were followed up for a mean period of 18 months (range, 4 to 36 months). JOA scores of all patients were improved by different degrees after operation. Anterior and posterior decompression methods both achieved high mean improvement rates. There were no significant differences in mean improvement rates between the anterior groups, nor between male and female patients (P > 0.05). The effect decreased as age increased or as the course of disease lengthened; statistical significance existed among the different age groups and between the course-of-disease groups (P < 0.05). Anterior and posterior decompression methods can both achieve a good effect. The key points are to choose the surgical indication correctly, decompress thoroughly, and make the fusion reliable and the fixation firm. The methods should be differentiated according to the patient's imageological evaluation. The anterior operation types included discectomy of one to three segments, subtotal vertebrectomy of one to two segments, and discectomy plus subtotal vertebrectomy.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe; De Bernardi, Elisabetta

    Purpose: Quantitative ¹⁸F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness vs clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to the previous analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was demonstrated. The inclusion of the spatial prior improved segmentation accuracy only for lesions surrounded by heterogeneous background: in the relevant simulation subset, the median VE significantly decreased from 13% to 7%. Results on clinical data were found in accordance with simulations, with absolute VE <7%, Dice >0.85, CE <0.30, and HD <0.81. Conclusions: The sole introduction of constraints based on background modeling outperformed standard GMM and the other tested algorithms. Insertion of a spatial prior improved the accuracy for realistic cases of objects in heterogeneous backgrounds. Moreover, robustness against initialization supports the applicability in a clinical setting. In conclusion, application-driven constraints can generally improve the capabilities of GMM and statistical clustering algorithms.
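
    The central modification, holding the background class parameters fixed during EM, can be sketched for the 1-D intensity case as below. This is a schematic of the idea with four fixed background classes and four free lesion classes; the paper's exact update rules, convergence criteria, and spatial prior are not reproduced.

        import numpy as np
        from scipy.stats import norm

        def constrained_gmm_1d(x, bg_means, bg_sds, n_lesion=4, n_iter=100):
            # Four background classes keep the means/SDs measured in a
            # lesion-free background VOI fixed; only the lesion classes
            # (and all mixing weights) are re-estimated.
            x = np.asarray(x, float)
            means = np.concatenate([bg_means,
                                    np.linspace(x.min(), x.max(), n_lesion)])
            sds = np.concatenate([bg_sds, np.full(n_lesion, x.std())])
            k_bg, k = len(bg_means), len(means)
            w = np.full(k, 1.0 / k)
            for _ in range(n_iter):
                r = w * norm.pdf(x[:, None], means, sds)      # E-step
                r /= r.sum(axis=1, keepdims=True) + 1e-300
                w = r.mean(axis=0)                            # M-step: weights
                for j in range(k_bg, k):                      # lesion classes only
                    rj = r[:, j]
                    means[j] = (rj * x).sum() / rj.sum()
                    sds[j] = np.sqrt((rj * (x - means[j]) ** 2).sum()
                                     / rj.sum()) + 1e-6
            return w, means, sds, r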

  20. Random forest learning of ultrasonic statistical physics and object spaces for lesion detection in 2D sonomammography

    NASA Astrophysics Data System (ADS)

    Sheet, Debdoot; Karamalis, Athanasios; Kraft, Silvan; Noël, Peter B.; Vag, Tibor; Sadhu, Anup; Katouzian, Amin; Navab, Nassir; Chatterjee, Jyotirmoy; Ray, Ajoy K.

    2013-03-01

    Breast cancer is the most common form of cancer in women. Early diagnosis can significantly improve life expectancy and allow different treatment options. Clinicians favor 2D ultrasonography for breast tissue abnormality screening due to its high sensitivity and specificity compared to competing technologies. However, inter- and intra-observer variability in the visual assessment and reporting of lesions often handicaps its performance. Existing Computer Assisted Diagnosis (CAD) systems, though able to detect solid lesions, are often restricted in performance. These restrictions include the inability to (1) detect lesions of multiple sizes and shapes and (2) differentiate hypo-echoic lesions from their posterior acoustic shadowing. In this work we present a completely automatic system for detection and segmentation of breast lesions in 2D ultrasound images. We employ random forests to learn a tissue-specific primal to discriminate breast lesions from surrounding normal tissues. This enables the system to detect lesions of multiple shapes and sizes, as well as to discriminate hypo-echoic lesions from the associated posterior acoustic shadowing. The primal comprises (i) multiscale estimated ultrasonic statistical physics and (ii) scale-space characteristics. The random forest learns the lesion vs. background primal from a database of 2D ultrasound images with labeled lesions. For segmentation, the posterior probabilities of lesion pixels estimated by the learnt random forest are hard thresholded to provide a random walks segmentation stage with starting seeds. Our method achieves detection with 99.19% accuracy and segmentation with mean contour-to-contour error < 3 pixels on a set of 40 images with 49 lesions.
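
    The final stage, turning forest posteriors into seeds for a random-walker segmentation, is compact enough to sketch with scikit-image. The 0.9/0.1 seeding thresholds and the beta parameter below are illustrative values, not the paper's.

        import numpy as np
        from skimage.segmentation import random_walker

        def seeded_random_walk(image, lesion_posterior, hi=0.9, lo=0.1, beta=130):
            # Hard-threshold the posterior into confident lesion and
            # background seeds, leave the rest unlabeled (0), and let the
            # random walker assign the remaining pixels.
            seeds = np.zeros(image.shape, dtype=int)
            seeds[lesion_posterior >= hi] = 1   # confident lesion seeds
            seeds[lesion_posterior <= lo] = 2   # confident background seeds
            return random_walker(image, seeds, beta=beta) == 1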

  1. Comparison of statistical algorithms for detecting homogeneous river reaches along a longitudinal continuum

    NASA Astrophysics Data System (ADS)

    Leviandier, Thierry; Alber, A.; Le Ber, F.; Piégay, H.

    2012-02-01

    Seven methods designed to delineate homogeneous river segments, belonging to four families (tests of homogeneity, contrast enhancing, spatially constrained classification, and hidden Markov models), are compared, first on their principles, then on a case study and on theoretical templates. These templates contain patterns found in the case study but not considered in the standard assumptions of statistical methods, such as gradients and curvilinear structures. The influence of data resolution, noise, and weak satisfaction of the assumptions underlying the methods is investigated. The control of the number of reaches obtained, necessary for meaningful comparisons, is discussed. No method is found that outperforms all the others on all trials. However, the methods with sequential algorithms (keeping at order n + 1 all breakpoints found at order n) fail more often than those running complete optimisation at any order. The Hubert-Kehagias method and hidden Markov models are the most successful at identifying subpatterns encapsulated within the templates. Ergodic hidden Markov models are, moreover, liable to exhibit transition areas.
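
    As an example of the hidden-Markov-model family applied to this task, the sketch below fits a Gaussian HMM to a longitudinal profile of a river variable and cuts reaches where the decoded state sequence changes. It assumes the external hmmlearn package; the state count must be set by the user, echoing the paper's point about controlling the number of reaches.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM   # assumed external dependency

        def hmm_reaches(profile, n_states=4, seed=0):
            # Fit a Gaussian HMM to a 1-D longitudinal profile, decode the
            # most likely state sequence, and cut wherever the state changes.
            X = np.asarray(profile, dtype=float).reshape(-1, 1)
            model = GaussianHMM(n_components=n_states, n_iter=100,
                                random_state=seed).fit(X)
            states = model.predict(X)                  # Viterbi decoding
            breakpoints = np.flatnonzero(np.diff(states)) + 1
            return states, breakpoints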

  2. Neutrosophic segmentation of breast lesions for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Lee, Juhun; Nishikawa, Robert M.; Reiser, Ingrid; Boone, John M.

    2017-03-01

    We proposed a neutrosophic approach for segmenting breast lesions in dedicated breast computed tomography (bCT) images. The neutrosophic set (NS) considers the nature and properties of neutrality (or indeterminacy), which is neither true nor false. We considered the image noise as the indeterminate component, while treating the breast lesion and other breast areas as the true and false components. We first transformed the image into the NS domain, where each voxel is described by its memberships in the True, Indeterminate, and False sets. The operations α-mean, β-enhancement, and γ-plateau iteratively smooth and contrast-enhance the image to reduce the noise level of the true set. Once the true image no longer changes, we applied an existing algorithm for bCT images, RGI segmentation, to the resulting image to segment the breast lesions. We compared the segmentation performance of the proposed method (named NS-RGI) to that of the regular RGI segmentation, using a total of 122 breast lesions (44 benign, 78 malignant) from 123 non-contrast bCT cases and measuring performance with the DICE coefficient. The average DICE value of the NS-RGI was 0.82 (STD: 0.09), while that of the RGI was 0.80 (STD: 0.12). The difference between the two DICE values was statistically significant (paired t test, p-value = 0.0007). We conducted a subsequent feature analysis on the resulting segmentations. The classifier performance for the NS-RGI (AUC = 0.8) improved over that of the RGI (AUC = 0.69, p-value = 0.006).

  3. A probabilistic approach to segmentation and classification of neoplasia in uterine cervix images using color and geometric features

    NASA Astrophysics Data System (ADS)

    Srinivasan, Yeshwanth; Hernes, Dana; Tulpule, Bhakti; Yang, Shuyu; Guo, Jiangling; Mitra, Sunanda; Yagneswaran, Sriraja; Nutter, Brian; Jeronimo, Jose; Phillips, Benny; Long, Rodney; Ferris, Daron

    2005-04-01

    Automated segmentation and classification of diagnostic markers in medical imagery are challenging tasks. Numerous algorithms for segmentation and classification based on statistical approaches of varying complexity are found in the literature. However, the design of an efficient and automated algorithm for precise classification of desired diagnostic markers is extremely image-specific. The National Library of Medicine (NLM), in collaboration with the National Cancer Institute (NCI), is creating an archive of 60,000 digitized color images of the uterine cervix. NLM is developing tools for the analysis and dissemination of these images over the Web for the study of visual features correlated with precancerous neoplasia and cancer. To enable indexing of images of the cervix, it is essential to develop algorithms for the segmentation of regions of interest, such as acetowhitened regions, and automatic identification and classification of regions exhibiting mosaicism and punctation. Success of such algorithms depends primarily on the selection of relevant features representing the region of interest. We present statistical classification and segmentation algorithms based on color and geometric features, yielding excellent identification of the regions of interest. The distinct classification of the mosaic regions from the non-mosaic ones has been obtained by clustering multiple geometric and color features of the segmented sections using various morphological and statistical approaches. Such automated classification methodologies will facilitate content-based image retrieval from the digital archive of the uterine cervix and have the potential to develop into an image-based screening tool for cervical cancer.

  4. Shape regularized active contour based on dynamic programming for anatomical structure segmentation

    NASA Astrophysics Data System (ADS)

    Yu, Tianli; Luo, Jiebo; Singhal, Amit; Ahuja, Narendra

    2005-04-01

    We present a method to incorporate nonlinear shape prior constraints into segmenting different anatomical structures in medical images. Kernel space density estimation (KSDE) is used to derive the nonlinear shape statistics and enable building a single model for a class of objects with nonlinearly varying shapes. The object contour is coerced by image-based energy into the correct shape sub-distribution (e.g., left or right lung), without the need for model selection. In contrast to an earlier algorithm that uses a local gradient-descent search (susceptible to local minima), we propose an algorithm that iterates between dynamic programming (DP) and shape regularization. DP is capable of finding an optimal contour in the search space that maximizes a cost function related to the difference between the interior and exterior of the object. To enforce the nonlinear shape prior, we propose two shape regularization methods, global and local regularization. Global regularization is applied after each DP search to move the entire shape vector in the shape space, in a gradient-descent fashion, toward the position of probable shapes learned from training. The regularized shape is used as the starting shape for the next iteration. Local regularization is accomplished through modifying the search space of the DP: the modified search space only allows a certain amount of deformation of the local shape from the starting shape. Both regularization methods ensure consistency between the resulting shape and the training shapes, while still preserving DP's ability to search over a large range and avoid local minima. Our algorithm was applied to two different segmentation tasks for radiographic images: lung field and clavicle segmentation. Both applications have shown that our method is effective and versatile in segmenting various anatomical structures under prior shape constraints, and it is robust to noise and local minima caused by clutter (e.g., blood vessels) and other similar structures (e.g., ribs). We believe that the proposed algorithm represents a major step in the paradigm shift to object segmentation under nonlinear shape constraints.
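
    The DP search over contour candidates can be illustrated generically: given a cost for each radial candidate at each angular position, a smoothness-constrained optimal path is found by accumulation and backtracking. The sketch below is a generic polar-grid DP (minimizing cost, i.e. negative boundary evidence), not the paper's exact energy or its modified search space.

        import numpy as np

        def dp_optimal_contour(cost, max_step=1):
            # cost[i, j]: cost of radial candidate i at angular position j.
            # Accumulate column by column, allowing at most max_step radial
            # change between neighbouring positions, then backtrack.
            cost = np.asarray(cost, dtype=float)
            n_r, n_a = cost.shape
            acc = cost.copy()
            back = np.zeros((n_r, n_a), dtype=int)
            for j in range(1, n_a):
                for i in range(n_r):
                    lo, hi = max(0, i - max_step), min(n_r, i + max_step + 1)
                    k = lo + int(np.argmin(acc[lo:hi, j - 1]))
                    acc[i, j] = cost[i, j] + acc[k, j - 1]
                    back[i, j] = k
            path = [int(np.argmin(acc[:, -1]))]
            for j in range(n_a - 1, 0, -1):
                path.append(back[path[-1], j])
            return path[::-1]    # one radial index per angular position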

  5. Wave chaos in a randomly inhomogeneous waveguide: spectral analysis of the finite-range evolution operator.

    PubMed

    Makarov, D V; Kon'kov, L E; Uleysky, M Yu; Petrov, P S

    2013-01-01

    The problem of sound propagation in a randomly inhomogeneous oceanic waveguide is considered. An underwater sound channel in the Sea of Japan is taken as an example. Our attention is concentrated on the domains of finite-range ray stability in phase space and their influence on wave dynamics. These domains can be found by means of the one-step Poincaré map. To study manifestations of finite-range ray stability, we introduce the finite-range evolution operator (FREO) describing transformation of a wave field in the course of propagation along a finite segment of a waveguide. Carrying out statistical analysis of the FREO spectrum, we estimate the contribution of regular domains and explore their evanescence with increasing length of the segment. We utilize several methods of spectral analysis: analysis of eigenfunctions by expanding them over modes of the unperturbed waveguide, approximation of level-spacing statistics by means of the Berry-Robnik distribution, and the procedure used by A. Relano and coworkers [Relano et al., Phys. Rev. Lett. 89, 244102 (2002); Relano, Phys. Rev. Lett. 100, 224101 (2008)]. Comparing the results obtained with different methods, we find that the method based on the statistical analysis of FREO eigenfunctions is the most favorable for estimating the contribution of regular domains. It allows one to find directly the waveguide modes whose refraction is regular despite the random inhomogeneity. For example, it is found that near-axial sound propagation in the Sea of Japan preserves stability even over distances of hundreds of kilometers due to the presence of a shearless torus in the classical phase space. Increasing the acoustic wavelength weakens scattering, resulting in recovery of eigenfunction localization near periodic orbits of the one-step Poincaré map.

  6. Comparison of T1-weighted 2D TSE, 3D SPGR, and two-point 3D Dixon MRI for automated segmentation of visceral adipose tissue at 3 Tesla.

    PubMed

    Fallah, Faezeh; Machann, Jürgen; Martirosian, Petros; Bamberg, Fabian; Schick, Fritz; Yang, Bin

    2017-04-01

    To evaluate and compare conventional T1-weighted 2D turbo spin echo (TSE), T1-weighted 3D volumetric interpolated breath-hold examination (VIBE), and two-point 3D Dixon-VIBE sequences for automatic segmentation of visceral adipose tissue (VAT) volume at 3 Tesla by measuring and compensating for errors arising from intensity nonuniformity (INU) and partial volume effects (PVE). The body trunks of 28 volunteers with body mass index values ranging from 18 to 41.2 kg/m² (30.02 ± 6.63 kg/m²) were scanned at 3 Tesla using three imaging techniques. Automatic methods were applied to reduce INU and PVE and to segment VAT. The automatically segmented VAT volumes obtained from all acquisitions were then statistically and objectively evaluated against the manually segmented (reference) VAT volumes. Comparing the reference volumes with the VAT volumes automatically segmented over the uncorrected images showed that INU led to an average relative volume difference of -59.22 ± 11.59, 2.21 ± 47.04, and -43.05 ± 5.01 % for the TSE, VIBE, and Dixon images, respectively, while PVE led to average differences of -34.85 ± 19.85, -15.13 ± 11.04, and -33.79 ± 20.38 %. After signal correction, differences of -2.72 ± 6.60, 34.02 ± 36.99, and -2.23 ± 7.58 % were obtained between the reference and the automatically segmented volumes. A paired-sample two-tailed t test revealed no significant difference between the reference and automatically segmented VAT volumes of the corrected TSE (p = 0.614) and Dixon (p = 0.969) images, but showed a significant VAT overestimation using the corrected VIBE images. Under similar imaging conditions and spatial resolution, automatically segmented VAT volumes obtained from the corrected TSE and Dixon images agreed with each other and with the reference volumes. These results demonstrate the efficacy of the signal correction methods and the similar accuracy of TSE and Dixon imaging for automatic volumetry of VAT at 3 Tesla.

  7. Does History Repeat Itself? Wavelets and the Phylodynamics of Influenza A

    PubMed Central

    Tom, Jennifer A.; Sinsheimer, Janet S.; Suchard, Marc A.

    2012-01-01

    Unprecedented global surveillance of viruses will result in massive sequence data sets that require new statistical methods. These data sets press the limits of Bayesian phylogenetics as the high-dimensional parameters that comprise a phylogenetic tree increase the already sizable computational burden of these techniques. This burden often results in partitioning the data set, for example, by gene, and inferring the evolutionary dynamics of each partition independently, a compromise that results in stratified analyses that depend only on data within a given partition. However, parameter estimates inferred from these stratified models are likely strongly correlated, considering they rely on data from a single data set. To overcome this shortfall, we exploit the existing Monte Carlo realizations from stratified Bayesian analyses to efficiently estimate a nonparametric hierarchical wavelet-based model and learn about the time-varying parameters of effective population size that reflect levels of genetic diversity across all partitions simultaneously. Our methods are applied to complete genome influenza A sequences that span 13 years. We find that broad peaks and trends, as opposed to seasonal spikes, in the effective population size history distinguish individual segments from the complete genome. We also address hypotheses regarding intersegment dynamics within a formal statistical framework that accounts for correlation between segment-specific parameters. PMID:22160768

  8. Hybrid Surgery Combined with Dynamic Stabilization System and Fusion for the Multilevel Degenerative Disease of the Lumbosacral Spine

    PubMed Central

    Lee, Soo Eon; Kim, Hyun Jib

    2015-01-01

    Background As motion-preserving techniques have been developed, the concept of hybrid surgery has emerged, involving the simultaneous application of two different kinds of devices: a dynamic stabilization system and a fusion technique. In the present study, the application of hybrid surgery for lumbosacral degenerative disease involving two segments and its long-term outcome were investigated. Methods Fifteen patients with hybrid surgery (Hybrid group) and 10 patients with two-segment fusion (Fusion group) were retrospectively compared. Results Preoperative grade for disc degeneration was not different between the two groups, and the most commonly operated segment had the most degenerated disc grade in both groups: L4-5 and L5-S1 in the Hybrid group, and L3-4 and L4-5 in the Fusion group. Over 48 months of follow-up, lumbar lordosis and range of motion (ROM) at the T12-S1 global segment were preserved in the Hybrid group, and the segmental ROM at the dynamically stabilized segment was maintained at final follow-up. The Fusion group had a significantly decreased global ROM and a decreased segmental ROM with larger angles compared to the Hybrid group. Defining a 2-mm decrease in posterior disc height (PDH) as radiologic adjacent segment pathology (ASP), these changes were observed in 6 and 7 patients in the Hybrid and Fusion groups, respectively. However, the final PDH at the adjacent segment above was significantly higher in the Hybrid group. Pain scores for back and legs were much reduced in both groups; functional outcome measured by the Oswestry disability index (ODI), however, showed greater improvement in the Hybrid group. Conclusion Hybrid surgery, combining a dynamic stabilization system and fusion, can be an effective surgical treatment for multilevel degenerative lumbosacral spinal disease, maintaining lumbar motion and delaying disc degeneration. PMID:26484008

  9. Automated Multi-Atlas Segmentation of Hippocampal and Extrahippocampal Subregions in Alzheimer's Disease at 3T and 7T: What Atlas Composition Works Best?

    PubMed

    Xie, Long; Shinohara, Russell T; Ittyerah, Ranjit; Kuijf, Hugo J; Pluta, John B; Blom, Kim; Kooistra, Minke; Reijmer, Yael D; Koek, Huiberdina L; Zwanenburg, Jaco J M; Wang, Hongzhi; Luijten, Peter R; Geerlings, Mirjam I; Das, Sandhitsu R; Biessels, Geert Jan; Wolk, David A; Yushkevich, Paul A; Wisse, Laura E M

    2018-01-01

    Multi-atlas segmentation, a popular technique implemented in the Automated Segmentation of Hippocampal Subfields (ASHS) software, utilizes multiple expert-labelled images ("atlases") to delineate medial temporal lobe substructures. As this multi-atlas method is increasingly being employed in early Alzheimer's disease (AD) research, it is becoming important to know how the construction of the atlas set, in terms of proportions of controls and patients with mild cognitive impairment (MCI) and/or AD, affects segmentation accuracy. To evaluate whether the proportion of controls in the training set affects the segmentation accuracy for both controls and patients with MCI and/or early AD at 3T and 7T, we performed cross-validation experiments varying the proportion of control subjects in the training set, ranging from a patient-only to a control-only set. Segmentation accuracy on the test set was evaluated by the Dice similarity coefficient (DSC). A two-stage statistical analysis was applied to determine whether atlas composition is linked to segmentation accuracy in control subjects and patients, for 3T and 7T. The different atlas compositions did not significantly affect segmentation accuracy at 3T or for patients at 7T. For controls at 7T, including more control subjects in the training set significantly improved the segmentation accuracy, but only marginally, with a maximum improvement of 0.0003 DSC per percent increase of control subjects in the training set. ASHS was robust in this study, and the results indicate that future studies investigating hippocampal subfields in early AD populations can be flexible in the selection of their atlas compositions.

  10. Automated intraretinal layer segmentation of optical coherence tomography images using graph-theoretical methods

    NASA Astrophysics Data System (ADS)

    Roy, Priyanka; Gholami, Peyman; Kuppuswamy Parthasarathy, Mohana; Zelek, John; Lakshminarayanan, Vasudevan

    2018-02-01

    Segmentation of spectral-domain Optical Coherence Tomography (SD-OCT) images facilitates visualization and quantification of sub-retinal layers for diagnosis of retinal pathologies. However, manual segmentation is subjective, expertise-dependent, and time-consuming, which limits the applicability of SD-OCT. Efforts are therefore being made to implement active contours, artificial intelligence, and graph search to automatically segment retinal layers with accuracy comparable to that of manual segmentation, to ease clinical decision-making; low optical contrast, heavy speckle noise, and pathologies nevertheless pose challenges to automated segmentation. The graph-based image segmentation approach stands out from the rest because of its ability to minimize the cost function while maximizing the flow. This study developed and implemented a shortest-path-based graph-search algorithm for automated intraretinal layer segmentation of SD-OCT images. The algorithm estimates the minimal-weight path between two graph nodes based on their gradients. Boundary position indices (BPI) are computed from the transitions between pixel intensities. The mean difference between the BPIs of two consecutive layers quantifies the individual layer thickness, which shows statistically insignificant differences when compared to a previous study [for the overall retina: p = 0.17; for individual layers: p > 0.05 (except one layer: p = 0.04)]. These results substantiate the accurate delineation of seven intraretinal boundaries in SD-OCT images by this algorithm, with a mean computation time of 0.93 seconds (64-bit Windows 10, Core i5, 8 GB RAM). Besides handling denoising internally, the algorithm is computationally optimized to restrict segmentation to a user-defined region of interest. The efficiency and reliability of this algorithm, even under noisy image conditions, make it clinically applicable.
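
    A minimal sketch of a shortest-path boundary search of the kind described above, written as a simple dynamic-programming pass over a gradient image (not the authors' implementation): edge weights are low where the vertical gradient is strong, so the optimal left-to-right path traces the layer transition.

        import numpy as np

        def boundary_by_shortest_path(gradient):
            """Return one row index per column tracing a minimal-weight path."""
            # Low cost where the gradient magnitude is high.
            cost = 1.0 - (gradient - gradient.min()) / (gradient.max() - gradient.min() + 1e-12)
            rows, cols = cost.shape
            acc = np.full((rows, cols), np.inf)
            acc[:, 0] = cost[:, 0]
            back = np.zeros((rows, cols), dtype=int)
            for c in range(1, cols):             # sweep left to right,
                for r in range(rows):            # allowing row moves of -1..+1
                    r0, r1 = max(0, r - 1), min(rows, r + 2)
                    prev = acc[r0:r1, c - 1]
                    k = int(np.argmin(prev))
                    acc[r, c] = cost[r, c] + prev[k]
                    back[r, c] = r0 + k
            # Backtrack from the cheapest endpoint in the final column.
            path = [int(np.argmin(acc[:, -1]))]
            for c in range(cols - 1, 0, -1):
                path.append(back[path[-1], c])
            return np.array(path[::-1])

        # Toy image: a bright band whose upper edge the path should follow.
        img = np.zeros((60, 80)); img[30:, :] = 1.0
        grad = np.abs(np.diff(img, axis=0, prepend=img[:1]))
        print(boundary_by_shortest_path(grad)[:5])   # expect rows near 30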

  11. Spatio-Temporal Regularization for Longitudinal Registration to Subject-Specific 3d Template

    PubMed Central

    Guizard, Nicolas; Fonov, Vladimir S.; García-Lorenzo, Daniel; Nakamura, Kunio; Aubert-Broche, Bérengère; Collins, D. Louis

    2015-01-01

    Neurodegenerative diseases such as Alzheimer's disease present subtle anatomical brain changes before the appearance of clinical symptoms. Manual structure segmentation is long and tedious and although automatic methods exist, they are often performed in a cross-sectional manner where each time-point is analyzed independently. With such analysis methods, bias, error and longitudinal noise may be introduced. Noise due to MR scanners and other physiological effects may also introduce variability in the measurement. We propose to use 4D non-linear registration with spatio-temporal regularization to correct for potential longitudinal inconsistencies in the context of structure segmentation. The major contribution of this article is the use of individual template creation with spatio-temporal regularization of the deformation fields for each subject. We validate our method with different sets of real MRI data, compare it to available longitudinal methods such as FreeSurfer, SPM12, QUARC, TBM, and KNBSI, and demonstrate that spatially local temporal regularization yields more consistent rates of change of global structures resulting in better statistical power to detect significant changes over time and between populations. PMID:26301716

  12. Brain vascular image segmentation based on fuzzy local information C-means clustering

    NASA Astrophysics Data System (ADS)

    Hu, Chaoen; Liu, Xia; Liang, Xiao; Hui, Hui; Yang, Xin; Tian, Jie

    2017-02-01

    Light sheet fluorescence microscopy (LSFM) is a powerful optical-resolution fluorescence microscopy technique that enables observation of the mouse brain vascular network at cellular resolution. However, micro-vessel structures exhibit intensity inhomogeneity in LSFM images, which complicates the extraction of line structures. In this work, we developed a vascular image segmentation method that enhances vessel details, which should be useful for estimating statistics such as micro-vessel density. Since the eigenvalues of the Hessian matrix and their signs describe different geometric structures in images, they can be used to construct a vascular similarity function and enhance line signals; the main idea of our method is therefore to cluster the pixel values of the enhanced image. Our method contains three steps: 1) calculate the multiscale gradients and the differences between the eigenvalues of the Hessian matrix; 2) to generate the enhanced micro-vessel structures, train a feed-forward neural network on 2.26 million pixels to model the correlations between the multiscale gradients and the eigenvalue differences; 3) apply fuzzy local information c-means clustering (FLICM) to the pixel values of the enhanced line signals. To verify the feasibility and effectiveness of this method, mouse brain vascular images were acquired by a commercial light-sheet microscope in our lab. The segmentation experiments showed that the Dice similarity coefficient can reach up to 85%. The results illustrate that our approach to extracting the line structures of blood vessels dramatically improves the vascular image and enables accurate extraction of blood vessels in LSFM images.
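
    A sketch of the Hessian-eigenvalue line-enhancement idea described above, assuming Gaussian-derivative Hessians at a few scales; the trained neural network and the FLICM clustering stage are not reproduced here.

        import numpy as np
        from scipy import ndimage

        def hessian_eigenvalues_2d(image, sigma):
            """Eigenvalues of the scale-normalized Hessian at every pixel."""
            h_aa = ndimage.gaussian_filter(image, sigma, order=(2, 0)) * sigma**2
            h_bb = ndimage.gaussian_filter(image, sigma, order=(0, 2)) * sigma**2
            h_ab = ndimage.gaussian_filter(image, sigma, order=(1, 1)) * sigma**2
            # Closed-form eigenvalues of the symmetric 2x2 matrix [[h_aa, h_ab], [h_ab, h_bb]].
            tr = 0.5 * (h_aa + h_bb)
            disc = np.sqrt(0.25 * (h_aa - h_bb) ** 2 + h_ab**2)
            return tr - disc, tr + disc                # lam1 <= lam2

        def line_enhancement(image, sigmas=(1.0, 2.0, 4.0)):
            """Bright-line response: one strongly negative eigenvalue, one near zero."""
            best = np.zeros_like(image, dtype=float)
            for s in sigmas:
                lam1, lam2 = hessian_eigenvalues_2d(image, s)
                resp = np.where(lam1 < 0, np.abs(lam1) - np.abs(lam2), 0.0)
                best = np.maximum(best, resp)          # keep best response over scales
            return best

        # Toy demo: a bright horizontal line on a dark background.
        img = np.zeros((64, 64)); img[32, :] = 1.0
        enhanced = line_enhancement(ndimage.gaussian_filter(img, 1.0))
        print(enhanced[32].mean() > enhanced[10].mean())   # True: line is enhanced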

  13. Microbleed detection using automated segmentation (MIDAS): a new method applicable to standard clinical MR images.

    PubMed

    Seghier, Mohamed L; Kolanko, Magdalena A; Leff, Alexander P; Jäger, Hans R; Gregoire, Simone M; Werring, David J

    2011-03-23

    Cerebral microbleeds, visible on gradient-recalled echo (GRE) T2* MRI, have generated increasing interest as an imaging marker of small vessel diseases, with relevance for intracerebral bleeding risk or brain dysfunction. Manual rating methods have limited reliability and are time-consuming. We developed a new method for microbleed detection using automated segmentation (MIDAS) and compared it with a validated visual rating system. In thirty consecutive stroke service patients, standard GRE T2* images were acquired and manually rated for microbleeds by a trained observer. After spatially normalizing each patient's GRE T2* images into a standard stereotaxic space, the automated microbleed detection algorithm (MIDAS) identified cerebral microbleeds by explicitly incorporating an "extra" tissue class for abnormal voxels within a unified segmentation-normalization model. The agreement between manual and automated methods was assessed using the intraclass correlation coefficient (ICC) and Kappa statistic. We found that MIDAS had generally moderate to good agreement with the manual reference method for the presence of lobar microbleeds (Kappa = 0.43, improved to 0.65 after manual exclusion of obvious artefacts). Agreement for the number of microbleeds was very good for lobar regions: (ICC = 0.71, improved to ICC = 0.87). MIDAS successfully detected all patients with multiple (≥2) lobar microbleeds. MIDAS can identify microbleeds on standard MR datasets, and with an additional rapid editing step shows good agreement with a validated visual rating system. MIDAS may be useful in screening for multiple lobar microbleeds.

  14. Detection of statistical asymmetries in non-stationary sign time series: Analysis of foreign exchange data

    PubMed Central

    Takayasu, Hideki; Takayasu, Misako

    2017-01-01

    We extend the concept of statistical symmetry as the invariance of a probability distribution under transformation to analyze binary sign time series data of price differences from the foreign exchange market. We model segments of the sign time series as Markov sequences and apply a local hypothesis test to evaluate the symmetries of independence and time reversal in different periods of the market. For the test, we derive the probability of a binary Markov process generating a given set of symbol-pair counts. Using such analysis, we can not only segment the time series according to the different behaviors but also characterize the segments in terms of statistical symmetries. As a particular result, we find that the foreign exchange market is essentially time reversible but that this symmetry is broken when there is a strong external influence. PMID:28542208
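
    A simplified sketch of a time-reversal asymmetry check on a binary sign series: it compares the counts of a triplet pattern and its reversal (001 vs. 100) with a binomial test. This is a common stand-in for such checks, not the authors' exact Markov pair-count test, and the data here are random placeholders.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        signs = rng.integers(0, 2, size=5000)            # stand-in for price-sign data

        # Encode overlapping triplets as integers 0..7 and count each pattern.
        codes = signs[:-2] * 4 + signs[1:-1] * 2 + signs[2:]
        counts = np.bincount(codes, minlength=8)
        n001, n100 = counts[0b001], counts[0b100]

        # Under time reversibility, 001 and its reversal 100 are equally likely;
        # a two-sided binomial test flags a significant asymmetry.
        p = stats.binomtest(int(n001), int(n001 + n100), 0.5).pvalue
        print(f"n001 = {n001}, n100 = {n100}, p = {p:.3f}")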

  15. Fully automated segmentation of the pectoralis muscle boundary in breast MR images

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Filippatos, Konstantinos; Friman, Ola; Hahn, Horst K.

    2011-03-01

    Dynamic contrast-enhanced MRI (DCE-MRI) of the breast is emerging as a novel tool for early tumor detection and diagnosis. The segmentation of structures in breast DCE-MR images, such as the nipple, the breast-air boundary and the pectoralis muscle, serves as a fundamental step for further computer-assisted diagnosis (CAD) applications, e.g. breast density analysis. Moreover, previous clinical studies show that the distance between posterior breast lesions and the pectoralis muscle can be used to assess the extent of the disease. To enable automatic quantification of the distance from a breast tumor to the pectoralis muscle, a precise delineation of the pectoralis muscle boundary is required. We present a fully automatic segmentation method based on the second-derivative information represented by the Hessian matrix. The voxels proximal to the pectoralis muscle boundary exhibit roughly the same eigenvalue patterns as a sheet-like object in 3D, which can be enhanced and segmented by a Hessian-based sheetness filter. A vector-based connected-component filter is then utilized such that only the pectoralis muscle is preserved by extracting the largest connected component. The proposed method was evaluated quantitatively on a test data set of 30 breast MR images by measuring the average distances between the segmented boundary and the annotated surfaces in two ground-truth sets; the mean distance was 1.434 mm with a standard deviation of 0.4661 mm, which shows great potential for integrating the approach into the clinical routine.
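
    A small sketch of the final selection step named above: keeping only the largest connected component of a binary (sheetness-filtered) mask, using scipy.ndimage. The sheetness filter itself is not reproduced; the toy volume is illustrative.

        import numpy as np
        from scipy import ndimage

        def largest_component(mask):
            """Return a boolean mask of the largest 3D connected component."""
            labels, n = ndimage.label(mask)
            if n == 0:
                return np.zeros_like(mask, dtype=bool)
            sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
            return labels == (int(np.argmax(sizes)) + 1)

        # Toy volume: two blobs; only the larger one survives.
        vol = np.zeros((20, 20, 20), dtype=bool)
        vol[2:5, 2:5, 2:5] = True            # 27 voxels
        vol[10:18, 10:18, 10:18] = True      # 512 voxels
        print(largest_component(vol).sum())  # -> 512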

  16. Ventriculogram segmentation using boosted decision trees

    NASA Astrophysics Data System (ADS)

    McDonald, John A.; Sheehan, Florence H.

    2004-05-01

    Left ventricular status, reflected in ejection fraction or end-systolic volume, is a powerful prognostic indicator in heart disease. Quantitative analysis of these and other parameters from ventriculograms (cine x-rays of the left ventricle) is infrequently performed due to the labor required for manual segmentation. None of the many methods developed for automated segmentation has achieved clinical acceptance. We present a method for semi-automatic segmentation of ventriculograms based on a very accurate two-stage boosted decision-tree pixel classifier. The classifier determines which pixels are inside the ventricle at key ED (end-diastole) and ES (end-systole) frames. The test misclassification rate is about 1%. The classifier is semi-automatic, requiring a user to select 3 points in each frame: the endpoints of the aortic valve and the apex. The first classifier stage consists of two boosted decision trees, trained using features such as gray-level statistics (e.g. median brightness) and image geometry (e.g. coordinates relative to the 3 user-supplied points). Second-stage classifiers are trained using the same features as the first, plus the output of the first stage. Border pixels are determined from the segmented images using dilation and erosion. A curve is then fit to the border pixels, minimizing a penalty function that trades off fidelity to the border pixels against smoothness. ED and ES volumes, and ejection fraction, are estimated from the border curves using standard area-length formulas. On independent test data, the differences between automatic and manual volumes (and ejection fractions) are similar in size to the differences between two human observers.
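
    A schematic of the two-stage stacking idea described above, using scikit-learn's gradient-boosted trees as a stand-in for the authors' boosted decision trees; the features and labels are synthetic placeholders.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(1)
        n = 2000
        # Hypothetical per-pixel features: gray-level statistics and geometry
        # relative to the three user-supplied landmarks (valve endpoints, apex).
        X = rng.normal(size=(n, 5))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy "inside ventricle" label

        stage1 = GradientBoostingClassifier(n_estimators=100).fit(X, y)
        # Stage 2 sees the original features plus the stage-1 probability.
        X2 = np.column_stack([X, stage1.predict_proba(X)[:, 1]])
        stage2 = GradientBoostingClassifier(n_estimators=100).fit(X2, y)

        X_new = rng.normal(size=(5, 5))
        X2_new = np.column_stack([X_new, stage1.predict_proba(X_new)[:, 1]])
        print(stage2.predict(X2_new))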

  17. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collette, R.

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods. - Highlights: •Automated image processing can aid in the fuel qualification process. •Routines are developed to characterize fission gas bubbles in irradiated U–Mo fuel. •Frequency domain filtration effectively eliminates FIB curtaining artifacts. •Adaptive thresholding proved to be the most accurate segmentation method. •The techniques established are ready to be applied to large scale data extraction testing.
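
    The Sauvola rule named above is available off the shelf; a minimal sketch with scikit-image (synthetic image, hypothetical window size and void contrast), rather than the authors' MATLAB routines:

        import numpy as np
        from skimage.filters import threshold_sauvola

        rng = np.random.default_rng(2)
        image = rng.random((256, 256))                 # stand-in micrograph
        image[100:140, 100:140] -= 0.5                 # darker "void" region

        t = threshold_sauvola(image, window_size=25, k=0.2)
        voids = image < t                              # binary void mask
        porosity = voids.mean()                        # area fraction of voids
        print(f"void pixels: {voids.sum()}, porosity: {porosity:.3f}")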

  18. Cracking the Language Code: Neural Mechanisms Underlying Speech Parsing

    PubMed Central

    McNealy, Kristin; Mazziotta, John C.; Dapretto, Mirella

    2013-01-01

    Word segmentation, detecting word boundaries in continuous speech, is a critical aspect of language learning. Previous research in infants and adults demonstrated that a stream of speech can be readily segmented based solely on the statistical and speech cues afforded by the input. Using functional magnetic resonance imaging (fMRI), the neural substrate of word segmentation was examined on-line as participants listened to three streams of concatenated syllables, containing either statistical regularities alone, statistical regularities and speech cues, or no cues. Despite the participants’ inability to explicitly detect differences between the speech streams, neural activity differed significantly across conditions, with left-lateralized signal increases in temporal cortices observed only when participants listened to streams containing statistical regularities, particularly the stream containing speech cues. In a second fMRI study, designed to verify that word segmentation had implicitly taken place, participants listened to trisyllabic combinations that occurred with different frequencies in the streams of speech they just heard (“words,” 45 times; “partwords,” 15 times; “nonwords,” once). Reliably greater activity in left inferior and middle frontal gyri was observed when comparing words with partwords and, to a lesser extent, when comparing partwords with nonwords. Activity in these regions, taken to index the implicit detection of word boundaries, was positively correlated with participants’ rapid auditory processing skills. These findings provide a neural signature of on-line word segmentation in the mature brain and an initial model with which to study developmental changes in the neural architecture involved in processing speech cues during language learning. PMID:16855090

  19. 2D versus 3D in the kinematic analysis of the horse at the trot.

    PubMed

    Miró, F; Santos, R; Garrido-Castro, J L; Galisteo, A M; Medina-Carnicer, R

    2009-08-01

    The handled trot of three Lusitano Purebred stallions was analyzed by using 2D and 3D kinematic analysis methods. Using the same capture and analysis system, 2D and 3D data of several linear (stride length, maximal height of the hoof trajectories) and angular (angular range of motion, inclination of bone segments) variables were obtained. A paired Student t-test was performed in order to detect statistically significant differences between the data resulting from the two methodologies. With respect to the angular variables, there were significant differences in scapula inclination, shoulder angle, cannon inclination and protraction-retraction angle among the forelimb variables, but none of them were statistically different in the hind limb. Differences between the two methods were found in most of the linear variables analyzed.

  20. Detecting wood surface defects with fusion algorithm of visual saliency and local threshold segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng

    2018-04-01

    This paper presents a new method for wood defect detection that solves the over-segmentation problem of local threshold segmentation methods. The method effectively combines visual saliency and local threshold segmentation. Firstly, defect areas are coarsely located by using the spectral residual method to compute their global visual saliency. Then, threshold segmentation by the maximum inter-class variance (Otsu) method is adopted to precisely locate and segment the wood surface defects around the coarsely located areas. Lastly, we use mathematical morphology to process the binary images after segmentation, which reduces noise and removes small false objects. Experiments on test images of insect holes, dead knots and sound knots show that the proposed method obtains ideal segmentation results and is superior to existing segmentation methods based on edge detection, Otsu thresholding and local threshold segmentation.
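
    A compact sketch of the two-stage scheme described above: a spectral-residual saliency map (Hou & Zhang) followed by Otsu thresholding and a morphological opening. The image and parameters are illustrative, not the paper's.

        import numpy as np
        from scipy import ndimage
        from skimage.filters import threshold_otsu

        def spectral_residual_saliency(image):
            """Saliency map from the log-amplitude spectral residual."""
            f = np.fft.fft2(image)
            log_amp = np.log(np.abs(f) + 1e-12)
            phase = np.angle(f)
            residual = log_amp - ndimage.uniform_filter(log_amp, size=3)
            sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
            return ndimage.gaussian_filter(sal, sigma=2.5)

        rng = np.random.default_rng(3)
        board = 0.8 + 0.02 * rng.standard_normal((128, 128))   # clear wood surface
        board[60:70, 60:70] = 0.3                              # dark knot "defect"

        sal = spectral_residual_saliency(board)
        mask = sal > threshold_otsu(sal)                       # coarse defect region
        mask = ndimage.binary_opening(mask, iterations=1)      # morphological cleanup
        print(f"defect pixels found: {mask.sum()}")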

  1. Texture analysis with statistical methods for wheat ear extraction

    NASA Astrophysics Data System (ADS)

    Bakhouche, M.; Cointault, F.; Gouton, P.

    2007-01-01

    In the agronomic domain, the simplification of crop counting, necessary for yield prediction and agronomic studies, is an important project for technical institutes such as Arvalis. Although the main objective of our global project is to design a mobile robot for natural image acquisition directly in the field, Arvalis first asked us to detect the number of wheat ears in images by image processing before counting them, which will provide the first component of the yield. In this paper we compare different texture image segmentation techniques based on feature extraction by first- and higher-order statistical methods, applied to our images. The extracted features are used for unsupervised pixel classification to obtain the different classes in the image: the K-means algorithm is applied, followed by the choice of a threshold to highlight the ears. Three methods have been tested in this feasibility study, with an average error of 6%. Although the quality of the detection is currently evaluated visually, automatic evaluation algorithms are being implemented. Moreover, other higher-order statistical methods will be implemented in the future, jointly with methods based on spatio-frequential transforms and specific filtering.
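
    A minimal sketch of the unsupervised pixel-classification step: first-order statistics (local mean and variance) as per-pixel features, clustered with K-means. The window size and two-class setup are illustrative assumptions, not the paper's settings.

        import numpy as np
        from scipy import ndimage
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(4)
        image = rng.random((120, 120))
        image[40:80, 40:80] += 1.0          # a brighter, differently textured patch

        # First-order statistics in a local window as per-pixel features.
        mean = ndimage.uniform_filter(image, size=9)
        var = ndimage.uniform_filter(image**2, size=9) - mean**2
        features = np.stack([mean.ravel(), var.ravel()], axis=1)

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
        segmentation = labels.reshape(image.shape)
        print(np.bincount(labels))          # pixels per class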

  2. A new method for detecting small and dim targets in starry background

    NASA Astrophysics Data System (ADS)

    Yao, Rui; Zhang, Yanning; Jiang, Lei

    2011-08-01

    Detection of small visible optical space targets is one of the key issues in research on long-range early warning and space debris surveillance. The SNR (signal-to-noise ratio) of the target is very low because of the influence of the imaging device itself; random noise and background movement also increase the difficulty of target detection. In order to detect small visible optical space targets effectively and rapidly, we propose a novel detection method based on statistical theory. Firstly, we establish a reasonable statistical model of the visible optical space image. Secondly, we extract SIFT (Scale-Invariant Feature Transform) features from the image frames, calculate the transform relationship, and use it to compensate for the whole visual field's movement. Thirdly, the influence of stars is removed using the interframe difference method, and a segmentation threshold differentiating candidate targets from noise is found using Otsu's method. Finally, we calculate a statistical quantity to judge whether the target is present at every pixel position in the image. Theoretical analysis shows the relationship between false-alarm probability and detection probability at different SNRs. The experimental results show that this method detects targets efficiently, even targets passing in front of stars.
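
    A sketch of the motion-compensation step using OpenCV's SIFT features and RANSAC homography estimation. The function and variable names are ours, the Lowe ratio threshold is a conventional choice rather than the paper's, and frames are assumed to be 8-bit grayscale arrays.

        import cv2
        import numpy as np

        def compensate_motion(prev_frame, curr_frame):
            """Warp `prev_frame` into the coordinate frame of `curr_frame`."""
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(prev_frame, None)
            k2, d2 = sift.detectAndCompute(curr_frame, None)
            matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
            src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            h, w = curr_frame.shape[:2]
            return cv2.warpPerspective(prev_frame, H, (w, h))

        # After compensation, static stars cancel in the frame difference and the
        # remaining residuals can be thresholded (e.g., with Otsu's method).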

  3. Morphological image analysis for classification of gastrointestinal tissues using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Garcia-Allende, P. Beatriz; Amygdalos, Iakovos; Dhanapala, Hiruni; Goldin, Robert D.; Hanna, George B.; Elson, Daniel S.

    2012-01-01

    Computer-aided diagnosis of ophthalmic diseases using optical coherence tomography (OCT) relies on the extraction of thickness and size measures from the OCT images, but such well-defined layers are usually not observed in emerging OCT applications aimed at "optical biopsy", such as pulmonology or gastroenterology. Mathematical methods such as principal component analysis (PCA) or textural analyses, including both spatial textural analysis derived from the two-dimensional discrete Fourier transform (DFT) and statistical texture analysis obtained independently from center-symmetric auto-correlation (CSAC) and spatial grey-level dependency matrices (SGLDM), as well as quantitative measurements of the attenuation coefficient, have been previously proposed to overcome this problem. We recently proposed an alternative approach consisting of a region segmentation according to the intensity variation along the vertical axis and a purely statistical technique for feature quantification. OCT images were first segmented in the axial direction in an automated manner according to intensity. Afterwards, a morphological analysis of the segmented OCT images was employed to quantify the features that served for tissue classification. In this study, a PCA processing of the extracted features is accomplished to combine their discriminative power in a lower number of dimensions. Ready discrimination of gastrointestinal surgical specimens is attained, demonstrating that the approach further surpasses the algorithms previously reported and is feasible for tissue classification in the clinical setting.

  4. CONCENTRIC DECILE SEGMENTATION OF WHITE AND HYPOPIGMENTED AREAS IN DERMOSCOPY IMAGES OF SKIN LESIONS ALLOWS DISCRIMINATION OF MALIGNANT MELANOMA

    PubMed Central

    Dalal, Ankur; Moss, Randy H.; Stanley, R. Joe; Stoecker, William V.; Gupta, Kapil; Calcara, David A.; Xu, Jin; Shrestha, Bijaya; Drugge, Rhett; Malters, Joseph M.; Perry, Lindall A.

    2011-01-01

    Dermoscopy, also known as dermatoscopy or epiluminescence microscopy (ELM), permits visualization of features of pigmented melanocytic neoplasms that are not discernable by examination with the naked eye. White areas, prominent in early malignant melanoma and melanoma in situ, contribute to early detection of these lesions. An adaptive detection method has been investigated to identify white and hypopigmented areas based on lesion histogram statistics. Using the Euclidean distance transform, the lesion is segmented in concentric deciles. Overlays of the white areas on the lesion deciles are determined. Calculated features of automatically detected white areas include lesion decile ratios, normalized number of white areas, absolute and relative size of largest white area, relative size of all white areas, and white area eccentricity, dispersion, and irregularity. Using a back-propagation neural network, the white area statistics yield over 95% diagnostic accuracy of melanomas from benign nevi. White and hypopigmented areas in melanomas tend to be central or paracentral. The four most powerful features on multivariate analysis are lesion decile ratios. Automatic detection of white and hypopigmented areas in melanoma can be accomplished using lesion statistics. A neural network can achieve good discrimination of melanomas from benign nevi using these areas. Lesion decile ratios are useful white area features. PMID:21074971
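
    A small sketch of the concentric-decile construction described above, using the Euclidean distance transform from scipy; normalizing depth by its maximum is our assumption about how the deciles are defined.

        import numpy as np
        from scipy import ndimage

        def concentric_deciles(lesion_mask):
            """Label each lesion pixel 1..10 by normalized depth from the border."""
            dist = ndimage.distance_transform_edt(lesion_mask)
            depth = dist / dist.max()                     # ~0 at border, 1 at center
            deciles = np.ceil(depth * 10).astype(int)     # 1..10 inside the lesion
            deciles[~lesion_mask] = 0
            return np.clip(deciles, 0, 10)

        mask = np.zeros((101, 101), dtype=bool)
        yy, xx = np.ogrid[:101, :101]
        mask[(yy - 50) ** 2 + (xx - 50) ** 2 <= 45**2] = True   # circular "lesion"
        d = concentric_deciles(mask)
        print([int((d == k).sum()) for k in range(1, 11)])      # pixels per decile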

  5. Segmentation of prostate boundaries from ultrasound images using statistical shape model.

    PubMed

    Shen, Dinggang; Zhan, Yiqiang; Davatzikos, Christos

    2003-04-01

    This paper presents a statistical shape model for the automatic prostate segmentation in transrectal ultrasound images. A Gabor filter bank is first used to characterize the prostate boundaries in ultrasound images in both multiple scales and multiple orientations. The Gabor features are further reconstructed to be invariant to the rotation of the ultrasound probe and incorporated in the prostate model as image attributes for guiding the deformable segmentation. A hierarchical deformation strategy is then employed, in which the model adaptively focuses on the similarity of different Gabor features at different deformation stages using a multiresolution technique, i.e., coarse features first and fine features later. A number of successful experiments validate the algorithm.
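
    A sketch of a multiscale, multi-orientation Gabor feature bank with scikit-image. Pooling the magnitudes over orientation is one simple way to obtain rotation-invariant features; it is a stand-in for, not a reproduction of, the authors' invariant reconstruction.

        import numpy as np
        from skimage.filters import gabor

        def gabor_features(image, frequencies=(0.1, 0.2, 0.4), n_orientations=6):
            """Stack of Gabor magnitude responses, one per (frequency, orientation)."""
            responses = []
            for f in frequencies:
                for k in range(n_orientations):
                    theta = k * np.pi / n_orientations
                    real, imag = gabor(image, frequency=f, theta=theta)
                    responses.append(np.hypot(real, imag))
            return np.stack(responses, axis=-1)

        rng = np.random.default_rng(5)
        img = rng.random((64, 64))
        feats = gabor_features(img)                            # shape (64, 64, 18)
        # Rotation-invariant variant: pool magnitudes over orientations so the
        # feature does not change when the ultrasound probe rotates.
        invariant = feats.reshape(64, 64, 3, 6).max(axis=-1)   # max over orientations
        print(invariant.shape)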

  6. Individualized statistical learning from medical image databases: application to identification of brain lesions.

    PubMed

    Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos

    2014-04-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Individualized Statistical Learning from Medical Image Databases: Application to Identification of Brain Lesions

    PubMed Central

    Erus, Guray; Zacharaki, Evangelia I.; Davatzikos, Christos

    2014-01-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a “target-specific” feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject’s images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an “estimability” criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. PMID:24607564

  8. HealthStyles: a new psychographic segmentation system for health care marketers.

    PubMed

    Endresen, K W; Wintz, J C

    1988-01-01

    HealthStyles is a new psychographic segmentation system specifically designed for the health care industry. This segmentation system goes beyond traditional geographic and demographic analysis and examines health-related consumer attitudes and behaviors. Four statistically distinct "styles" of consumer health care preferences have been identified. The profiles of the four groups have substantial marketing implications in terms of design and promotion of products and services. Each segment of consumers also has differing expectations of physician behavior.

  9. Comparative study of Hsp27, GSK3β, Wnt1 and PRDX3 in Hirschsprung's disease.

    PubMed

    Gao, Hong; Liu, Xiaomei; Chen, Dong; Lv, Liangying; Wu, Mei; Mi, Jie; Wang, Weilin

    2014-06-01

    Hirschsprung's disease (HSCR) is a developmental disorder of the enteric nervous system characterized by aganglionosis in the distal gut. In this study, we used two-dimensional gel electrophoresis (2-DE) technology coupled with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) analysis to identify differentially expressed proteins in the aganglionic (stenotic) and ganglionic (normal) colon segment tissues from patients with HSCR. We identified 15 proteins with different expression levels between the stenotic and the normal colon segment tissues from patients with HSCR. Nine proteins were upregulated and six proteins downregulated in the stenotic colon segment tissues compared to the normal colon segment tissues. Based on their biological functions, we selected the upregulated protein Hsp27 and the downregulated protein PRDX3 to confirm their expression in 20 patients. The protein and mRNA expression of Hsp27 was significantly higher in the stenotic colon segment tissues than in the normal colon segment tissues, whereas the protein and mRNA expression of PRDX3 was significantly lower in the stenotic colon segment tissues than in the normal colon segment tissues. These findings of changes in mRNA and protein in tissues from patients with HSCR provide information that may be helpful in understanding the pathomechanism implicated in the disease. © 2014 The Authors. International Journal of Experimental Pathology © 2014 International Journal of Experimental Pathology.

  10. A semi-automatic method for left ventricle volume estimate: an in vivo validation study

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Lamberti, C.; Sarti, A.; Saracino, G.; Shiota, T.; Thomas, J. D.

    2001-01-01

    This study aims at validating the left ventricular (LV) volume estimates obtained by processing volumetric data with a segmentation model based on the level set technique. The validation was performed by comparing real-time volumetric echo data (RT3DE) and magnetic resonance (MRI) data. A validation protocol was defined and applied to twenty-four estimates (range 61-467 ml) obtained from normal and pathologic subjects who underwent both RT3DE and MRI. A statistical analysis was performed on each estimate and on clinical parameters such as stroke volume (SV) and ejection fraction (EF). Assuming the MRI estimates (x) as a reference, an excellent correlation was found with the volumes measured using the segmentation procedure (y) (y=0.89x + 13.78, r=0.98). The mean error on SV was 8 ml and the mean error on EF was 2%. This study demonstrated that the segmentation technique is reliably applicable to human hearts in clinical practice.

  11. [Comparative analysis of variable regions in the genomes of variola virus].

    PubMed

    Babkin, I V; Nepomniashchikh, T S; Maksiutov, R A; Gutorov, V V; Babkina, I N; Shchelkunov, S N

    2008-01-01

    Nucleotide sequences of two extended segments of the terminal variable regions in the variola virus genome were determined. The size of the left segment was 13.5 kbp and of the right, 10.5 kbp. In total, over 540 kbp were sequenced for 22 variola virus strains. The phylogenetic analysis conducted here and the data published earlier allowed us to find the interrelations between 70 variola virus isolates, the character of their clustering, and the degree of intergroup and intragroup variation of the clusters of variola virus strains. The most polymorphic loci of the genome segments studied were determined. It was demonstrated that these loci are localized either to noncoding genome regions or to regions of destroyed open reading frames characteristic of the ancestor virus. These loci are promising for developing a strategy for genotyping variola virus strains. Analysis of recombination using various methods demonstrated that, with only one exception, no statistically significant recombination events were detectable in the genomes of the variola virus strains studied.

  12. Hippocampal volume change measurement: quantitative assessment of the reproducibility of expert manual outlining and the automated methods FreeSurfer and FIRST.

    PubMed

    Mulder, Emma R; de Jong, Remko A; Knol, Dirk L; van Schijndel, Ronald A; Cover, Keith S; Visser, Pieter J; Barkhof, Frederik; Vrenken, Hugo

    2014-05-15

    To measure hippocampal volume change in Alzheimer's disease (AD) or mild cognitive impairment (MCI), expert manual delineation is often used because of its supposed accuracy. It has been suggested that expert outlining yields poorer reproducibility than automated methods, but this has not been investigated. To determine the reproducibilities of expert manual outlining and two common automated methods for measuring hippocampal atrophy rates in healthy aging, MCI and AD. From the Alzheimer's Disease Neuroimaging Initiative (ADNI), 80 subjects were selected: 20 patients with AD, 40 patients with mild cognitive impairment (MCI) and 20 healthy controls (HCs). Left and right hippocampal volume change between the baseline and month-12 visit was assessed by using expert manual delineation, and by the automated software packages FreeSurfer (longitudinal processing stream) and FIRST. To assess reproducibility of the measured hippocampal volume change, both back-to-back (BTB) MPRAGE scans available for each visit were analyzed. Hippocampal volume change was expressed in μL, and as a percentage of baseline volume. Reproducibility of the 1-year hippocampal volume change was estimated from the BTB measurements by using a linear mixed model to calculate the limits of agreement (LoA) of each method, reflecting its measurement uncertainty. Using the delta method, approximate p-values were calculated for the pairwise comparisons between methods. Statistical analyses were performed both with inclusion and exclusion of visibly incorrect segmentations. Visibly incorrect automated segmentation in either one or both scans of a longitudinal scan pair occurred in 7.5% of the hippocampi for FreeSurfer and in 6.9% of the hippocampi for FIRST. After excluding these failed cases, reproducibility analysis for 1-year percentage volume change yielded LoA of ±7.2% for FreeSurfer, ±9.7% for expert manual delineation, and ±10.0% for FIRST. Methods ranked the same for reproducibility of 1-year μL volume change, with LoA of ±218 μL for FreeSurfer, ±319 μL for expert manual delineation, and ±333 μL for FIRST. Approximate p-values indicated that reproducibility was better for FreeSurfer than for manual or FIRST, and that manual and FIRST did not differ. Inclusion of failed automated segmentations led to worsening of reproducibility of both automated methods for 1-year raw and percentage volume change. Quantitative reproducibility values of 1-year microliter and percentage hippocampal volume change were roughly similar between expert manual outlining, FIRST and FreeSurfer, but FreeSurfer reproducibility was statistically significantly superior to both manual outlining and FIRST after exclusion of failed segmentations. Copyright © 2014 Elsevier Inc. All rights reserved.
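
    A simplified stand-in for the reproducibility computation described above: limits of agreement estimated directly from back-to-back differences, Bland-Altman style, rather than via the linear mixed model; all numbers are synthetic.

        import numpy as np

        rng = np.random.default_rng(6)
        true_change = rng.normal(-3.0, 2.0, size=40)            # % volume change
        btb1 = true_change + rng.normal(0.0, 2.5, size=40)      # BTB scan 1 measurement
        btb2 = true_change + rng.normal(0.0, 2.5, size=40)      # BTB scan 2 measurement

        diff = btb1 - btb2
        loa = 1.96 * diff.std(ddof=1)   # 95% limits of agreement (zero-bias form)
        print(f"LoA: +/- {loa:.1f} % volume change")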

  13. Japanese migration in contemporary Japan: economic segmentation and interprefectural migration.

    PubMed

    Fukurai, H

    1991-01-01

    This paper examines the economic segmentation model in explaining 1985-86 Japanese interregional migration. The analysis takes advantage of statistical graphic techniques to illustrate the following substantive issues of interregional migration: (1) to examine whether economic segmentation significantly influences Japanese regional migration and (2) to explain socioeconomic characteristics of prefectures for both in- and out-migration. Analytic techniques include a latent structural equation (LISREL) methodology and statistical residual mapping. The residual dispersion patterns, for instance, suggest the extent to which socioeconomic and geopolitical variables explain migration differences by showing unique clusters of unexplained residuals. The analysis further points out that extraneous factors such as high residential land values, significant commuting populations, and regional-specific cultures and traditions need to be incorporated in the economic segmentation model in order to assess the extent of the model's reliability in explaining the pattern of interprefectural migration.

  14. Inferring action structure and causal relationships in continuous sequences of human action.

    PubMed

    Buchsbaum, Daphna; Griffiths, Thomas L; Plunkett, Dillon; Gopnik, Alison; Baldwin, Dare

    2015-02-01

    In the real world, causal variables do not come pre-identified or occur in isolation, but instead are embedded within a continuous temporal stream of events. A challenge faced by both human learners and machine learning algorithms is identifying subsequences that correspond to the appropriate variables for causal inference. A specific instance of this problem is action segmentation: dividing a sequence of observed behavior into meaningful actions, and determining which of those actions lead to effects in the world. Here we present a Bayesian analysis of how statistical and causal cues to segmentation should optimally be combined, as well as four experiments investigating human action segmentation and causal inference. We find that both people and our model are sensitive to statistical regularities and causal structure in continuous action, and are able to combine these sources of information in order to correctly infer both causal relationships and segmentation boundaries. Copyright © 2014. Published by Elsevier Inc.

  15. Influence of meteorological conditions on hospital admission in patients with acute coronary syndrome with and without ST-segment elevation: Results of the AIRACOS study.

    PubMed

    Dominguez-Rodriguez, A; Juarez-Prera, R A; Rodríguez, S; Abreu-Gonzalez, P; Avanzas, P

    2016-05-01

    To evaluate whether meteorological parameters affect admissions in patients with ST-segment and non-ST-segment elevation ACS. A prospective cohort study was carried out in the Coronary Care Unit of Hospital Universitario de Canarias. We studied a total of 307 consecutive patients with a diagnosis of ST-segment or non-ST-segment elevation ACS. We analyzed the average concentrations of particulate matter smaller than 10 and 2.5 μm in diameter, particulate black carbon, the concentrations of gaseous pollutants, and the meteorological parameters (wind speed, temperature, relative humidity and atmospheric pressure) to which patients were exposed from one day up to 7 days prior to admission. None. Demographic, clinical, atmospheric particle, gaseous pollutant and meteorological parameters. A total of 138 (45%) patients were classified as ST-segment and 169 (55%) as non-ST-segment elevation ACS. There were no statistically significant differences in exposure to atmospheric particles between the two groups. Regarding meteorological data, we did not find statistically significant differences, except for higher atmospheric pressure in ST-segment elevation ACS (999.6±2.6 vs. 998.8±2.5 mbar, P=.008). Multivariate analysis showed that atmospheric pressure was a significant predictor of ST-segment elevation ACS presentation (OR: 1.14, 95% CI: 1.04-1.24, P=.004). In patients who suffer ACS, higher atmospheric pressure during the week before the event increases the risk of ST-segment elevation ACS. Copyright © 2015 Elsevier España, S.L.U. and SEMICYUC. All rights reserved.

  16. A quantitative study of nanoparticle skin penetration with interactive segmentation.

    PubMed

    Lee, Onseok; Lee, See Hyun; Jeong, Sang Hoon; Kim, Jaeyoung; Ryu, Hwa Jung; Oh, Chilhwan; Son, Sang Wook

    2016-10-01

    In the last decade, the application of nanotechnology techniques has expanded within diverse areas such as pharmacology, medicine, and optical science. Despite such wide-ranging possibilities for implementation into practice, the mechanisms behind nanoparticle skin absorption remain unknown; moreover, the main mode of investigation has been qualitative analysis. Using interactive segmentation, this study suggests a method of objectively and quantitatively analyzing the mechanisms underlying the skin absorption of nanoparticles. Silica nanoparticles (SNPs) were assessed using transmission electron microscopy and applied to a human skin equivalent model. Captured fluorescence images of this model were used to evaluate degrees of skin penetration. These images underwent interactive segmentation and image processing in addition to statistical quantitative analyses of calculated image parameters including the mean, integrated density, skewness, kurtosis, and area fraction. In images from both groups, the distribution area and intensity of fluorescent silica gradually increased in proportion to time. Statistical significance was reached after 2 days in the negative charge group but only after 4 days in the positive charge group, indicating a difference in time course. Furthermore, the quantity of silica per unit area showed a dramatic change after 6 days in the negative charge group. Although this quantitative result is consistent with the qualitative assessment, it is meaningful in that it was established by statistical analysis of quantities derived through image processing. The present study suggests that the surface charge of SNPs could play an important role in the percutaneous absorption of NPs. These findings can help achieve a better understanding of the percutaneous transport of NPs. In addition, these results provide important guidance for the design of NPs for biomedical applications.
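
    The image parameters listed above are straightforward to compute; a minimal sketch with numpy/scipy on a synthetic image (the threshold defining the area fraction is a hypothetical choice):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        image = rng.gamma(2.0, 20.0, size=(256, 256))    # stand-in fluorescence image
        threshold = 60.0                                 # hypothetical signal cutoff

        pixels = image.ravel()
        params = {
            "mean": pixels.mean(),
            "integrated_density": pixels.sum(),
            "skewness": stats.skew(pixels),
            "kurtosis": stats.kurtosis(pixels),
            "area_fraction": (pixels > threshold).mean(),
        }
        print(params)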

  17. Active contours on statistical manifolds and texture segmentation

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman

    2005-01-01

    A new approach to active contours on statistical manifolds is presented. The statistical manifolds are 2- dimensional Riemannian manifolds that are statistically defined by maps that transform a parameter domain onto a set of probability density functions. In this novel framework, color or texture features are measured at each image point and their statistical...

  18. A mathematical analysis to address the 6 degree-of-freedom segmental power imbalance.

    PubMed

    Ebrahimi, Anahid; Collins, John D; Kepple, Thomas M; Takahashi, Kota Z; Higginson, Jill S; Stanhope, Steven J

    2018-01-03

    Segmental power is used in human movement analyses to indicate the source and net rate of energy transfer between the rigid bodies of biomechanical models. Segmental power calculations are performed using segment endpoint dynamics (kinetic method). A theoretically equivalent method is to measure the rate of change in a segment's mechanical energy state (kinematic method). However, these two methods have not produced experimentally equivalent results for segments proximal to the foot, with the difference between methods termed the "power imbalance." In a 6 degree-of-freedom model, segments move independently, resulting in relative segment endpoint displacement and non-equivalent segment endpoint velocities at a joint. In the kinetic method, a segment's distal-end translational velocity may be defined either at the anatomical end of the segment or at the location of the joint center (defined here as the proximal end of the adjacent distal segment). Our mathematical derivations revealed that the power imbalance between the kinetic method using the anatomical definition and the kinematic method can be explained by power due to relative segment endpoint displacement. In this study, we tested this analytical prediction with experimental gait data from nine healthy subjects walking at a typical speed. The average absolute segmental power imbalance was reduced from 0.023-0.046 W/kg using the anatomical definition to ≤0.001 W/kg using the joint center definition in the kinetic method (a 95.56-98.39% reduction). Power due to relative segment endpoint displacement in segmental power analyses is substantial and should be considered in analyzing energetic flow into and between segments. Copyright © 2017 Elsevier Ltd. All rights reserved.
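
    A numerical illustration of the two calculations being compared, for a single segment at one instant; all quantities are invented, and using the joint-center velocity at the distal end follows the definition the study found to remove most of the imbalance.

        import numpy as np

        # Kinetic method: power from forces and moments acting at the two
        # segment endpoints, with the distal-end translational velocity taken
        # at the joint center.
        F_prox = np.array([10.0, 0.0, 50.0]);  v_prox = np.array([0.9, 0.0, 0.1])
        F_dist = np.array([-8.0, 0.0, -45.0]); v_dist = np.array([1.0, 0.0, 0.0])
        M_prox = np.array([0.0, 2.0, 0.0]);    M_dist = np.array([0.0, -1.5, 0.0])
        omega = np.array([0.0, 1.5, 0.0])      # segment angular velocity
        p_kinetic = (F_prox @ v_prox + F_dist @ v_dist + (M_prox + M_dist) @ omega)

        # Kinematic method: finite-difference rate of change of the segment's
        # mechanical energy (translational + rotational + potential).
        def mech_energy(m, I, v_com, w, h, g=9.81):
            return 0.5 * m * v_com @ v_com + 0.5 * w @ (I * w) + m * g * h

        m, I, dt = 3.0, np.array([0.02, 0.05, 0.05]), 0.01
        E0 = mech_energy(m, I, np.array([0.90, 0.0, 0.05]), omega, h=0.450)
        E1 = mech_energy(m, I, np.array([0.92, 0.0, 0.04]), omega, h=0.451)
        p_kinematic = (E1 - E0) / dt
        # For dynamically consistent data the two values agree; their gap is
        # the "power imbalance" analyzed in the paper.
        print(p_kinetic, p_kinematic)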

  19. [Fixed appliance therapy in patients with impaired occlusion in the anterior part of the maxilla].

    PubMed

    Matthews-Brzozowska, Teresa; Pobol-Aidi, Małgorzata; Cudziło, Dorota

    2015-03-01

    Malocclusions in the anterior segment of the maxilla and mandible are easily visible not only to dentists but also to doctors of other specialties. Early diagnosis and appropriate therapy are important not only for occlusion but also for aesthetic reasons. The aim of the paper is to evaluate the anterior segment of the maxilla and mandible in patients with malocclusion in this region and correct occlusion in the lateral segments. Medical documentation, i.e. medical history, extra- and intraoral radiograms, diagnostic casts, and panoramic and lateral cephalometric radiograms, of patients aged 7-12 diagnosed with malocclusion in the anterior segment of the maxilla and mandible and treated with a fixed sectional appliance and facemask was analyzed. Descriptive and cephalometric features were analyzed before (T1) and after (T2) treatment in 25 children, together with the differences between the status before and after treatment and the extent of change between T1 and T2. Statistical analysis of the mean values of selected metrical features before (T1) and after (T2) treatment revealed that all metrical features concerning the soft, bony and dental tissues determining the facial profile and the shape of the bony and dental structures changed and reached values closer to the population norm for the selected features. The changes were statistically significant (p<0.0001). Treatment with a fixed sectional appliance and facemask resulted in statistically significant improvement in the parameters investigated, which demonstrates the applicability of this therapy in the treatment of the anterior maxillary segment in patients with mixed dentition. © 2015 MEDPRESS.

  20. The Correlation between Insertion Depth of Prodisc-C Artificial Disc and Postoperative Kyphotic Deformity: Clinical Importance of Insertion Depth of Artificial Disc.

    PubMed

    Lee, Do-Youl; Kim, Se-Hoon; Suh, Jung-Keun; Cho, Tai-Hyoung; Chung, Yong-Gu

    2012-09-01

    This study was designed to investigate the correlation between the insertion depth of the artificial disc and postoperative kyphotic deformity after Prodisc-C total disc replacement surgery, and the range of artificial disc insertion depths effective in preventing postoperative whole-cervical or segmental kyphotic deformity. A retrospective radiological analysis was performed in 50 patients who had undergone single-level total disc replacement surgery. Records were reviewed to obtain demographic data. Preoperative and postoperative radiographs were assessed to determine the C2-7 Cobb's angle and segmental angle and to investigate postoperative kyphotic deformity. A formula was introduced to calculate the insertion depth of the Prodisc-C artificial disc. Statistical analysis was performed to assess the correlation between the insertion depth of the Prodisc-C artificial disc and postoperative kyphotic deformity, and to estimate the insertion depth of the Prodisc-C artificial disc that prevents postoperative kyphotic deformity. No significant statistical correlation was observed between the insertion depth of the Prodisc-C artificial disc and postoperative kyphotic deformity with regard to the C2-7 Cobb's angle, but a statistical correlation was observed with regard to the segmental angle (p<0.05). A proper insertion depth of the Prodisc-C artificial disc effective in preventing postoperative kyphotic deformity could not be estimated. Postoperative segmental kyphotic deformity is associated with the insertion depth of the Prodisc-C artificial disc: an anteriorly located artificial disc leads to a lordotic segmental angle and a posteriorly located artificial disc leads to a kyphotic segmental angle postoperatively. However, the C2-7 Cobb's angle is not affected by the artificial disc location after surgery.

  1. Automatically measuring brain ventricular volume within PACS using artificial intelligence.

    PubMed

    Yepes-Calderon, Fernando; Nelson, Marvin D; McComb, J Gordon

    2018-01-01

    The picture archiving and communications system (PACS) is currently the standard platform for managing medical images but lacks analytical capabilities. Staying within PACS, the authors have developed an automatic method to retrieve the medical data and access it, decrypted and uncompressed, at the voxel level, which enables analytical capabilities without perturbing the system's daily operation. Additionally, the strategy is secure and vendor-independent. Cerebral ventricular volume is important for the diagnosis and treatment of many neurological disorders. A significant change in ventricular volume is readily recognized, but subtle changes, especially over longer periods of time, may be difficult to discern. Clinical imaging protocols and parameters are often varied, making it difficult to use a general solution with standard segmentation techniques. Presented is a segmentation strategy based on an algorithm that uses four features extracted from the medical images to create a statistical estimator capable of determining ventricular volume. When compared with manual segmentations, the correlation was 94%, and the approach holds promise for even better accuracy as more of the available data are incorporated. The volume of any segmentable structure can be accurately determined utilizing the machine learning strategy presented, which runs fully automatically within PACS.

  2. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collette, R.; King, J.; Keiser, Jr., D.

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.
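
    The segmentation stage lends itself to a short sketch with scikit-image's Sauvola implementation standing in for the authors' MATLAB routines; the input file name, window size, and k are assumed values, and the pre-processing chain is reduced to a single bilateral denoising step.

    ```python
    import numpy as np
    from skimage import io, measure, restoration
    from skimage.filters import threshold_sauvola

    image = io.imread("micrograph.png", as_gray=True).astype(float)   # hypothetical input
    smoothed = restoration.denoise_bilateral(image, sigma_spatial=2)  # edge-preserving denoise

    # Sauvola thresholding: the threshold adapts to the local mean and standard
    # deviation, which stabilizes segmentation in variable-contrast regions.
    voids = smoothed < threshold_sauvola(smoothed, window_size=25, k=0.2)

    labels = measure.label(voids)  # connected components = individual voids
    sizes = np.array([r.area for r in measure.regionprops(labels)])
    print("void count:", labels.max())
    print("mean void size (px):", sizes.mean() if sizes.size else 0.0)
    print("porosity:", voids.mean())
    ```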

  3. CsSNP: A Web-Based Tool for the Detecting of Comparative Segments SNPs.

    PubMed

    Wang, Yi; Wang, Shuangshuang; Zhou, Dongjie; Yang, Shuai; Xu, Yongchao; Yang, Chao; Yang, Long

    2016-07-01

    SNP (single nucleotide polymorphism) analysis is a popular tool for the study of genetic diversity, evolution, and other areas. It is therefore desirable to have a convenient, useful, robust, rapid, and open-source SNP-detection tool available to all researchers. Because the detection of SNPs requires special software and a series of steps including alignment, detection, analysis, and presentation, the study of SNPs has been limited for nonprofessional users. CsSNP (Comparative segments SNP, http://biodb.sdau.edu.cn/cssnp/ ) is a freely available web tool based on the Blat, Blast, and Perl programs that detects comparative segment SNPs and shows detailed information about them. The results are filtered and presented in statistical figures and a Gbrowse map. The platform contains the reference genomic sequences and coding sequences of 60 plant species, and provides new opportunities for users to detect SNPs easily. CsSNP offers nonprofessional users a convenient way to find comparative segment SNPs in their own sequences, gives them information and analysis of the SNPs, and displays these data in a dynamic map. It provides a new method to detect SNPs and may accelerate related studies.

  4. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE PAGES

    Collette, R.; King, J.; Keiser, Jr., D.; ...

    2016-06-08

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.

  5. Adapting Active Shape Models for 3D segmentation of tubular structures in medical images.

    PubMed

    de Bruijne, Marleen; van Ginneken, Bram; Viergever, Max A; Niessen, Wiro J

    2003-07-01

    Active Shape Models (ASM) have proven to be an effective approach for image segmentation. In some applications, however, the linear model of gray level appearance around a contour that is used in ASM is not sufficient for accurate boundary localization. Furthermore, the statistical shape model may be too restricted if the training set is limited. This paper describes modifications to both the shape and the appearance model of the original ASM formulation. Shape model flexibility is increased, for tubular objects, by modeling the axis deformation independently of the cross-sectional deformation, and by adding supplementary cylindrical deformation modes. Furthermore, a novel appearance modeling scheme that effectively deals with a highly varying background is developed. In contrast with the conventional ASM approach, the new appearance model is trained on both boundary and non-boundary points, and the probability that a given point belongs to the boundary is estimated non-parametrically. The methods are evaluated on the complex task of segmenting thrombus in abdominal aortic aneurysms (AAA). Shape approximation errors were successfully reduced using the two shape model extensions. Segmentation using the new appearance model significantly outperformed the original ASM scheme; average volume errors are 5.1% and 45%, respectively.

  6. Steganalysis based on reducing the differences of image statistical characteristics

    NASA Astrophysics Data System (ADS)

    Wang, Ran; Niu, Shaozhang; Ping, Xijian; Zhang, Tao

    2018-04-01

    Compared with the embedding process, image content has a more significant impact on the differences in image statistical characteristics. This makes image steganalysis a classification problem with larger within-class scatter distances and smaller between-class scatter distances; as a result, the steganalysis features become inseparable owing to the differences in image statistical characteristics. In this paper, a new steganalysis framework is proposed that reduces the differences in image statistical characteristics caused by varied content and processing methods. The given images are segmented into several sub-images according to texture complexity. Steganalysis features are extracted separately from each subset with the same or similar texture complexity to build a classifier. The final steganalysis result is obtained through a weighted fusion process. Theoretical analysis and experimental results demonstrate the validity of the framework.
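
    The grouping-then-fusion structure can be sketched as follows; the block size, complexity proxy (local variance), bin edges, and fusion weights are all illustrative assumptions, and the actual steganalysis feature extraction is out of scope.

    ```python
    import numpy as np

    def complexity(block):
        return float(block.var())  # crude texture-complexity proxy

    def bin_blocks_by_complexity(img, bs=64, edges=(10.0, 50.0)):
        """Split img into bs-by-bs blocks and bin them by texture complexity."""
        bins = {i: [] for i in range(len(edges) + 1)}
        for y in range(0, img.shape[0] - bs + 1, bs):
            for x in range(0, img.shape[1] - bs + 1, bs):
                block = img[y:y + bs, x:x + bs]
                bins[int(np.digitize(complexity(block), edges))].append(block)
        return bins  # one classifier would be trained per bin

    def fuse(scores, weights):
        """Weighted fusion of the per-bin classifier scores into one decision."""
        w = np.asarray(weights, dtype=float)
        return float(np.dot(scores, w / w.sum())) > 0.5
    ```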

  7. Wheel load cycle tag for rail : final report.

    DOT National Transportation Integrated Search

    2015-12-01

    The Federal Railroad Administration (FRA) has determined that there is a research need to collect and analyze statistical usage data to help ascertain the cumulative load-induced fatigue on rail track segments. The estimation of rail segment burden...

  8. Assessing the effects of lumbar posterior stabilization and fusion to vertebral bone density in stabilized and adjacent segments by using Hounsfield unit

    PubMed Central

    Öksüz, Erol; Deniz, Fatih Ersay; Demir, Osman

    2017-01-01

    Background Computed tomography (CT) with Hounsfield units (HU) is being used with increasing frequency for determining bone density, and correlations between HU and bone density have been established in the literature. The aim of this retrospective study was to determine the bone density changes of the stabilized and adjacent-segment vertebral bodies by comparing HU values before and after lumbar posterior stabilization. Methods Sixteen patients with a similar diagnosis of lumbar spondylosis and stenosis were evaluated in this study. The same surgical procedure, L2-3-4-5 transpedicular screw fixation with fusion and L3-4 total laminectomy, was performed on all patients. Bone mineral density measurements were obtained with clinical CT from the stabilized and adjacent-segment vertebral bodies. Densities of the vertebral bodies were evaluated in HU before the surgeries and approximately one year afterwards, and the preoperative HU value of each vertebra was compared with the postoperative HU value of the same vertebra using statistical analysis. Results The HU values of the vertebrae in the stabilized and adjacent segments consistently decreased after the operations. There were significant differences between the preoperative and postoperative HU values of all evaluated vertebral bodies in the stabilized and adjacent segments. Additionally, first sacral vertebra HU values were found to be significantly higher than lumbar vertebra HU values both preoperatively and postoperatively. Conclusions The decrease in bone density of the adjacent-segment vertebral bodies may be one of the major predisposing factors for adjacent segment disease (ASD). PMID:29354730

  9. Metric Learning to Enhance Hyperspectral Image Segmentation

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Castano, Rebecca; Bue, Brian; Gilmore, Martha S.

    2013-01-01

    Unsupervised hyperspectral image segmentation can reveal spatial trends that show the physical structure of the scene to an analyst. Such segmentations highlight borders and reveal areas of homogeneity and change. Segmentations are independently helpful for object recognition and assist with automated production of symbolic maps. Additionally, a good segmentation can dramatically reduce the number of effective spectra in an image, enabling analyses that would otherwise be computationally prohibitive. Specifically, using an over-segmentation of the image instead of individual pixels can reduce noise and potentially improve the results of statistical post-analysis. In this innovation, a metric learning approach is presented to improve the performance of unsupervised hyperspectral image segmentation. The prototype demonstrations attempt a superpixel segmentation in which the image is conservatively over-segmented; that is, single surface features may be split into multiple segments, but each individual segment, or superpixel, is ensured to have homogeneous mineralogy.

  10. Tumor segmentation on FDG-PET: usefulness of locally connected conditional random fields

    NASA Astrophysics Data System (ADS)

    Nishio, Mizuho; Kono, Atsushi K.; Koyama, Hisanobu; Nishii, Tatsuya; Sugimura, Kazuro

    2015-03-01

    This study aimed to develop software for tumor segmentation on 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET). To segment the tumor from the background, we used graph cut, whose segmentation energy is generally divided into two terms: the unary and pairwise terms. Locally connected conditional random fields (LCRF) were proposed for the pairwise term. In LCRF, a three-dimensional cubic window with length L is set for each voxel, and voxels within the window are considered for the pairwise term. To evaluate our method, 64 clinically suspected metastatic bone tumors revealed by FDG-PET were tested. To obtain ground truth, the tumors were manually delineated via consensus of two board-certified radiologists. For comparison with LCRF, other types of segmentation were also applied: region growing based on 35%, 40%, and 45% of the tumor maximum standardized uptake value (RG35, RG40, and RG45, respectively), SLIC superpixels (SS), and region-based active contour models (AC). To validate tumor segmentation accuracy, the Dice similarity coefficient (DSC) was calculated between the manual segmentation and the result of each technique, and the DSC differences were tested using the Wilcoxon signed rank test. The mean DSCs of LCRF at L = 3, 5, 7, and 9 were 0.784, 0.801, 0.809, and 0.812, respectively. The mean DSCs of the other techniques were RG35, 0.633; RG40, 0.675; RG45, 0.689; SS, 0.709; and AC, 0.758. The DSC differences between LCRF and the other techniques were statistically significant (p < 0.05). In conclusion, tumor segmentation was performed more reliably with LCRF than with the other techniques.
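
    The Dice similarity coefficient used as the accuracy measure here (and in several of the surrounding records) is a one-liner over boolean masks, DSC = 2|A∩B| / (|A|+|B|):

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary masks."""
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
    ```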

  11. Human brain atlas for automated region of interest selection in quantitative susceptibility mapping: application to determine iron content in deep gray matter structures.

    PubMed

    Lim, Issel Anne L; Faria, Andreia V; Li, Xu; Hsu, Johnny T C; Airan, Raag D; Mori, Susumu; van Zijl, Peter C M

    2013-11-15

    The purpose of this paper is to extend the single-subject Eve atlas from Johns Hopkins University, which currently contains diffusion tensor and T1-weighted anatomical maps, by including contrast based on quantitative susceptibility mapping. The new atlas combines a "deep gray matter parcellation map" (DGMPM) derived from a single-subject quantitative susceptibility map with the previously established "white matter parcellation map" (WMPM) from the same subject's T1-weighted and diffusion tensor imaging data into an MNI coordinate map named the "Everything Parcellation Map in Eve Space," also known as the "EvePM." It allows automated segmentation of gray matter and white matter structures. Quantitative susceptibility maps from five healthy male volunteers (30 to 33 years of age) were coregistered to the Eve Atlas with AIR and Large Deformation Diffeomorphic Metric Mapping (LDDMM), and the transformation matrices were applied to the EvePM to produce automated parcellation in subject space. Parcellation accuracy was measured with a kappa analysis for the left and right structures of six deep gray matter regions. For multi-orientation QSM images, the Kappa statistic was 0.85 between automated and manual segmentation, with the inter-rater reproducibility Kappa being 0.89 for the human raters, suggesting "almost perfect" agreement between all segmentation methods. Segmentation seemed slightly more difficult for human raters on single-orientation QSM images, with the Kappa statistic being 0.88 between automated and manual segmentation, and 0.85 and 0.86 between human raters. Overall, this atlas provides a time-efficient tool for automated coregistration and segmentation of quantitative susceptibility data to analyze many regions of interest. These data were used to establish a baseline for normal magnetic susceptibility measurements for over 60 brain structures of 30- to 33-year-old males. Correlating the average susceptibility with age-based iron concentrations in gray matter structures measured by Hallgren and Sourander (1958) allowed interpolation of the average iron concentration of several deep gray matter regions delineated in the EvePM. Copyright © 2013 Elsevier Inc. All rights reserved.
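
    The Kappa agreement statistic reported above is chance-corrected agreement between two labelings; a minimal sketch with scikit-learn and toy labels:

    ```python
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    # Toy per-structure labels from an automated and a manual parcellation.
    auto = np.array([0, 1, 1, 2, 2, 3, 4, 5, 5, 5])
    manual = np.array([0, 1, 1, 2, 3, 3, 4, 5, 5, 4])
    print(cohen_kappa_score(auto, manual))  # 0.81-1.00 reads as "almost perfect"
    ```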

  12. Human brain atlas for automated region of interest selection in quantitative susceptibility mapping: application to determine iron content in deep gray matter structures

    PubMed Central

    Lim, Issel Anne L.; Faria, Andreia V.; Li, Xu; Hsu, Johnny T.C.; Airan, Raag D.; Mori, Susumu; van Zijl, Peter C. M.

    2013-01-01

    The purpose of this paper is to extend the single-subject Eve atlas from Johns Hopkins University, which currently contains diffusion tensor and T1-weighted anatomical maps, by including contrast based on quantitative susceptibility mapping. The new atlas combines a “deep gray matter parcellation map” (DGMPM) derived from a single-subject quantitative susceptibility map with the previously established “white matter parcellation map” (WMPM) from the same subject’s T1-weighted and diffusion tensor imaging data into an MNI coordinate map named the “Everything Parcellation Map in Eve Space,” also known as the “EvePM.” It allows automated segmentation of gray matter and white matter structures. Quantitative susceptibility maps from five healthy male volunteers (30 to 33 years of age) were coregistered to the Eve Atlas with AIR and Large Deformation Diffeomorphic Metric Mapping (LDDMM), and the transformation matrices were applied to the EvePM to produce automated parcellation in subject space. Parcellation accuracy was measured with a kappa analysis for the left and right structures of six deep gray matter regions. For multi-orientation QSM images, the Kappa statistic was 0.85 between automated and manual segmentation, with the inter-rater reproducibility Kappa being 0.89 for the human raters, suggesting “almost perfect” agreement between all segmentation methods. Segmentation seemed slightly more difficult for human raters on single-orientation QSM images, with the Kappa statistic being 0.88 between automated and manual segmentation, and 0.85 and 0.86 between human raters. Overall, this atlas provides a time-efficient tool for automated coregistration and segmentation of quantitative susceptibility data to analyze many regions of interest. These data were used to establish a baseline for normal magnetic susceptibility measurements for over 60 brain structures of 30- to 33-year-old males. Correlating the average susceptibility with age-based iron concentrations in gray matter structures measured by Hallgren and Sourander (1958) allowed interpolation of the average iron concentration of several deep gray matter regions delineated in the EvePM. PMID:23769915

  13. Automatic Segmentation of the Eye in 3D Magnetic Resonance Imaging: A Novel Statistical Shape Model for Treatment Planning of Retinoblastoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciller, Carlos, E-mail: carlos.cillerruiz@unil.ch; Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern; Centre d’Imagerie BioMédicale, University of Lausanne, Lausanne

    Purpose: Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning of retinoblastoma in infants, where it serves as a source of information complementary to fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy for MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. Methods and Materials: Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. Results: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. Conclusion: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.

  14. Intervention Therapy for Portal Vein Stenosis/Occlusion After Pediatric Liver Transplantation.

    PubMed

    Gao, Haijun; Wang, Hao; Chen, Guang; Yi, Zhengjia

    2017-04-18

    BACKGROUND The aim of this study was to investigate the outcomes and the stent implantation timing of portal vein stenosis intervention after pediatric liver transplantation (pLT). MATERIAL AND METHODS The clinical data of 30 children with post-liver transplantation portal vein stenosis/occlusion (PVS/O) between Jan 2008 and Jun 2015 were retrospectively analyzed. Re-opening was achieved with balloon angioplasty or stent implantation. SPSS 13.0 software was used for statistical analysis, with paired t tests of the pressure gradient across the stenosis, the diameter and flow rate within the stenosis, the platelet count, and albumin in the PVS children before and after balloon angioplasty; p<0.05 was considered statistically significant. Among the 30 patients, 6 received a stent implant as their first treatment, 22 received balloon angioplasty as their first treatment, and in 2 re-opening could not be achieved. RESULTS The diameter of the stenotic segment, portal vein velocity, pressure gradient across the stenosis, and platelet count in these children with portal vein stenosis/occlusion (PVS/O) showed statistically significant differences before versus after intervention (p<0.05), but albumin showed no statistically significant difference (p>0.05). CONCLUSIONS Intervention therapy for portal vein stenosis after pediatric liver transplantation (pLT-PVS) is a safe and effective treatment, and patients with portal vein torsion, intimal tearing, or long-segment portal vein occlusion should undergo stent implantation.
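
    The paired comparison described (the same patients measured before and after balloon angioplasty) corresponds to a paired t-test; a sketch with SciPy in place of SPSS, using placeholder numbers rather than study data:

    ```python
    import numpy as np
    from scipy import stats

    pre = np.array([3.1, 2.8, 3.5, 2.9, 3.3])   # e.g. stenotic-segment diameter (mm)
    post = np.array([5.0, 4.6, 5.4, 4.8, 5.1])  # same patients after angioplasty
    t, p = stats.ttest_rel(pre, post)           # paired t-test
    print(f"t = {t:.2f}, p = {p:.4f}")          # p < 0.05 -> significant change
    ```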

  15. Continuous EEG signal analysis for asynchronous BCI application.

    PubMed

    Hsu, Wei-Yen

    2011-08-01

    In this study, we propose a two-stage recognition system for continuous analysis of electroencephalogram (EEG) signals. Independent component analysis (ICA) and correlation coefficients are used to automatically eliminate electrooculography (EOG) artifacts. Based on the continuous wavelet transform (CWT) and Student's two-sample t-statistics, active segment selection then detects the location of the active segment in the time-frequency domain. Next, multiresolution fractal feature vectors (MFFVs) are extracted with the proposed modified fractal dimension from wavelet data. Finally, a support vector machine (SVM) is adopted for robust classification of the MFFVs. The EEG signals are continuously analyzed in 1-s segments, advancing 0.5 s at a time, to simulate asynchronous BCI operation within the two-stage recognition architecture. Each segment is first recognized as lifted or not in the first stage and, if recognized as lifting, is classified as left or right finger lifting in the second stage. Several statistical analyses are used to evaluate the performance of the proposed system. The results indicate that it is a promising system for asynchronous BCI applications.
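
    The overlapped-window, two-stage control flow can be sketched as below; feature extraction (CWT plus fractal features) is abstracted into an `extract_mffv` callable and the two SVMs are assumed pre-trained, so the names and signatures here are illustrative:

    ```python
    import numpy as np

    def sliding_windows(signal, fs, win_s=1.0, step_s=0.5):
        """Yield 1-s windows advanced by 0.5 s, as in the two-stage system."""
        win, step = int(win_s * fs), int(step_s * fs)
        for start in range(0, len(signal) - win + 1, step):
            yield signal[start:start + win]

    def classify_stream(signal, fs, extract_mffv, lift_svm, side_svm):
        decisions = []
        for seg in sliding_windows(signal, fs):
            f = np.asarray(extract_mffv(seg)).reshape(1, -1)
            if lift_svm.predict(f)[0] == 1:               # stage 1: lifted or not
                decisions.append(side_svm.predict(f)[0])  # stage 2: left or right
            else:
                decisions.append(None)                    # idle segment
        return decisions
    ```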

  16. Effect of extended exposure to frequency-altered feedback on stuttering during reading and monologue.

    PubMed

    Armson, J; Stuart, A

    1998-06-01

    An ABA time series design was used to examine the effect of extended, continuous exposure to frequency-altered auditory feedback (FAF) during an oral reading and monologue task on stuttering frequency and speech rate. Twelve adults who stutter participated. A statistically significant decrease in number of stuttering events, an increase in number of syllables produced, and a decrease in percent stuttering was observed during the experimental segment relative to baseline segments for the oral reading task. In the monologue task, there were no statistically significant differences for the number of stuttering events, number of syllables produced, or percent stuttering between the experimental and baseline segments. Varying individual patterns of response to FAF were evident during the experimental segment of the reading task: a large consistent reduction in stuttering, an initial reduction followed by fluctuations in amount of stuttering, and essentially no change in stuttering frequency. Ten of 12 participants showed no reduction in stuttering frequency during the experimental segment of the monologue task. These findings have ramifications both for the clinical utilization of FAF and for theoretical explanations of fluency-enhancement.

  17. Statistical Learning Is Related to Early Literacy-Related Skills

    ERIC Educational Resources Information Center

    Spencer, Mercedes; Kaschak, Michael P.; Jones, John L.; Lonigan, Christopher J.

    2015-01-01

    It has been demonstrated that statistical learning, or the ability to use statistical information to learn the structure of one's environment, plays a role in young children's acquisition of linguistic knowledge. Although most research on statistical learning has focused on language acquisition processes, such as the segmentation of words from…

  18. Statistical model of laminar structure for atlas-based segmentation of the fetal brain from in utero MR images

    NASA Astrophysics Data System (ADS)

    Habas, Piotr A.; Kim, Kio; Chandramohan, Dharshan; Rousseau, Francois; Glenn, Orit A.; Studholme, Colin

    2009-02-01

    Recent advances in MR and image analysis allow for reconstruction of high-resolution 3D images from clinical in utero scans of the human fetal brain. Automated segmentation of tissue types from MR images (MRI) is a key step in the quantitative analysis of brain development. Conventional atlas-based methods for adult brain segmentation are limited in their ability to accurately delineate complex structures of developing tissues from fetal MRI. In this paper, we formulate a novel geometric representation of the fetal brain aimed at capturing the laminar structure of developing anatomy. The proposed model uses a depth-based encoding of tissue occurrence within the fetal brain and provides an additional anatomical constraint in a form of a laminar prior that can be incorporated into conventional atlas-based EM segmentation. Validation experiments are performed using clinical in utero scans of 5 fetal subjects at gestational ages ranging from 20.5 to 22.5 weeks. Experimental results are evaluated against reference manual segmentations and quantified in terms of Dice similarity coefficient (DSC). The study demonstrates that the use of laminar depth-encoded tissue priors improves both the overall accuracy and precision of fetal brain segmentation. Particular refinement is observed in regions of the parietal and occipital lobes where the DSC index is improved from 0.81 to 0.82 for cortical grey matter, from 0.71 to 0.73 for the germinal matrix, and from 0.81 to 0.87 for white matter.

  19. An Unsupervised Change Detection Method Using Time-Series of PolSAR Images from Radarsat-2 and GaoFen-3.

    PubMed

    Liu, Wensong; Yang, Jie; Zhao, Jinqi; Shi, Hongtao; Yang, Le

    2018-02-12

    Traditional unsupervised change detection methods based on the pixel level can only detect changes between two different times with the same sensor, and the results are easily affected by speckle noise. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. Firstly, the overall difference image of the time-series PolSAR data is calculated by omnibus test statistics, and difference images between any two images at different times are acquired by the Rj test statistic. Secondly, the difference images are segmented with a Generalized Statistical Region Merging (GSRM) algorithm, which can suppress the effect of speckle noise. A Generalized Gaussian Mixture Model (GGMM) is then used to obtain the time-series change detection maps in the final step of the proposed method. To verify the effectiveness of the proposed method, we carried out a change detection experiment using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, China. Results show that the proposed method can not only detect time-series changes from different sensors but also better suppress the influence of speckle noise, improving the overall accuracy and Kappa coefficient.
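
    The final labeling step admits a compact sketch: fit a two-component mixture to the difference-image values and call the high-mean component "changed". scikit-learn's ordinary Gaussian mixture is used here as a stand-in for the paper's generalized Gaussian mixture model:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def change_map(diff_img):
        """Label each pixel of a difference image as changed/unchanged."""
        x = diff_img.reshape(-1, 1).astype(float)
        gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
        changed = int(np.argmax(gmm.means_.ravel()))  # component with larger mean
        return (gmm.predict(x) == changed).reshape(diff_img.shape)
    ```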

  20. Automatic segmentation of fluorescence lifetime microscopy images of cells using multiresolution community detection--a first study.

    PubMed

    Hu, D; Sarder, P; Ronhovde, P; Orthaus, S; Achilefu, S; Nussinov, Z

    2014-01-01

    Inspired by a multiresolution community detection based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Furthermore, using the proposed method, the mean-square error in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The multiresolution community detection method appeared to perform better than a popular spectral clustering-based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in mean-square error with increasing resolution. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.

  1. Automatic Segmentation of Fluorescence Lifetime Microscopy Images of Cells Using Multi-Resolution Community Detection -A First Study

    PubMed Central

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar

    2014-01-01

    Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Further, using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410

  2. A supervised learning approach for Crohn's disease detection using higher-order image statistics and a novel shape asymmetry measure.

    PubMed

    Mahapatra, Dwarikanath; Schueffler, Peter; Tielbeek, Jeroen A W; Buhmann, Joachim M; Vos, Franciscus M

    2013-10-01

    The increasing incidence of Crohn's disease (CD) in the Western world has made its accurate diagnosis an important medical challenge. The current reference standard for diagnosis, colonoscopy, is time-consuming and invasive, while magnetic resonance imaging (MRI) has emerged as the preferred noninvasive alternative. Current MRI approaches assess the rate of contrast enhancement and bowel wall thickness, and rely on extensive manual segmentation for accurate analysis. We propose a supervised learning method for the identification and localization of regions in abdominal magnetic resonance images that have been affected by CD. Low-level features such as intensity and texture are used together with shape asymmetry information to distinguish between diseased and normal regions. Particular emphasis is laid on a novel entropy-based shape asymmetry method and on higher-order statistics such as skewness and kurtosis. Multi-scale feature extraction renders the method robust. Experiments on real patient data show that our features achieve a high level of accuracy and perform better than two competing methods.
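
    The higher-order intensity statistics named above (skewness and kurtosis, alongside lower-order moments) are directly available in SciPy; computing them per image patch is a plausible minimal version of the feature extraction, with the multi-scale patch handling left out:

    ```python
    import numpy as np
    from scipy.stats import kurtosis, skew

    def patch_statistics(patch):
        """First- through fourth-order intensity statistics of one patch."""
        v = np.asarray(patch, dtype=float).ravel()
        return np.array([v.mean(), v.std(), skew(v), kurtosis(v)])
    ```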

  3. Image segmentation using association rule features.

    PubMed

    Rushing, John A; Ranganath, Heggere; Hinke, Thomas H; Graves, Sara J

    2002-01-01

    A new type of texture feature based on association rules is described. Association rules have been used in applications such as market basket analysis to capture relationships present among items in large data sets. It is shown that association rules can be adapted to capture frequently occurring local structures in images. The frequency of occurrence of these structures can be used to characterize texture. Methods for the segmentation of textured images based on association rule features are described. Simulation results using images consisting of man-made and natural textures show that association rule features perform well compared to other widely used texture features. Association rule features are used to detect cumulus cloud fields in GOES satellite images and are found to achieve higher accuracy than other statistical texture features for this problem.

  4. Review methods for image segmentation from computed tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik

    Image segmentation is a challenging process when accuracy, automation, and robustness are required, especially in medical images. Many segmentation methods can be applied to medical images, but not all of them are suitable. For medical purposes, the aims of image segmentation are to study anatomical structure, identify regions of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of methods for segmentation of Computed Tomography (CT) images. CT images have characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths, and the problems they incur are defined and explained. Knowing the suitable segmentation method is necessary in order to obtain accurate segmentation, and this paper can serve as a guide for researchers choosing a segmentation method, especially for images from CT scans.

  5. Active shape models unleashed

    NASA Astrophysics Data System (ADS)

    Kirschner, Matthias; Wesarg, Stefan

    2011-03-01

    Active Shape Models (ASMs) are a popular family of segmentation algorithms which combine local appearance models for boundary detection with a statistical shape model (SSM). They are especially popular in medical imaging due to their ability for fast and accurate segmentation of anatomical structures even in large and noisy 3D images. A well-known limitation of ASMs is that the shape constraints are over-restrictive, because the segmentations are bounded by the Principal Component Analysis (PCA) subspace learned from the training data. To overcome this limitation, we propose a new energy minimization approach which combines an external image energy with an internal shape model energy. Our shape energy uses the Distance From Feature Space (DFFS) concept to allow deviations from the PCA subspace in a theoretically sound and computationally fast way. In contrast to previous approaches, our model does not rely on post-processing with constrained free-form deformation or additional complex local energy models. In addition to the energy minimization approach, we propose a new method for liver detection, a new method for initializing an SSM and an improved k-Nearest Neighbour (kNN) classifier for boundary detection. Our ASM is evaluated with leave-one-out tests on a data set with 34 tomographic CT scans of the liver and is compared to an ASM with standard shape constraints. The quantitative results of our experiments show that we achieve higher segmentation accuracy with our energy minimization approach than with standard shape constraints.

  6. Automated identification of the lung contours in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Nery, F.; Silvestre Silva, J.; Ferreira, N. C.; Caramelo, F. J.; Faustino, R.

    2013-03-01

    Positron Emission Tomography (PET) is a nuclear medicine imaging technique that permits three-dimensional analysis of physiological processes in vivo. One of the areas where PET has demonstrated its advantages is the staging of lung cancer, where it offers better sensitivity and specificity than other techniques such as CT. On the other hand, accurate segmentation, an important procedure for Computer Aided Diagnostics (CAD) and automated image analysis, is a challenging task given the low spatial resolution and high noise that are intrinsic characteristics of PET images. This work presents an algorithm for the segmentation of lungs in PET images, to be used in CAD and group analysis in a large patient database. The lung boundaries are automatically extracted from a PET volume through the application of a marker-driven watershed segmentation procedure that is robust to noise. In order to test the effectiveness of the proposed method, we compared the segmentation results in several slices using our approach with those obtained from manual delineation. The manual delineation was performed by nuclear medicine physicians using a software routine that we developed specifically for this task. To quantify the similarity between the contours obtained from the two methods, we used figures of merit based on region as well as contour definitions. Results show that the performance of the algorithm was similar to the performance of human physicians. Additionally, we found that the algorithm-physician agreement is statistically similar to the inter-physician agreement.
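
    A marker-driven watershed in the spirit of the method can be sketched with scikit-image; the marker-placement rule below (thresholding the intensity extremes) and the threshold values are simplifying assumptions, not the paper's procedure:

    ```python
    import numpy as np
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    def segment_by_markers(pet_slice, lo=0.2, hi=0.6):
        """Flood a gradient image from background/object seed markers."""
        relief = sobel(pet_slice)               # edges form the watershed relief
        markers = np.zeros_like(pet_slice, dtype=int)
        markers[pet_slice < lo] = 1             # background seeds
        markers[pet_slice > hi] = 2             # object seeds
        return watershed(relief, markers) == 2  # object mask
    ```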

  7. Adaptive segmentation of cerebrovascular tree in time-of-flight magnetic resonance angiography.

    PubMed

    Hao, J T; Li, M L; Tang, F L

    2008-01-01

    Accurate segmentation of the human vasculature is an important prerequisite for a number of clinical procedures, such as diagnosis, image-guided neurosurgery, and pre-surgical planning. In this paper, an improved statistical approach for extracting the whole cerebrovascular tree in time-of-flight magnetic resonance angiography is proposed. Firstly, in order to obtain a more accurate segmentation result, a localized observation model is proposed instead of defining the observation model over the entire dataset. Secondly, for the binary segmentation, an improved Iterative Conditional Model (ICM) algorithm is presented to accelerate the segmentation process. The experimental results show that the proposed algorithm obtains more satisfactory segmentation results while requiring less processing time than conventional approaches.
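
    For reference, ICM-style binary segmentation iteratively assigns each voxel the label minimizing a local energy that combines a Gaussian data term with an Ising smoothness prior; the 2D sketch below, with illustrative parameters, shows the structure of one such update loop:

    ```python
    import numpy as np

    def icm_binary(img, init_labels, mu, sigma, beta=1.0, iters=5):
        """ICM for two classes with means mu[l], stds sigma[l], smoothness beta."""
        lab = init_labels.copy()
        for _ in range(iters):
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    nbrs = (lab[y-1, x], lab[y+1, x], lab[y, x-1], lab[y, x+1])
                    best, best_e = lab[y, x], np.inf
                    for l in (0, 1):
                        data = (img[y, x] - mu[l]) ** 2 / (2 * sigma[l] ** 2)
                        smooth = beta * sum(n != l for n in nbrs)
                        if data + smooth < best_e:
                            best, best_e = l, data + smooth
                    lab[y, x] = best
        return lab
    ```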

  8. Three-dimensional segmentation of luminal and adventitial borders in serial intravascular ultrasound images

    NASA Technical Reports Server (NTRS)

    Shekhar, R.; Cothren, R. M.; Vince, D. G.; Chandra, S.; Thomas, J. D.; Cornhill, J. F.

    1999-01-01

    Intravascular ultrasound (IVUS) provides exact anatomy of arteries, allowing accurate quantitative analysis. Automated segmentation of IVUS images is a prerequisite for routine quantitative analyses. We present a new three-dimensional (3D) segmentation technique, called active surface segmentation, which detects luminal and adventitial borders in IVUS pullback examinations of coronary arteries. The technique was validated against expert tracings by computing correlation coefficients (range 0.83-0.97) and William's index values (range 0.37-0.66). The technique was statistically accurate, robust to image artifacts, and capable of segmenting a large number of images rapidly. Active surface segmentation enabled geometrically accurate 3D reconstruction and visualization of coronary arteries and volumetric measurements.

  9. Sequence-independent construction of ordered combinatorial libraries with predefined crossover points.

    PubMed

    Jézéquel, Laetitia; Loeper, Jacqueline; Pompon, Denis

    2008-11-01

    Combinatorial libraries coding for mosaic enzymes with predefined crossover points constitute useful tools to address and model structure-function relationships and for functional optimization of enzymes based on multivariate statistics. The presented method, called sequence-independent generation of a chimera-ordered library (SIGNAL), allows easy shuffling of any predefined amino acid segment between two or more proteins. This method is particularly well adapted to the exchange of protein structural modules. The procedure could also be well suited to generate ordered combinatorial libraries independent of sequence similarities in a robotized manner. Sequence segments to be recombined are first extracted by PCR from a single-stranded template coding for an enzyme of interest using a biotin-avidin-based method. This technique allows the reduction of parental template contamination in the final library. Specific PCR primers allow amplification of two complementary mosaic DNA fragments, overlapping in the region to be exchanged. Fragments are finally reassembled using a fusion PCR. The process is illustrated via the construction of a set of mosaic CYP2B enzymes using this highly modular approach.

  10. An Energy-Based Three-Dimensional Segmentation Approach for the Quantitative Interpretation of Electron Tomograms

    PubMed Central

    Bartesaghi, Alberto; Sapiro, Guillermo; Subramaniam, Sriram

    2006-01-01

    Electron tomography allows for the determination of the three-dimensional structures of cells and tissues at resolutions significantly higher than that which is possible with optical microscopy. Electron tomograms contain, in principle, vast amounts of information on the locations and architectures of large numbers of subcellular assemblies and organelles. The development of reliable quantitative approaches for the analysis of features in tomograms is an important problem, and a challenging prospect due to the low signal-to-noise ratios that are inherent to biological electron microscopic images. This is, in part, a consequence of the tremendous complexity of biological specimens. We report on a new method for the automated segmentation of HIV particles and selected cellular compartments in electron tomograms recorded from fixed, plastic-embedded sections derived from HIV-infected human macrophages. Individual features in the tomogram are segmented using a novel robust algorithm that finds their boundaries as global minimal surfaces in a metric space defined by image features. The optimization is carried out in a transformed spherical domain with the center an interior point of the particle of interest, providing a proper setting for the fast and accurate minimization of the segmentation energy. This method provides tools for the semi-automated detection and statistical evaluation of HIV particles at different stages of assembly in the cells and presents opportunities for correlation with biochemical markers of HIV infection. The segmentation algorithm developed here forms the basis of the automated analysis of electron tomograms and will be especially useful given the rapid increases in the rate of data acquisition. It could also enable studies of much larger data sets, such as those which might be obtained from the tomographic analysis of HIV-infected cells from studies of large populations. PMID:16190467

  11. Bladder Cancer Segmentation in CT for Treatment Response Assessment: Application of Deep-Learning Convolution Neural Network-A Pilot Study.

    PubMed

    Cha, Kenny H; Hadjiiski, Lubomir M; Samala, Ravi K; Chan, Heang-Ping; Cohan, Richard H; Caoili, Elaine M; Paramagul, Chintana; Alva, Ajjai; Weizer, Alon Z

    2016-12-01

    Assessing the response of bladder cancer to neoadjuvant chemotherapy is crucial for reducing morbidity and increasing quality of life of patients. Change in tumor volume during treatment is generally used to predict treatment outcome. We are developing a method for bladder cancer segmentation in CT using a pilot data set of 62 cases. 65 000 regions of interest were extracted from pre-treatment CT images to train a deep-learning convolution neural network (DL-CNN) for tumor boundary detection using leave-one-case-out cross-validation. The results were compared to our previous AI-CALS method. For all lesions in the data set, the longest diameter and its perpendicular were measured by two radiologists, and 3D manual segmentation was obtained from one radiologist. The World Health Organization (WHO) criteria and the Response Evaluation Criteria In Solid Tumors (RECIST) were calculated, and the prediction accuracy of complete response to chemotherapy was estimated by the area under the receiver operating characteristic curve (AUC). The AUCs were 0.73 ± 0.06, 0.70 ± 0.07, and 0.70 ± 0.06, respectively, for the volume change calculated using DL-CNN segmentation, AI-CALS, and the manual contours. The differences did not achieve statistical significance. The AUCs using the WHO criteria were 0.63 ± 0.07 and 0.61 ± 0.06, while the AUCs using RECIST were 0.65 ± 0.07 and 0.63 ± 0.06 for the two radiologists, respectively. Our results indicate that DL-CNN can produce accurate bladder cancer segmentation for calculation of tumor size change in response to treatment. The volume change performed better than the estimations from the WHO criteria and RECIST for the prediction of complete response.

  12. Bladder Cancer Segmentation in CT for Treatment Response Assessment: Application of Deep-Learning Convolution Neural Network—A Pilot Study

    PubMed Central

    Cha, Kenny H.; Hadjiiski, Lubomir M.; Samala, Ravi K.; Chan, Heang-Ping; Cohan, Richard H.; Caoili, Elaine M.; Paramagul, Chintana; Alva, Ajjai; Weizer, Alon Z.

    2017-01-01

    Assessing the response of bladder cancer to neoadjuvant chemotherapy is crucial for reducing morbidity and increasing quality of life of patients. Change in tumor volume during treatment is generally used to predict treatment outcome. We are developing a method for bladder cancer segmentation in CT using a pilot data set of 62 cases. 65 000 regions of interest were extracted from pre-treatment CT images to train a deep-learning convolution neural network (DL-CNN) for tumor boundary detection using leave-one-case-out cross-validation. The results were compared to our previous AI-CALS method. For all lesions in the data set, the longest diameter and its perpendicular were measured by two radiologists, and 3D manual segmentation was obtained from one radiologist. The World Health Organization (WHO) criteria and the Response Evaluation Criteria In Solid Tumors (RECIST) were calculated, and the prediction accuracy of complete response to chemotherapy was estimated by the area under the receiver operating characteristic curve (AUC). The AUCs were 0.73 ± 0.06, 0.70 ± 0.07, and 0.70 ± 0.06, respectively, for the volume change calculated using DL-CNN segmentation, AI-CALS, and the manual contours. The differences did not achieve statistical significance. The AUCs using the WHO criteria were 0.63 ± 0.07 and 0.61 ± 0.06, while the AUCs using RECIST were 0.65 ± 0.07 and 0.63 ± 0.06 for the two radiologists, respectively. Our results indicate that DL-CNN can produce accurate bladder cancer segmentation for calculation of tumor size change in response to treatment. The volume change performed better than the estimations from the WHO criteria and RECIST for the prediction of complete response. PMID:28105470

  13. A minimally interactive method to segment enlarged lymph nodes in 3D thoracic CT images using a rotatable spiral-scanning technique

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Moltz, Jan H.; Bornemann, Lars; Hahn, Horst K.

    2012-03-01

    Precise size measurement of enlarged lymph nodes is a significant indicator for diagnosing malignancy, follow-up, and therapy monitoring of cancer diseases. The presence of diverse sizes and shapes, inhomogeneous enhancement, and adjacency to neighboring structures with similar intensities make the segmentation task challenging. We present a semi-automatic approach requiring minimal user interaction to quickly and robustly segment enlarged lymph nodes. First, a stroke approximating the largest diameter of a specific lymph node is drawn manually, from which a volume of interest (VOI) is determined. Second, based on statistical analysis of the intensities in the dilated stroke area, a region growing procedure is utilized within the VOI to create an initial segmentation of the target lymph node. Third, a rotatable spiral-scanning technique is proposed to resample the 3D boundary surface of the lymph node into a 2D boundary contour in a transformed polar image. The boundary contour is found by seeking the optimal path in the 2D polar image with a dynamic programming algorithm and is eventually transformed back to 3D. Ultimately, the boundary surface of the lymph node is determined using an interpolation scheme followed by post-processing steps. To test the robustness and efficiency of our method, a quantitative evaluation was conducted with a dataset of 315 lymph nodes acquired from 79 patients with lymphoma and melanoma. Compared to the reference segmentations, an average Dice coefficient of 0.88 with a standard deviation of 0.08, and an average absolute surface distance of 0.54 mm with a standard deviation of 0.48 mm, were achieved.
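
    The dynamic-programming step admits a compact sketch: in the unrolled polar image, find the minimum-cost path that picks one radius per angle, restricting the radius step between neighboring angles to one pixel for smoothness. The cost image itself (e.g., inverted gradient magnitude) is assumed given:

    ```python
    import numpy as np

    def optimal_polar_path(cost):
        """Minimum-cost boundary radius per angle in an (n_angles, n_radii) cost image."""
        n_ang, n_rad = cost.shape
        acc = cost.astype(float).copy()
        back = np.zeros_like(acc, dtype=int)
        for a in range(1, n_ang):
            for r in range(n_rad):
                lo, hi = max(0, r - 1), min(n_rad, r + 2)  # +/-1 pixel radius step
                prev = acc[a - 1, lo:hi]
                back[a, r] = lo + int(np.argmin(prev))
                acc[a, r] += prev.min()
        path = [int(np.argmin(acc[-1]))]                   # backtrack from last angle
        for a in range(n_ang - 1, 0, -1):
            path.append(back[a, path[-1]])
        return np.array(path[::-1])
    ```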

  14. Vectorization of optically sectioned brain microvasculature: learning aids completion of vascular graphs by connecting gaps and deleting open-ended segments.

    PubMed

    Kaufhold, John P; Tsai, Philbert S; Blinder, Pablo; Kleinfeld, David

    2012-08-01

    A graph of tissue vasculature is an essential requirement to model the exchange of gasses and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by "learned threshold relaxation"; (2) removes spurious segments by "learning to eliminate deletion candidate strands"; and (3) enforces consistency in the joint space of learned vascular graph corrections through "consistency learning." Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with >800³ voxels, obtained with all-optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision-recall curves, we find that learning with bagged boosted decision trees reduces equal-error rates for threshold relaxation by 5-21% and for strand elimination by 18-57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. Overall, learning is shown to more than halve the total error rate, and therefore the human time spent manually correcting such vectorizations. Copyright © 2012 Elsevier B.V. All rights reserved.
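
    "Bagged boosted decision trees" maps naturally onto a bagging ensemble whose base estimator is itself a boosted-tree model; the sketch below uses scikit-learn with synthetic stand-in features and labels, since the paper's graph-derived features are not reproduced here (the `estimator` keyword is `base_estimator` in scikit-learn versions before 1.2):

    ```python
    import numpy as np
    from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
    from sklearn.metrics import precision_recall_curve
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 6))            # stand-in geometry/topology features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in operator accept/reject labels
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clf = BaggingClassifier(
        estimator=GradientBoostingClassifier(n_estimators=50),  # boosted trees,
        n_estimators=10,                                        # bagged 10 times
    ).fit(X_tr, y_tr)

    # Precision-recall analysis, as used to validate the learned corrections.
    prec, rec, thr = precision_recall_curve(y_te, clf.predict_proba(X_te)[:, 1])
    ```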

  15. Motion-aware stroke volume quantification in 4D PC-MRI data of the human aorta.

    PubMed

    Köhler, Benjamin; Preim, Uta; Grothoff, Matthias; Gutberlet, Matthias; Fischbach, Katharina; Preim, Bernhard

    2016-02-01

    4D PC-MRI enables the noninvasive measurement of time-resolved, three-dimensional blood flow data that allow quantification of the hemodynamics. Stroke volumes are essential to assess the cardiac function and evolution of different cardiovascular diseases. The calculation depends on the wall position and vessel orientation, which both change during the cardiac cycle due to the heart muscle contraction and the pumped blood. However, current systems for the quantitative 4D PC-MRI data analysis neglect the dynamic character and instead employ a static 3D vessel approximation. We quantify differences between stroke volumes in the aorta obtained with and without consideration of its dynamics. We describe a method that uses the approximating 3D segmentation to automatically initialize segmentation algorithms that require regions inside and outside the vessel for each temporal position. This enables the use of graph cuts to obtain 4D segmentations, extract vessel surfaces including centerlines for each temporal position and derive motion information. The stroke volume quantification is compared using measuring planes in static (3D) vessels, planes with fixed angulation inside dynamic vessels (this corresponds to the common 2D PC-MRI) and moving planes inside dynamic vessels. Seven datasets with different pathologies such as aneurysms and coarctations were evaluated in close collaboration with radiologists. Compared to the experts' manual stroke volume estimations, motion-aware quantification performs, on average, 1.57% better than calculations without motion consideration. The mean difference between stroke volumes obtained with the different methods is 7.82%. Automatically obtained 4D segmentations overlap by 85.75% with manually generated ones. Incorporating motion information in the stroke volume quantification yields slight but not statistically significant improvements. The presented method is feasible for the clinical routine, since computation times are low and essential parts run fully automatically. The 4D segmentations can be used for other algorithms as well. The simultaneous visualization and quantification may support the understanding and interpretation of cardiac blood flow.
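
    The quantity being compared is, in each variant, the through-plane velocity flux integrated over the cardiac cycle, SV = Σ_t Σ_p v⊥(t, p) · A_pixel · Δt; a minimal sketch, assuming the velocities have already been sampled on the (static or moving) measuring plane:

    ```python
    import numpy as np

    def stroke_volume(velocities, pixel_area_cm2, dt_s):
        """velocities: (n_phases, n_pixels) through-plane components in cm/s."""
        flux_per_phase = velocities.sum(axis=1) * pixel_area_cm2  # cm^3/s per phase
        return float(flux_per_phase.sum() * dt_s)                 # cm^3 (= ml) per cycle
    ```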

  16. Interactive contour delineation of organs at risk in radiotherapy: Clinical evaluation on NSCLC patients.

    PubMed

    Dolz, J; Kirişli, H A; Fechter, T; Karnitzki, S; Oehlke, O; Nestle, U; Vermandel, M; Massoptier, L

    2016-05-01

    Accurate delineation of organs at risk (OARs) on computed tomography (CT) images is required for radiation treatment planning (RTP). Because manual delineation of OARs is time consuming and prone to high interobserver variability, many (semi-)automatic methods have been proposed. However, most of them are specific to a particular OAR. Here, an interactive computer-assisted system able to segment the various OARs required for thoracic radiation therapy is introduced. Segmentation information (foreground and background seeds) is interactively added by the user in any of the three main orthogonal views of the CT volume and is subsequently propagated within the whole volume. The proposed method is based on the combination of the watershed transformation and the graph-cuts algorithm, which is used as a powerful optimization technique to minimize the energy function. The OARs considered for thoracic radiation therapy are the lungs, spinal cord, trachea, proximal bronchus tree, heart, and esophagus. The method was evaluated on multivendor CT datasets of 30 patients. Two radiation oncologists participated in the study, and manual delineations from the original RTP were used as ground truth for evaluation. Delineations of the OARs obtained with the minimally interactive approach were approved as usable for RTP in nearly 90% of the cases, excluding the esophagus, whose segmentation was mostly rejected; this led to a gain of time ranging from 50% to 80% in RTP. Considering exclusively accepted cases, over all OARs, a Dice similarity coefficient higher than 0.7 and a Hausdorff distance below 10 mm with respect to the ground truth were achieved. In addition, the interobserver analysis did not highlight any statistically significant difference, with the exception of the segmentation of the heart, in terms of Hausdorff distance and volume difference. An interactive, accurate, fast, and easy-to-use computer-assisted system able to segment various OARs required for thoracic radiation therapy has been presented and clinically evaluated. The introduction of the proposed system into clinical routine may offer a valuable new option to radiation oncologists in performing RTP.

  17. Vectorization of optically sectioned brain microvasculature: Learning aids completion of vascular graphs by connecting gaps and deleting open-ended segments

    PubMed Central

    Kaufhold, John P.; Tsai, Philbert S.; Blinder, Pablo; Kleinfeld, David

    2012-01-01

    A graph of tissue vasculature is an essential requirement for modeling the exchange of gases and nutrients between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high-dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by “learned threshold relaxation”; (2) removes spurious segments by “learning to eliminate deletion candidate strands”; and (3) enforces consistency in the joint space of learned vascular graph corrections through “consistency learning.” Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with > 800³ voxels, obtained with all-optical histology of mouse tissue and vectorized by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision-recall curves, we find that learning with bagged boosted decision trees reduces equal-error rates for threshold relaxation by 5 to 21% and improves strand elimination performance by 18 to 57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. Overall, learning is shown to more than halve the total error rate, and therefore the human time spent manually correcting such vectorizations. PMID:22854035

  18. Microbleed Detection Using Automated Segmentation (MIDAS): A New Method Applicable to Standard Clinical MR Images

    PubMed Central

    Seghier, Mohamed L.; Kolanko, Magdalena A.; Leff, Alexander P.; Jäger, Hans R.; Gregoire, Simone M.; Werring, David J.

    2011-01-01

    Background Cerebral microbleeds, visible on gradient-recalled echo (GRE) T2* MRI, have generated increasing interest as an imaging marker of small vessel diseases, with relevance for intracerebral bleeding risk or brain dysfunction. Methodology/Principal Findings Manual rating methods have limited reliability and are time-consuming. We developed a new method for microbleed detection using automated segmentation (MIDAS) and compared it with a validated visual rating system. In thirty consecutive stroke service patients, standard GRE T2* images were acquired and manually rated for microbleeds by a trained observer. After spatially normalizing each patient's GRE T2* images into a standard stereotaxic space, the automated microbleed detection algorithm (MIDAS) identified cerebral microbleeds by explicitly incorporating an “extra” tissue class for abnormal voxels within a unified segmentation-normalization model. The agreement between manual and automated methods was assessed using the intraclass correlation coefficient (ICC) and Kappa statistic. We found that MIDAS had generally moderate to good agreement with the manual reference method for the presence of lobar microbleeds (Kappa = 0.43, improved to 0.65 after manual exclusion of obvious artefacts). Agreement for the number of microbleeds was very good for lobar regions (ICC = 0.71, improved to ICC = 0.87). MIDAS successfully detected all patients with multiple (≥2) lobar microbleeds. Conclusions/Significance MIDAS can identify microbleeds on standard MR datasets, and with an additional rapid editing step shows good agreement with a validated visual rating system. MIDAS may be useful in screening for multiple lobar microbleeds. PMID:21448456
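
    For reference, the kappa statistic used above has a simple closed form for binary ratings; this is a generic sketch (not the study's analysis code) with stand-in per-patient presence/absence arrays.

    ```python
    import numpy as np

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two binary ratings, e.g. microbleed present/absent."""
        a, b = np.asarray(rater_a, bool), np.asarray(rater_b, bool)
        po = np.mean(a == b)                                         # observed agreement
        pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())   # chance agreement
        return (po - pe) / (1 - pe)
    ```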

  19. All words are not created equal: Expectations about word length guide infant statistical learning

    PubMed Central

    Lew-Williams, Casey; Saffran, Jenny R.

    2011-01-01

    Infants have been described as ‘statistical learners’ capable of extracting structure (such as words) from patterned input (such as language). Here, we investigated whether prior knowledge influences how infants track transitional probabilities in word segmentation tasks. Are infants biased by prior experience when engaging in sequential statistical learning? In a laboratory simulation of learning across time, we exposed 9- and 10-month-old infants to a list of either bisyllabic or trisyllabic nonsense words, followed by a pause-free speech stream composed of a different set of bisyllabic or trisyllabic nonsense words. Listening times revealed successful segmentation of words from fluent speech only when words were uniformly bisyllabic or trisyllabic throughout both phases of the experiment. Hearing trisyllabic words during the pre-exposure phase derailed infants’ abilities to segment speech into bisyllabic words, and vice versa. We conclude that prior knowledge about word length equips infants with perceptual expectations that facilitate efficient processing of subsequent language input. PMID:22088408
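
    The transitional probabilities tracked in such tasks have a simple definition, TP(x -> y) = frequency(xy) / frequency(x); the toy sketch below (invented syllables, not the study's stimuli) shows how TPs stay high within words and drop at word boundaries, which is the segmentation cue.

    ```python
    from collections import Counter

    def transitional_probabilities(syllables):
        """TP(x -> y) = frequency of pair xy / frequency of x in the stream."""
        pair_counts = Counter(zip(syllables, syllables[1:]))
        unit_counts = Counter(syllables[:-1])
        return {(x, y): c / unit_counts[x] for (x, y), c in pair_counts.items()}

    stream = "go la bu pa do ti bi da ku go la bu bi da ku pa do ti".split()
    tps = transitional_probabilities(stream)
    print(tps[("go", "la")])  # 1.0: within-word transition
    print(tps[("bu", "pa")])  # 0.5: transition across a word boundary
    ```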

  20. Traffic Sign Detection System for Locating Road Intersections and Roundabouts: The Chilean Case.

    PubMed

    Villalón-Sepúlveda, Gabriel; Torres-Torriti, Miguel; Flores-Calero, Marco

    2017-05-25

    This paper presents a traffic sign detection method for signs close to road intersections and roundabouts, such as stop and yield (give way) signs. The proposed method relies on statistical templates built using color information for both segmentation and classification. The segmentation method uses the RGB-normalized (ErEgEb) color space for ROIs (Regions of Interest) generation based on a chromaticity filter, where templates at 10 scales are applied to the entire image. Templates consider the mean and standard deviation of normalized color of the traffic signs to build thresholding intervals where the expected color should lie for a given sign. The classification stage employs the information of the statistical templates over YCbCr and ErEgEb color spaces, for which the background has been previously removed by using a probability function that models the probability that the pixel corresponds to a sign given its chromaticity values. This work includes an analysis of the detection rate as a function of the distance between the vehicle and the sign. Such information is useful to validate the robustness of the approach and is often not included in the existing literature. The detection rates, as a function of distance, are compared to those of the well-known Viola-Jones method. The results show that for distances less than 48 m, the proposed method achieves a detection rate of 87.5% and 95.4% for yield and stop signs, respectively. For distances less than 30 m, the detection rate is 100% for both signs. The Viola-Jones approach has detection rates below 20% for distances between 30 and 48 m, and barely improves in the 20-30 m range with detection rates of up to 60%. Thus, the proposed method provides a robust alternative for intersection detection that relies on statistical color-based templates instead of shape information. The experiments employed videos of traffic signs taken in several streets of Santiago, Chile, using a research platform implemented at the Robotics and Automation Laboratory of PUC to develop driver assistance systems.
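
    A minimal sketch of the chromaticity-interval idea: pixels whose normalized RGB values fall inside a mean ± k·σ template interval become ROI candidates. The array names and the interval width k are illustrative assumptions; the multi-scale template matching and the YCbCr classification stage are omitted.

    ```python
    import numpy as np

    def chromaticity_mask(rgb, mean, std, k=2.0):
        """rgb: (H, W, 3) image; mean, std: (3,) template color statistics."""
        s = rgb.sum(axis=2, keepdims=True).astype(float) + 1e-9
        norm = rgb / s                              # ErEgEb chromaticity values
        lo, hi = mean - k * std, mean + k * std     # per-channel intervals
        return np.all((norm >= lo) & (norm <= hi), axis=2)  # candidate pixels
    ```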

  1. Traffic Sign Detection System for Locating Road Intersections and Roundabouts: The Chilean Case

    PubMed Central

    Villalón-Sepúlveda, Gabriel; Torres-Torriti, Miguel; Flores-Calero, Marco

    2017-01-01

    This paper presents a traffic sign detection method for signs close to road intersections and roundabouts, such as stop and yield (give way) signs. The proposed method relies on statistical templates built using color information for both segmentation and classification. The segmentation method uses the RGB-normalized (ErEgEb) color space for ROIs (Regions of Interest) generation based on a chromaticity filter, where templates at 10 scales are applied to the entire image. Templates consider the mean and standard deviation of normalized color of the traffic signs to build thresholding intervals where the expected color should lie for a given sign. The classification stage employs the information of the statistical templates over YCbCr and ErEgEb color spaces, for which the background has been previously removed by using a probability function that models the probability that the pixel corresponds to a sign given its chromaticity values. This work includes an analysis of the detection rate as a function of the distance between the vehicle and the sign. Such information is useful to validate the robustness of the approach and is often not included in the existing literature. The detection rates, as a function of distance, are compared to those of the well-known Viola–Jones method. The results show that for distances less than 48 m, the proposed method achieves a detection rate of 87.5% and 95.4% for yield and stop signs, respectively. For distances less than 30 m, the detection rate is 100% for both signs. The Viola–Jones approach has detection rates below 20% for distances between 30 and 48 m, and barely improves in the 20–30 m range with detection rates of up to 60%. Thus, the proposed method provides a robust alternative for intersection detection that relies on statistical color-based templates instead of shape information. The experiments employed videos of traffic signs taken in several streets of Santiago, Chile, using a research platform implemented at the Robotics and Automation Laboratory of PUC to develop driver assistance systems. PMID:28587071

  2. 3D segmentation of annulus fibrosus and nucleus pulposus from T2-weighted magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Castro-Mateos, Isaac; Pozo, Jose M.; Eltes, Peter E.; Del Rio, Luis; Lazary, Aron; Frangi, Alejandro F.

    2014-12-01

    Computational medicine aims at employing personalised computational models in diagnosis and treatment planning. The use of such models to help physicians in finding the best treatment for low back pain (LBP) is becoming popular. One of the challenges of creating such models is to derive, as a prior step, patient-specific anatomical and tissue models of the lumbar intervertebral discs (IVDs). This article presents a segmentation scheme that obtains accurate results irrespective of the degree of IVD degeneration, including pathological discs with protrusion or herniation. The segmentation algorithm, employing a novel feature selector, iteratively deforms an initial shape, which is projected first into a statistical shape model space and then into a B-Spline space to improve accuracy. The method was tested on an MR dataset of 59 patients suffering from LBP. The images follow a standard T2-weighted protocol in coronal and sagittal acquisitions. These two image volumes were fused in order to overcome large inter-slice spacing. The agreement between expert-delineated structures, used here as the gold standard, and our automatic segmentation was evaluated using the Dice Similarity Index and surface-to-surface distances, obtaining a mean error of 0.68 mm in the annulus segmentation and 1.88 mm in the nucleus, which, relative to the image resolution, are the best results in the current literature.
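
    To make the shape-model step concrete, here is a minimal PCA-based statistical shape model sketch under the usual assumptions (corresponding landmarks, flattened coordinate vectors); it illustrates the generic technique, not the authors' model or their B-Spline refinement.

    ```python
    import numpy as np

    def build_ssm(training_shapes, n_modes=5):
        """training_shapes: (n_shapes, 3 * n_landmarks) corresponding landmarks."""
        X = np.asarray(training_shapes, dtype=float)
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:n_modes].T        # modes as columns, shape (D, n_modes)

    def project(shape, mean, P):
        b = P.T @ (shape - mean)           # shape parameters of the new shape
        return mean + P @ b                # closest shape within the model space
    ```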

  3. Robust crop and weed segmentation under uncontrolled outdoor illumination

    USDA-ARS?s Scientific Manuscript database

    A new machine vision system for weed detection was developed using RGB color model images. The detection algorithm included excessive green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, a median filter, ...
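
    The excess-green step in this (truncated) record is a standard vegetation index, ExG = 2g - r - b on chromaticity-normalized channels; the sketch below uses a simple statistical placeholder threshold, since the record does not specify the exact computation.

    ```python
    import numpy as np

    def vegetation_mask(rgb):
        """rgb: (H, W, 3) color image; returns a boolean plant-pixel mask."""
        s = rgb.sum(axis=2, keepdims=True).astype(float) + 1e-9
        r, g, b = np.moveaxis(rgb / s, 2, 0)   # chromaticity coordinates
        exg = 2 * g - r - b                    # excess-green index
        thresh = exg.mean() + exg.std()        # placeholder statistical threshold
        return exg > thresh
    ```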

  4. Distribution, orientation and scales of the field-aligned currents measured by Swarm

    NASA Astrophysics Data System (ADS)

    Yang, J.; Dunlop, M. W.

    2016-12-01

    We have statistically studied the R1, R2 and net field-aligned currents (FACs) using the FAC data of the Swarm satellites. We have also investigated the statistical, dual-spacecraft correlations of field-aligned current signatures between two Swarm spacecraft (A and C). For the first time we have inferred the orientations of the FAC current sheets directly, using the maximum correlations obtained from sliding data segments, which show clear trends in magnetic local time (MLT). For comparison, we also apply the minimum-variance (MVAB) method. To explore the scale and variability of the current sheet superposition, we investigate the MLT dependence of the maximum correlations in different time-shift or longitude-shift bins.

  5. Short segment search method for phylogenetic analysis using nested sliding windows

    NASA Astrophysics Data System (ADS)

    Iskandar, A. A.; Bustamam, A.; Trimarsanto, H.

    2017-10-01

    To analyze phylogenetics in bioinformatics, the coding DNA sequence (CDS) segment is needed for maximal accuracy. However, analysis of the full CDS costs a lot of time and money, so a short segment representative of the CDS, such as the envelope protein segment or the non-structural 3 (NS3) segment, is needed. After applying the sliding-window procedure, a short segment better than the envelope protein and NS3 segments is found. This paper discusses a mathematical method to analyze sequences using nested sliding windows to find a short segment that is representative of the whole genome. The results show that our method can find a short segment whose tree topology is about 6.57% closer to that of the CDS segment than the topologies obtained from the envelope or NS3 segments.
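
    A minimal sketch of a nested sliding-window search: an outer window scans the genome and an inner window refines candidates within it. The window sizes, steps and scoring function are illustrative assumptions, not the authors' exact procedure (which evaluates segments by the topology of the resulting phylogenetic tree).

    ```python
    def nested_window_search(seq, outer=600, inner=200, step=50, score=None):
        """Return (best score, (start, end)) of the best inner-window segment."""
        score = score or (lambda s: s.count("G") + s.count("C"))  # placeholder
        best = (float("-inf"), None)
        for i in range(0, len(seq) - outer + 1, step):            # outer window
            for j in range(0, outer - inner + 1, step):           # nested window
                cand = seq[i + j:i + j + inner]
                best = max(best, (score(cand), (i + j, i + j + inner)))
        return best

    print(nested_window_search("ATGC" * 300))
    ```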

  6. Segmentation of blurred objects using wavelet transform: application to x-ray images

    NASA Astrophysics Data System (ADS)

    Barat, Cecile S.; Ducottet, Christophe; Bilgot, Anne; Desbat, Laurent

    2004-02-01

    First, we present a wavelet-based algorithm for edge detection and characterization, which is an adaptation of Mallat and Hwang's method. This algorithm relies on a modelization of contours as smoothed singularities of three particular types (transitions, peaks and lines). On the one hand, it detects and locates edges at an adapted scale. On the other hand, it identifies the type of each detected edge point and measures its amplitude and smoothing size; these parameters represent, respectively, the contrast and the smoothness level of the edge point. Second, we explain how this method has been integrated into a 3D bone surface reconstruction algorithm designed for computer-assisted and minimally invasive orthopaedic surgery. In order to decrease the dose to the patient and to rapidly obtain a 3D image, we propose to identify a bone shape from a few X-ray projections by using statistical shape models registered to segmented X-ray projections. We apply this approach to pedicle screw insertion (scoliosis, fractures...), where ten to forty percent of the screws are known to be misplaced. In this context, the proposed edge detection algorithm makes it possible to overcome the major problem of vertebra segmentation in the X-ray images.

  7. Change detection of polarimetric SAR images based on the KummerU Distribution

    NASA Astrophysics Data System (ADS)

    Chen, Quan; Zou, Pengfei; Li, Zhen; Zhang, Ping

    2014-11-01

    In the field of PolSAR image segmentation, change detection and classification, the classical Wishart distribution has been used for a long time, but it is especially suited to low-resolution SAR images, because with traditional sensors only a small number of scatterers are present in each resolution cell. With the improvement of SAR systems in recent years, the classical statistical models must therefore be reconsidered for the high resolution and polarimetric information contained in the images acquired by these advanced systems. In this study, SAR image segmentation based on the level-set method, augmented with distance regularized level-set evolution (DRLSE), is performed using Envisat/ASAR single-polarization data and Radarsat-2 polarimetric images, respectively. The KummerU heterogeneous clutter model is used in the latter to overcome the homogeneity hypothesis at high-resolution cells. An enhanced distance regularized level-set evolution (DRLSE-E) is also applied in the latter to ensure accurate computation and stable level-set evolution. Finally, change detection based on four polarimetric Radarsat-2 time-series images is carried out for the Genhe area of the Inner Mongolia Autonomous Region, northeastern China, where a heavy flood disaster occurred during the summer of 2013; the results show that the recommended segmentation method can effectively detect the change of the watershed.

  8. Statistical and Variational Methods for Problems in Visual Control

    DTIC Science & Technology

    2009-03-02

    "...plane curves to round points," J. Differential Geometry 26 (1987), pp. 285-314.
    [7] S. Haker, G. Sapiro, and A. Tannenbaum, "Knowledge-based segmentation of SAR data with learned priors," IEEE Trans. Image Processing, vol. 9, pp. 298-302, 2000.
    [8] S. Haker, L. Zhu, S. Angenent, and A. Tannenbaum, "Optimal mass transport for registration and warping," Int. Journal Computer Vision, vol. 60, pp. 225-240, 2004.
    [9] S. Haker, G. Sapiro, A

  9. Automated Tumor Volumetry Using Computer-Aided Image Segmentation

    PubMed Central

    Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A.; Ali, Zarina S.; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M.; Davatzikos, Christos

    2015-01-01

    Rationale and Objectives Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. Materials and Methods A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Results Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0–5 rating scale where 5 indicated perfect segmentation. Conclusions The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. PMID:25770633

  10. Estimation of stature from the foot and its segments in a sub-adult female population of North India.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Passi, Neelam

    2011-11-21

    Establishing personal identity is one of the main concerns in forensic investigations. Estimation of stature forms a basic domain of the investigation process for unknown and commingled human remains in forensic anthropology casework. The objective of the present study was to set up standards for the estimation of stature from the foot and its segments in a sub-adult female population. The study sample comprised 149 young females from the northern part of India, aged between 13 and 18 years. Besides stature, seven anthropometric measurements, namely the lengths of the foot measured from each toe (T1, T2, T3, T4, and T5, respectively), foot breadth at the ball (BBAL) and foot breadth at the heel (BHEL), were taken on both feet of each participant using standard methods and techniques. The results indicated that statistically significant differences (p < 0.05) between the left and right feet occur in both foot breadth measurements (BBAL and BHEL). Foot length measurements (T1 to T5 lengths) did not show any statistically significant bilateral asymmetry. The correlation between stature and all the foot measurements was found to be positive and statistically significant (p < 0.001). Linear regression models and multiple regression models were derived for the estimation of stature from the measurements of the foot. The present study indicates that anthropometric measurements of the foot and its segments are valuable in the estimation of stature; foot length measurements estimate stature with greater accuracy than foot breadth measurements. The study concluded that foot measurements have a strong relationship with stature in the sub-adult female population of North India; hence, the stature of an individual can be successfully estimated from the foot and its segments using the different regression models derived in the study. These models may be applied successfully whenever foot remains are brought for forensic examination. Stepwise multiple regression models tend to estimate stature more accurately than linear regression models in female sub-adults.

  11. Designing image segmentation studies: Statistical power, sample size and reference standard quality.

    PubMed

    Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C

    2017-12-01

    Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference of less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
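
    As a rough illustration of the kind of calculation involved, the sketch below gives the standard normal-approximation sample size for detecting a difference delta in mean accuracy between two independent arms with common standard deviation sigma; this is a generic textbook formula, not the paper's derivation, which additionally accounts for reference standard quality.

    ```python
    from math import ceil
    from statistics import NormalDist

    def sample_size(delta, sigma, alpha=0.05, power=0.8):
        """Subjects per arm to detect a mean difference delta (two-sided test)."""
        z = NormalDist().inv_cdf
        return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2)

    # e.g. detecting a 2% accuracy difference with a 5% between-subject SD:
    print(sample_size(delta=0.02, sigma=0.05))
    ```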

  12. A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction

    NASA Astrophysics Data System (ADS)

    Wels, Michael; Zheng, Yefeng; Huber, Martin; Hornegger, Joachim; Comaniciu, Dorin

    2011-06-01

    We describe a fully automated method for tissue classification, which is the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebral spinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average Dice coefficients of 0.93 ± 0.03 (WM) and 0.90 ± 0.05 (GM) on simulated mono-spectral and 0.94 ± 0.02 (WM) and 0.92 ± 0.04 (GM) on simulated multi-spectral data from the BrainWeb repository. The scores are 0.81 ± 0.09 (WM) and 0.82 ± 0.06 (GM) and 0.87 ± 0.05 (WM) and 0.83 ± 0.12 (GM) for the two collections of real-world data sets—consisting of 20 and 18 volumes, respectively—provided by the Internet Brain Segmentation Repository.

  13. A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction.

    PubMed

    Wels, Michael; Zheng, Yefeng; Huber, Martin; Hornegger, Joachim; Comaniciu, Dorin

    2011-06-07

    We describe a fully automated method for tissue classification, which is the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebral spinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average Dice coefficients of 0.93 ± 0.03 (WM) and 0.90 ± 0.05 (GM) on simulated mono-spectral and 0.94 ± 0.02 (WM) and 0.92 ± 0.04 (GM) on simulated multi-spectral data from the BrainWeb repository. The scores are 0.81 ± 0.09 (WM) and 0.82 ± 0.06 (GM) and 0.87 ± 0.05 (WM) and 0.83 ± 0.12 (GM) for the two collections of real-world data sets-consisting of 20 and 18 volumes, respectively-provided by the Internet Brain Segmentation Repository.

  14. White blood cell segmentation by color-space-based k-means clustering.

    PubMed

    Zhang, Congcong; Xiao, Xiaoyan; Li, Xiaomei; Chen, Ying-Jie; Zhen, Wu; Chang, Jun; Zheng, Chengyun; Liu, Zhi

    2014-09-01

    White blood cell (WBC) segmentation, which is important for cytometry, is a challenging issue because of the morphological diversity of WBCs and the complex and uncertain background of blood smear images. This paper proposes a novel method for the nucleus and cytoplasm segmentation of WBCs for cytometry. A color adjustment step was also introduced before segmentation. Color space decomposition and k-means clustering were combined for segmentation. A database of 300 microscopic blood smear images was used to evaluate the performance of our method. The proposed segmentation method achieves 95.7% and 91.3% overall accuracy for nucleus segmentation and cytoplasm segmentation, respectively. Experimental results demonstrate that the proposed method can segment WBCs effectively with high accuracy.
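
    A minimal sketch of the clustering step, assuming the decomposed color channels are already stacked per pixel; the channel choice, cluster count and the rule for picking the nucleus cluster are assumptions, not the paper's pipeline.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_segment(channels, n_clusters=3):
        """channels: (H, W, C) color-space features; returns (H, W) labels."""
        X = channels.reshape(-1, channels.shape[-1]).astype(float)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
        return labels.reshape(channels.shape[:2])
    ```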

  15. a New Improved Threshold Segmentation Method for Scanning Images of Reservoir Rocks Considering Pore Fractal Characteristics

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Li, Xizhe; Yang, Zhengming; Lin, Lijun; Xiong, Shengchun; Wang, Zhiyuan; Wang, Xiangyang; Xiao, Qianhua

    Based on the basic principle of the porosity method in image segmentation, and considering the relationship between the porosity of rocks and the fractal characteristics of their pore structures, a new improved image segmentation method was proposed, which uses the porosity calculated from each core image as a constraint to obtain the best threshold. The results of a comparative analysis show that the porosity method can, in theory, segment images best, but its actual segmentation effect deviates from the real situation. Due to core heterogeneity and isolated pores, the porosity method, which takes the experimentally measured porosity of the whole core as its criterion, cannot achieve the desired segmentation effect. On the contrary, the new improved method overcomes these shortcomings and produces a more reasonable binary segmentation of the core grayscale images by segmenting each image according to its own calculated porosity. Moreover, basing the segmentation on the calculated rather than the measured porosity also greatly saves manpower and material resources, especially for tight rocks.
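
    The porosity constraint can be implemented as a quantile lookup: choose the gray level at which the below-threshold pixel fraction equals the porosity calculated for that image. A minimal sketch, assuming pores are darker than grains and using a made-up porosity value:

    ```python
    import numpy as np

    def porosity_threshold(gray_image, target_porosity):
        """Threshold such that the pore fraction matches the target porosity."""
        return np.quantile(gray_image, target_porosity)   # pores assumed darker

    img = np.random.rand(256, 256)                        # stand-in core image
    t = porosity_threshold(img, target_porosity=0.12)
    pores = img <= t                                      # binary pore mask
    print(pores.mean())                                   # ~0.12 by construction
    ```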

  16. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    PubMed

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to the mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank (up to 26 mm), which was found to limit the accuracy of all studied scaling methods. Errors in the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  17. Statistical modelling of subdiffusive dynamics in the cytoplasm of living cells: A FARIMA approach

    NASA Astrophysics Data System (ADS)

    Burnecki, K.; Muszkieta, M.; Sikora, G.; Weron, A.

    2012-04-01

    Golding and Cox (Phys. Rev. Lett., 96 (2006) 098102) tracked the motion of individual fluorescently labelled mRNA molecules inside live E. coli cells. They found that in the set of 23 trajectories from 3 different experiments the automatically recognized motion is subdiffusive, and they published an intriguing microscopy video. Here, we extract the corresponding time series from this video by an image segmentation method and present its detailed statistical analysis. We find that this trajectory was not included in the data set already studied and has different statistical properties. It is best fitted by a fractional autoregressive integrated moving average (FARIMA) process with normal-inverse Gaussian (NIG) noise and negative memory. In contrast to earlier studies, this shows that fractional Brownian motion is not the best model for the dynamics documented in this video.

  18. Detection of reflecting surfaces by a statistical model

    NASA Astrophysics Data System (ADS)

    He, Qiang; Chu, Chee-Hung H.

    2009-02-01

    Remote sensing is widely used to assess the destruction from natural disasters and to plan relief and recovery operations. Automatically extracting useful features and segmenting objects of interest from digital images, including remote sensing imagery, has become a critical task for image understanding. Unfortunately, current research on automated feature extraction largely ignores contextual information. As a result, attributes corresponding to features and objects of interest cannot be populated with satisfactory fidelity. In this paper, we present an exploration of meaningful object extraction that integrates reflecting surfaces. Detection of specular reflecting surfaces can be useful in target identification and can then be applied to environmental monitoring, disaster prediction and analysis, military applications, and counter-terrorism. Our method is based on a statistical model that captures the statistical properties of specular reflecting surfaces; the reflecting surfaces are then detected through cluster analysis.

  19. Monitoring Statistics Which Have Increased Power over a Reduced Time Range.

    ERIC Educational Resources Information Center

    Tang, S. M.; MacNeill, I. B.

    1992-01-01

    The problem of monitoring trends for changes at unknown times is considered. Statistics that permit one to focus high power on a segment of the monitored period are studied. Numerical procedures are developed to compute the null distribution of these statistics. (Author)

  20. Do statistical segmentation abilities predict lexical-phonological and lexical-semantic abilities in children with and without SLI?

    PubMed Central

    Mainela-Arnold, Elina; Evans, Julia L.

    2014-01-01

    This study tested the predictions of the procedural deficit hypothesis by investigating the relationship between sequential statistical learning and two aspects of lexical ability, lexical-phonological and lexical-semantic, in children with and without specific language impairment (SLI). Participants included 40 children (ages 8;5–12;3), 20 children with SLI and 20 with typical development. Children completed Saffran’s statistical word segmentation task, a lexical-phonological access task (gating task), and a word definition task. Poor statistical learners were also poor at managing lexical-phonological competition during the gating task. However, statistical learning was not a significant predictor of semantic richness in word definitions. The ability to track statistical sequential regularities may be important for learning the inherently sequential structure of lexical-phonology, but not as important for learning lexical-semantic knowledge. Consistent with the procedural/declarative memory distinction, the brain networks associated with the two types of lexical learning are likely to have different learning properties. PMID:23425593

  1. Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization

    NASA Astrophysics Data System (ADS)

    Li, Li

    2018-03-01

    In order to extract the target from a complex background more quickly and accurately, and to further improve defect detection, a dual-threshold segmentation method using Arimoto entropy based on chaotic bee colony optimization was proposed. Firstly, single-threshold selection based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the Arimoto entropy dual-threshold formulae were computed by recursion, effectively eliminating redundant computation and reducing the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved with a chaotic sequence based on the tent map, and a fast search for the two optimal thresholds was achieved using the improved bee colony optimization algorithm, substantially accelerating the search. A large number of experimental results show that, compared with existing segmentation methods such as the multi-threshold method using maximum Shannon entropy, the two-dimensional Shannon entropy method, the two-dimensional Tsallis gray entropy method and the multi-threshold method using reciprocal gray entropy, the proposed method segments the target more quickly and more accurately, with a superior segmentation effect, making it a fast and effective method for image segmentation.
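
    A minimal sketch of the tent-map chaotic sequence used to diversify the local search; the mu parameter and the mapping of chaos values onto candidate thresholds are illustrative assumptions, not the paper's exact update rule.

    ```python
    def tent_map_sequence(x0, n, mu=1.99):
        """Generate n chaotic values in (0, 1) from the tent map."""
        xs, x = [], x0
        for _ in range(n):
            x = mu * x if x < 0.5 else mu * (1.0 - x)
            xs.append(x)
        return xs

    # Spread candidate gray-level thresholds around a current best solution:
    best, radius = 120, 10
    cands = [int(best + radius * (2 * c - 1)) for c in tent_map_sequence(0.37, 5)]
    print(cands)
    ```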

  2. Kinetic quantitation of cerebral PET-FDG studies without concurrent blood sampling: statistical recovery of the arterial input function.

    PubMed

    O'Sullivan, F; Kirrane, J; Muzi, M; O'Sullivan, J N; Spence, A M; Mankoff, D A; Krohn, K A

    2010-03-01

    Kinetic quantitation of dynamic positron emission tomography (PET) studies via compartmental modeling usually requires the time-course of the radio-tracer concentration in the arterial blood as an arterial input function (AIF). For human and animal imaging applications, significant practical difficulties are associated with direct arterial sampling, and as a result there is substantial interest in alternative methods that require no blood sampling at the time of the study. A fixed population template input function derived from prior experience with directly sampled arterial curves is one possibility. Image-based extraction, including requisite adjustment for spillover and recovery, is another approach. The present work considers a hybrid statistical approach based on a penalty formulation in which the information derived from a priori studies is combined in a Bayesian manner with information contained in the sampled image data in order to obtain an input function estimate. The absolute scaling of the input is achieved by an empirical calibration equation involving the injected dose together with the subject's weight, height and gender. The technique is illustrated in the context of (18)F-fluorodeoxyglucose (FDG) PET studies in humans. A collection of 79 arterially sampled FDG blood curves is used as a basis for a priori characterization of input function variability, including scaling characteristics. Data from a series of 12 dynamic cerebral FDG PET studies in normal subjects are used to evaluate the performance of the penalty-based AIF estimation technique. The focus of evaluations is on quantitation of FDG kinetics over a set of 10 regional brain structures. As well as the new method, a fixed population template AIF and a direct AIF estimate based on segmentation are also considered. Kinetics analyses resulting from these three AIFs are compared with those resulting from arterially sampled AIFs. The proposed penalty-based AIF extraction method is found to achieve significant improvements over the fixed template and the segmentation methods. As well as achieving acceptable kinetic parameter accuracy, the quality of fit of the region of interest (ROI) time-course data based on the extracted AIF matches results based on arterially sampled AIFs. In comparison, significant deviation in the estimation of FDG flux and degradation in ROI data fit are found with the template and segmentation methods. The proposed AIF extraction method is recommended for practical use.

  3. Accuracy of a Computer-Aided Surgical Simulation (CASS) Protocol for Orthognathic Surgery: A Prospective Multicenter Study

    PubMed Central

    Hsu, Sam Sheng-Pin; Gateno, Jaime; Bell, R. Bryan; Hirsch, David L.; Markiewicz, Michael R.; Teichgraeber, John F.; Zhou, Xiaobo; Xia, James J.

    2012-01-01

    Purpose The purpose of this prospective multicenter study was to assess the accuracy of a computer-aided surgical simulation (CASS) protocol for orthognathic surgery. Materials and Methods The accuracy of the CASS protocol was assessed by comparing planned and postoperative outcomes of 65 consecutive patients enrolled from 3 centers. Computer-generated surgical splints were used for all patients. For the genioplasty, one center utilized computer-generated chin templates to reposition the chin segment only for patients with asymmetry. Standard intraoperative measurements were utilized without the chin templates for the remaining patients. The primary outcome measurements were linear and angular differences for the maxilla, mandible and chin when the planned and postoperative models were registered at the cranium. The secondary outcome measurements were: maxillary dental midline difference between the planned and postoperative positions; and linear and angular differences of the chin segment between the groups with and without the use of the template. The latter was measured when the planned and postoperative models were registered at the mandibular body. Statistical analyses were performed, and the accuracy was reported using the root mean square deviation (RMSD) and Bland and Altman's method for assessing measurement agreement. Results In the primary outcome measurements, there was no statistically significant difference among the 3 centers for the maxilla and mandible. The largest RMSD was 1.0 mm and 1.5° for the maxilla, and 1.1 mm and 1.8° for the mandible. For the chin, there was a statistically significant difference between the groups with and without the use of the chin template. The chin template group showed excellent accuracy, with a largest positional RMSD of 1.0 mm and a largest orientational RMSD of 2.2°. However, larger variances were observed in the group not using the chin template. This was significant in the anteroposterior and superoinferior directions, as well as in the pitch and yaw orientations. In the secondary outcome measurements, the RMSD of the maxillary dental midline positions was 0.9 mm. When registered at the body of the mandible, the linear and angular differences of the chin segment between the groups with and without the use of the chin template were consistent with the results found in the primary outcome measurements. Conclusion Using the CASS protocol, the computerized plan can be accurately and consistently transferred to the patient to position the maxilla and mandible at the time of surgery. The computer-generated chin template provides more accuracy in repositioning the chin segment than the intraoperative measurements. PMID:22695016
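
    For reference, the RMSD accuracy metric quoted above is simply the root mean square of corresponding landmark distances after registration; a minimal sketch with stand-in landmark arrays, not the study's measurement code:

    ```python
    import numpy as np

    def rmsd(planned, postop):
        """planned, postop: (N, 3) corresponding landmark coordinates in mm."""
        d = np.linalg.norm(planned - postop, axis=1)   # per-landmark distances
        return float(np.sqrt(np.mean(d ** 2)))
    ```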

  4. Interactive contour delineation of organs at risk in radiotherapy: Clinical evaluation on NSCLC patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolz, J., E-mail: jose.dolz.upv@gmail.com; Kirişli, H. A.; Massoptier, L.

    2016-05-15

    Purpose: Accurate delineation of organs at risk (OARs) on computed tomography (CT) images is required for radiation treatment planning (RTP). Because manual delineation of OARs is time consuming and prone to high interobserver variability, many (semi-) automatic methods have been proposed. However, most of them are specific to a particular OAR. Here, an interactive computer-assisted system able to segment various OARs required for thoracic radiation therapy is introduced. Methods: Segmentation information (foreground and background seeds) is interactively added by the user in any of the three main orthogonal views of the CT volume and is subsequently propagated within the whole volume. The proposed method is based on the combination of the watershed transformation and the graph-cuts algorithm, which is used as a powerful optimization technique to minimize the energy function. The OARs considered for thoracic radiation therapy are the lungs, spinal cord, trachea, proximal bronchus tree, heart, and esophagus. The method was evaluated on multivendor CT datasets of 30 patients. Two radiation oncologists participated in the study and manual delineations from the original RTP were used as ground truth for evaluation. Results: Delineation of the OARs obtained with the minimally interactive approach was approved as usable for RTP in nearly 90% of the cases, excluding the esophagus, whose segmentation was mostly rejected; this led to a gain of time ranging from 50% to 80% in RTP. Considering exclusively the accepted cases, across all OARs a Dice similarity coefficient higher than 0.7 and a Hausdorff distance below 10 mm with respect to the ground truth were achieved. In addition, the interobserver analysis did not highlight any statistically significant difference, with the exception of the segmentation of the heart, in terms of Hausdorff distance and volume difference. Conclusions: An interactive, accurate, fast, and easy-to-use computer-assisted system able to segment various OARs required for thoracic radiation therapy has been presented and clinically evaluated. The introduction of the proposed system in clinical routine may offer a valuable new option to radiation oncologists in performing RTP.

  5. Improving nurses' knowledge of continuous ST-segment monitoring.

    PubMed

    Chronister, Connie

    2014-01-01

    Continuous ST-segment monitoring can result in detection of myocardial ischemia, but in clinical practice, continuous ST-segment monitoring is conducted incorrectly and underused by many registered nurses (RNs). Many RNs are unable to correctly institute ST-segment monitoring guidelines because of a lack of education. To evaluate whether an educational intervention, provided to 32 RNs, increases knowledge and correct clinical decision making (CDM) for the use of continuous ST-segment monitoring. At a single institution, an ST-segment monitoring class was provided to RNs in 2 cardiovascular units. Knowledge and correct CDM instruments were used for a baseline pretest and subsequent posttest after ST-segment monitoring education. Statistical significance between pretest and posttest scores for knowledge and correct CDM practice was noted with dependent t tests (P = .0001). Many RNs responsible for electrocardiographic monitoring are not aware of evidence-based ST-segment monitoring practice guidelines and cannot properly place precordial leads needed for ST-segment monitoring. Knowledge and correct CDM with ST-segment monitoring can be improved with focused education.

  6. Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi

    2017-05-01

    Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
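
    For orientation, the common baseline that label-fusion methods improve upon is per-voxel majority voting across the registered atlas labels, which by construction assigns exactly one label per voxel (no overlaps or gaps); this is a generic sketch with stand-in label volumes, not the paper's novel fusion algorithm.

    ```python
    import numpy as np

    def majority_vote(atlas_labels, n_classes):
        """atlas_labels: (n_atlases, D, H, W) integer label volumes."""
        votes = np.zeros((n_classes,) + atlas_labels.shape[1:], dtype=np.int32)
        for lab in atlas_labels:
            for c in range(n_classes):
                votes[c] += (lab == c)        # tally each atlas's vote per voxel
        return votes.argmax(axis=0)           # fused map: one label per voxel
    ```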

  7. RFA-cut: Semi-automatic segmentation of radiofrequency ablation zones with and without needles via optimal s-t-cuts.

    PubMed

    Egger, Jan; Busse, Harald; Brandmaier, Philipp; Seider, Daniel; Gawlitza, Matthias; Strocka, Steffen; Voglreiter, Philip; Dokter, Mark; Hofmann, Michael; Kainz, Bernhard; Chen, Xiaojun; Hann, Alexander; Boechat, Pedro; Yu, Wei; Freisleben, Bernd; Alhonnoro, Tuomas; Pollari, Mika; Moche, Michael; Schmalstieg, Dieter

    2015-01-01

    In this contribution, we present a semi-automatic segmentation algorithm for radiofrequency ablation (RFA) zones via optimal s-t-cuts. Our interactive graph-based approach builds upon a polyhedron to construct the graph and was specifically designed for computed tomography (CT) acquisitions from patients that had RFA treatments of hepatocellular carcinomas (HCC). For evaluation, we used twelve post-interventional CT datasets from the clinical routine, and as evaluation metric we utilized the Dice Similarity Coefficient (DSC), which is commonly accepted for judging computer-aided medical segmentation tasks. Compared with pure manual slice-by-slice expert segmentations from interventional radiologists, we were able to achieve a DSC of about eighty percent, which is sufficient for our clinical needs. Moreover, our approach was able to handle images containing (DSC = 75.9%) and not containing (78.1%) the RFA needles still in place. Additionally, we found no statistically significant difference (p < 0.423) between the segmentation results of the two subgroups under a Mann-Whitney test. Finally, to the best of our knowledge, this is the first time a segmentation approach for CT scans including the RFA needles is reported, and we show why another state-of-the-art segmentation method fails for these cases. Intraoperative scans including an RFA probe are very critical in clinical practice and need very careful segmentation and inspection to avoid under-treatment, which may result in tumor recurrence (up to 40%). If the decision can be made during the intervention, an additional ablation can be performed without removing the entire needle. This decreases patient stress and the associated risks and costs of a separate intervention at a later date. Ultimately, the segmented ablation zone containing the RFA needle can be used for a precise ablation simulation, as the real needle position is known.

  8. Automated tumor volumetry using computer-aided image segmentation.

    PubMed

    Gaonkar, Bilwaj; Macyszyn, Luke; Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A; Ali, Zarina S; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M; Davatzikos, Christos

    2015-05-01

    Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0-5 rating scale where 5 indicated perfect segmentation. The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  9. Application of segmented dental panoramic tomography among children: positive effect of continuing education in radiation protection

    PubMed Central

    Waltimo-Sirén, Janna; Laatikainen, Tuula; Haukka, Jari; Ekholm, Marja

    2016-01-01

    Objectives: Dental panoramic tomography is the most frequent examination among 7–12-year-olds, according to the Radiation Safety and Nuclear Authority of Finland. At those ages, dental panoramic tomographs (DPTs) are mostly obtained for orthodontic reasons. Reducing children's dose by trimming the field size to the area of interest is important because of their high radiosensitivity. Yet the majority of DPTs in this age group are still taken using an adult programme and never using a segmented programme. The purpose of the present study was to raise the awareness of dental staff with respect to children's radiation safety, to increase the application of segmented and child DPT programmes by further educating the whole dental team, and to evaluate the outcome of the educational intervention. Methods: A five-step intervention programme, focusing on DPT field limitation possibilities, was carried out in community-based dental care as a part of mandatory continuing education in radiation protection. Application of segmented and child DPT programmes was thereafter prospectively followed up during a 1-year period and compared with our similar data from 2010 using a logistic regression analysis. Results: Application of the child programme increased by 9% and of the segmented programme by 2%, reaching statistical significance (odds ratio 1.68; 95% confidence interval 1.23–2.30; p-value < 0.001). The number of repeated exposures remained at an acceptable level. The segmented DPTs were most frequently taken of the maxillary lateral incisor–canine area. Conclusions: The educational intervention resulted in an improvement of radiological practice with respect to the radiation safety of children during dental panoramic tomography. Segmented and child DPT programmes can be applied successfully in dental practice for children. PMID:27142159

  10. A combined learning algorithm for prostate segmentation on 3D CT images.

    PubMed

    Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei

    2017-11-01

    Segmentation of the prostate on CT images has many applications in the diagnosis and treatment of prostate cancer. Because of the low soft-tissue contrast on CT images, prostate segmentation is a challenging task. A learning-based segmentation method is proposed for the prostate on three-dimensional (3D) CT images. We combine population-based and patient-based learning methods for segmenting the prostate on CT images. Population data can provide useful information to guide the segmentation process. Because of inter-patient variation, patient-specific information is particularly useful for improving the segmentation accuracy for an individual patient. In this study, we combine a population learning method and a patient-specific learning method to improve the robustness of prostate segmentation on CT images. We train a population model based on the data from a group of prostate patients. We also train a patient-specific model based on the data of the individual patient and incorporate information marked by user interaction into the segmentation process. We calculate the similarity between the two models to obtain applicable population and patient-specific knowledge with which to compute the likelihood that a pixel belongs to prostate tissue. A new adaptive threshold method is developed to convert the likelihood image into a binary image of the prostate, and thus complete the segmentation of the gland on CT images. The proposed learning-based segmentation algorithm was validated using 3D CT volumes of 92 patients. All of the CT image volumes were manually segmented independently three times by two clinically experienced radiologists, and the manual segmentation results served as the gold standard for evaluation. The experimental results show that the segmentation method achieved a Dice similarity coefficient of 87.18 ± 2.99% relative to the manual segmentation. By combining the population learning and patient-specific learning methods, the proposed method is effective for segmenting the prostate on 3D CT images. The prostate CT segmentation method can be used in various applications including volume measurement and treatment planning of the prostate. © 2017 American Association of Physicists in Medicine.
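    The paper's adaptive threshold is its own contribution and is not specified here; as a generic stand-in for the final binarization step, the sketch below applies Otsu's threshold to a likelihood image.

```python
import numpy as np
from skimage.filters import threshold_otsu

def binarize_likelihood(likelihood: np.ndarray) -> np.ndarray:
    """Convert a prostate-likelihood image into a binary mask using Otsu's
    threshold (a generic stand-in; the paper derives its own adaptive rule)."""
    t = threshold_otsu(likelihood)
    return likelihood > t

# Usage with a hypothetical likelihood map in [0, 1].
rng = np.random.default_rng(0)
likelihood = rng.beta(2, 5, size=(128, 128))
mask = binarize_likelihood(likelihood)
```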

  11. Statistical word learning in children with autism spectrum disorder and specific language impairment.

    PubMed

    Haebig, Eileen; Saffran, Jenny R; Ellis Weismer, Susan

    2017-11-01

    Word learning is an important component of language development that influences child outcomes across multiple domains. Despite the importance of word knowledge, word-learning mechanisms are poorly understood in children with specific language impairment (SLI) and children with autism spectrum disorder (ASD). This study examined underlying mechanisms of word learning, specifically, statistical learning and fast-mapping, in school-aged children with typical and atypical development. Statistical learning was assessed through a word segmentation task and fast-mapping was examined in an object-label association task. We also examined children's ability to map meaning onto newly segmented words in a third task that combined exposure to an artificial language and a fast-mapping task. Children with SLI had poorer performance on the word segmentation and fast-mapping tasks relative to the typically developing and ASD groups, who did not differ from one another. However, when children with SLI were exposed to an artificial language with phonemes used in the subsequent fast-mapping task, they successfully learned more words than in the isolated fast-mapping task. There was some evidence that word segmentation abilities are associated with word learning in school-aged children with typical development and ASD, but not SLI. Follow-up analyses also examined performance in children with ASD who did and did not have a language impairment. Children with ASD with language impairment evidenced intact statistical learning abilities, but subtle weaknesses in fast-mapping abilities. As the Procedural Deficit Hypothesis (PDH) predicts, children with SLI have impairments in statistical learning. However, children with SLI also have impairments in fast-mapping. Nonetheless, they are able to take advantage of additional phonological exposure to boost subsequent word-learning performance. In contrast to the PDH, children with ASD appear to have intact statistical learning, regardless of language status; however, fast-mapping abilities differ according to broader language skills. © 2017 Association for Child and Adolescent Mental Health.
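    The statistical-learning component of such word segmentation tasks rests on tracking transitional probabilities between syllables; as a rough illustration of the idea (not the study's stimuli or procedure), the sketch below segments a hypothetical artificial-language stream at low-probability transitions.

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """TP(a -> b) = freq(ab) / freq(a), computed over a syllable stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    unit_counts = Counter(stream[:-1])
    return {pair: n / unit_counts[pair[0]] for pair, n in pair_counts.items()}

# Hypothetical artificial language built from three trisyllabic words.
random.seed(0)
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["da", "ku", "me"]]
stream = [syl for _ in range(200) for syl in random.choice(words)]

tps = transitional_probabilities(stream)
# Within-word transitions (e.g. 'tu'->'pi') have TP = 1.0; transitions across
# word boundaries (e.g. 'ro'->'go') have TP near 1/3, marking likely boundaries.
boundaries = sorted(pair for pair, tp in tps.items() if tp < 0.5)
```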

  12. Coupled dictionary learning for joint MR image restoration and segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Xuesong; Fan, Yong

    2018-03-01

    To achieve better segmentation of MR images, image restoration is typically used as a preprocessing step, especially for low-quality MR images. Recent studies have demonstrated that dictionary learning methods can achieve promising performance for both image restoration and image segmentation. These methods typically learn paired dictionaries of image patches from different sources and use a common sparse representation to characterize paired image patches, such as low-quality image patches and their corresponding high-quality counterparts for image restoration, and image patches and their corresponding segmentation labels for image segmentation. Since learning these dictionaries jointly in a unified framework may improve image restoration and segmentation simultaneously, we propose a coupled dictionary learning method that concurrently learns dictionaries for joint image restoration and segmentation, based on sparse representations, in a multi-atlas image segmentation framework. In particular, three dictionaries (a dictionary of low-quality image patches, a dictionary of high-quality image patches, and a dictionary of segmentation-label patches) are learned in a unified framework so that the learned restoration and segmentation dictionaries can benefit each other. Our method has been evaluated for segmenting the hippocampus in MR T1 images collected with scanners of different magnetic field strengths. The experimental results demonstrate that our method achieved better image restoration and segmentation performance than state-of-the-art dictionary learning and sparse-representation-based image restoration and segmentation methods.
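    The coupling idea can be sketched with two dictionaries sharing one sparse code (the paper couples three, including label patches, in its own unified objective); the data, sizes, and use of scikit-learn below are illustrative assumptions only.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

# Hypothetical paired training patches (rows are flattened 8x8 patches).
rng = np.random.default_rng(0)
high_q = rng.normal(size=(500, 64))                  # "clean" patches
low_q = high_q + 0.3 * rng.normal(size=(500, 64))    # degraded counterparts

# Couple the dictionaries by learning on concatenated patch pairs, so one
# sparse code jointly indexes an atom in each dictionary.
joint = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20)
joint.fit(np.hstack([low_q, high_q]))
D_low, D_high = joint.components_[:, :64], joint.components_[:, 64:]

# Restoration: sparse-code a low-quality patch against D_low, then
# reconstruct it with the coupled high-quality dictionary D_high.
code = SparseCoder(dictionary=D_low, transform_algorithm="lasso_lars",
                   transform_alpha=0.1).transform(low_q[:1])
restored = code @ D_high
```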

  13. Gap-free segmentation of vascular networks with automatic image processing pipeline.

    PubMed

    Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas

    2017-03-01

    Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and intensity discontinuities that hinder segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity, without gaps, loops, or dangling segments. Proper tree connectivity is also important for high-quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline, with automated parameter settings, for enhancing tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image that is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets, enabling rigorous statistical comparison of biometrics in subject-specific vascular trees. Robust and accurate image segmentation using a validated filter pipeline would also eliminate the operator dependency observed in manual segmentation. Moreover, manual segmentation is prohibitively time consuming: vascular trees can have thousands of segments and bifurcations, so interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
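    The paper assembles its own automated filter pipeline, which is not reproduced here; as a generic stand-in for the vessel-enhancement stage, the sketch below applies the Hessian-based Frangi vesselness filter from scikit-image, assuming a bright-vessel angiographic volume.

```python
import numpy as np
from skimage.filters import frangi

def enhance_vessels(volume: np.ndarray) -> np.ndarray:
    """Emphasize bright tubular structures with multi-scale Frangi
    vesselness (one plausible stage of a vessel-enhancement pipeline)."""
    return frangi(volume, sigmas=np.arange(1, 6), black_ridges=False)

# Usage with a hypothetical angiographic volume.
volume = np.random.default_rng(0).random((64, 64, 64))
vesselness = enhance_vessels(volume)
```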

  14. Geospatial Characterization of Fluvial Wood Arrangement in a Semi-confined Alluvial River

    NASA Astrophysics Data System (ADS)

    Martin, D. J.; Harden, C. P.; Pavlowsky, R. T.

    2014-12-01

    Large woody debris (LWD) has become universally recognized as an integral component of fluvial systems and, as a result, has become increasingly common as a river restoration tool. However, "natural" processes of wood recruitment and the subsequent arrangement of LWD within the river network are poorly understood. This research used a suite of spatial statistics to investigate longitudinal arrangement patterns of LWD in a low-gradient, Midwestern river. First, a large-scale GPS inventory of LWD, performed on the Big River in the eastern Missouri Ozarks, yielded over 4,000 logged positions of LWD along seven river segments covering nearly 100 km of the 237-km river system. A global Moran's I analysis indicates that LWD density is spatially autocorrelated and displays a clustering tendency within all seven river segments (P-value range = 0.000 to 0.054). A local Moran's I analysis identified specific locations along the segments where clustering occurs and revealed that, on average, clusters of LWD density (high or low) spanned 400 m. Spectral analyses revealed that, in some segments, LWD density is spatially periodic. Two segments displayed strong periodicity, while the remaining segments displayed varying degrees of noisiness. Periodicity showed a positive association with gravel bar spacing and meander wavelength, although there were insufficient data to statistically confirm the relationship. A wavelet analysis was then performed to investigate periodicity relative to location along the segment; it identified significant (α = 0.05) periodicity at discrete locations along each of the segments. Reaches with strong periodicity showed stronger relationships between LWD density and the geomorphic/riparian independent variables tested. Analyses consistently identified valley width and sinuosity as being associated with LWD density. The results of these analyses contribute a new perspective on the longitudinal distribution of LWD in a river system, which should help identify physical and/or riparian control mechanisms of LWD arrangement and support the development of models of LWD arrangement. Additionally, the spatial statistical tools presented here proved valuable for identifying longitudinal patterns in river system components.
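    Global Moran's I underlies the clustering test reported above; a minimal sketch with hypothetical reach-level LWD counts and binary adjacency weights (not the study's data) follows.

```python
import numpy as np

def morans_i(x: np.ndarray, w: np.ndarray) -> float:
    """Global Moran's I for values x under spatial weight matrix w:
    I = (n / sum(w)) * sum_ij w_ij z_i z_j / sum_i z_i^2, with z = x - mean(x)."""
    z = x - x.mean()
    return len(x) / w.sum() * (w * np.outer(z, z)).sum() / (z @ z)

# Hypothetical LWD counts per reach along one segment; neighbours are
# adjacent reaches (binary contiguity weights, zero diagonal).
density = np.array([3, 4, 5, 9, 11, 10, 2, 1, 2, 6, 8, 7], dtype=float)
n = len(density)
w = np.zeros((n, n))
idx = np.arange(n - 1)
w[idx, idx + 1] = w[idx + 1, idx] = 1.0

# Values above the null expectation -1/(n-1) suggest spatial clustering.
print(f"Moran's I = {morans_i(density, w):.3f}")
```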

  15. Segmenting Dynamic Human Action via Statistical Structure

    ERIC Educational Resources Information Center

    Baldwin, Dare; Andersson, Annika; Saffran, Jenny; Meyer, Meredith

    2008-01-01

    Human social, cognitive, and linguistic functioning depends on skills for rapidly processing action. Identifying distinct acts within the dynamic motion flow is one basic component of action processing; for example, skill at segmenting action is foundational to action categorization, verb learning, and comprehension of novel action sequences. Yet…

  16. Renewal models and coseismic stress transfer in the Corinth Gulf, Greece, fault system

    NASA Astrophysics Data System (ADS)

    Console, Rodolfo; Falcone, Giuseppe; Karakostas, Vassilis; Murru, Maura; Papadimitriou, Eleftheria; Rhoades, David

    2013-07-01

    We model interevent times and Coulomb static stress transfer on the rupture segments along the Corinth Gulf extension zone, a region with a wealth of observations of strong-earthquake recurrence behavior. From the available information on past seismic activity, we have identified eight segments, without significant overlap, that are aligned along the southern boundary of the Corinth rift. We aim to test whether strong earthquakes on these segments are characterized by some kind of time-predictable behavior, rather than by complete randomness. The rationale for time-predictable behavior is based on the characteristic earthquake hypothesis, the necessary ingredients of which are a known faulting geometry and slip rate. The tectonic loading rate is characterized by slip of 6 mm/yr on the westernmost fault segment, diminishing to 4 mm/yr on the easternmost segment, based on the most reliable geodetic data. In this study, we employ statistical and physical modeling to account for stress transfer among these fault segments. The statistical modeling is based on the definition of a probability density distribution of the interevent times for each segment. Both the Brownian Passage-Time (BPT) and Weibull distributions are tested. The time-dependent hazard rate thus obtained is then modified by the inclusion of a permanent physical effect due to the Coulomb static stress change caused by failure of neighboring faults since the latest characteristic earthquake on the fault of interest. The validity of the renewal model is assessed retrospectively, using the data of the last 300 years, by comparison with a plain time-independent Poisson model, by means of statistical tools including the Relative Operating Characteristic diagram, the R-score, the probability gain, and the log-likelihood ratio. We treat the uncertainties in the parameters of each examined fault source, such as linear dimensions, depth of the fault center, focal mechanism, recurrence time, coseismic slip, and aperiodicity of the statistical distribution, by a Monte Carlo technique. The Monte Carlo samples for all these parameters are drawn from a uniform distribution within their uncertainty limits. We find that the BPT and Weibull renewal models yield comparable results, and both perform significantly better than the Poisson hypothesis. No clear performance enhancement is achieved by the introduction of the Coulomb static stress change into the renewal model.
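    The BPT renewal model is the inverse-Gaussian distribution parameterized by mean recurrence time and aperiodicity; a sketch of its time-dependent hazard rate, with hypothetical parameter values, follows (the scipy parametrization maps BPT mean m and aperiodicity a to invgauss(mu=a**2, scale=m/a**2)).

```python
import numpy as np
from scipy.stats import invgauss

def bpt_hazard(t, mean, alpha):
    """Hazard rate h(t) = pdf(t) / sf(t) of the Brownian Passage-Time model
    with mean recurrence `mean` and aperiodicity `alpha`."""
    dist = invgauss(mu=alpha**2, scale=mean / alpha**2)
    return dist.pdf(t) / dist.sf(t)

# Hypothetical fault: 120-yr mean recurrence, aperiodicity 0.5. The hazard
# rises with elapsed time, unlike the constant hazard 1/mean of a Poisson model.
t = np.linspace(1.0, 300.0, 5)
print(bpt_hazard(t, mean=120.0, alpha=0.5))
```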

  17. Unsupervised motion-based object segmentation refined by color

    NASA Astrophysics Data System (ADS)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    For various applications, such as data compression, structure from motion, medical imaging, and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low-complexity solution. For still images, several colour-based approaches exist, but these lack both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation, with many segments covering each single physical object. Other colour segmentation approaches limit the number of segments to reduce this oversegmentation problem, but this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real-world object segmentation, because real-world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and from the background. However, because efficient motion estimators such as the 3DRS block matcher lack sufficient resolution, the resulting segmentation is at block rather than pixel resolution. Existing pixel-resolution motion estimators are more sensitive to noise, suffer more from aperture problems, correspond less well to the true motion of objects than block-based approaches, or are too computationally expensive. From its tendency to oversegment, it is apparent that colour segmentation is particularly effective near the edges of homogeneously coloured areas. Block-based true motion estimation, on the other hand, is particularly effective in heterogeneous areas, because heterogeneity improves the chance that a block is unique and thus decreases the chance that a wrong position produces a good match. Consequently, a number of methods combine motion and colour segmentation: they use colour segmentation as a base for the motion segmentation and estimation, or perform an independent colour segmentation in parallel that is in some way combined with the motion segmentation. The presented method instead lets the two techniques complement each other by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge, few methods adopt this approach. One example is [meshrefine], which uses an irregular mesh that hinders efficient implementation in consumer electronics devices; furthermore, it produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects.
    NEW METHOD: We start with motion segmentation and afterwards refine the edges of this segmentation with a pixel-resolution colour segmentation method, for several reasons:
    + Motion segmentation does not produce the oversegmentation that colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. The colour segmentation therefore only has to be done at the edges of segments, confining it to a smaller part of the image in which the colour of an object is more likely to be homogeneous.
    + This approach restricts the computationally expensive pixel-resolution colour segmentation to a subset of the image; together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity.
    + The motion cue alone is often enough to reliably distinguish objects from one another and from the background.
    To obtain the motion vector fields, we used a variant of the 3DRS block-based motion estimator that analyses three frames of input; the 3DRS motion estimator is known for estimating motion vectors that closely resemble the true motion.
    BLOCK-BASED MOTION SEGMENTATION: We start with a block-resolution segmentation based on motion vectors, inspired by the well-known K-means segmentation method [K-means]. Several other methods (e.g. [kmeansc]) adapt K-means for connectedness by adding a weighted shape error, which introduces the additional difficulty of finding correct weights for the shape parameters and often biases one particular pre-defined shape. The presented method, which we call K-regions, encourages connectedness because only blocks at the edges of segments may be assigned to another segment. This constrains the segmentation to such a degree that least squares can be used for the robust fitting of an affine motion model for each segment. Contrary to [parmkm], the segmentation step still operates on vectors instead of model parameters. To keep the segmentation temporally consistent, the segmentation of the previous frame is used as initialisation for every new frame. We also present a scheme that makes the algorithm independent of the initially chosen number of segments.
    COLOUR-BASED INTRA-BLOCK SEGMENTATION: The block-resolution motion-based segmentation forms the starting point for the pixel-resolution segmentation, which is obtained by reclassifying pixels only at the edges of clusters. We assume that an edge between two objects can be found in one of two neighbouring blocks that belong to different clusters; this assumption allows us to do the pixel-resolution segmentation on each such pair of blocks separately. Because of the local nature of this segmentation, it largely avoids problems with heterogeneously coloured areas, and because no new segments are introduced, it does not suffer from oversegmentation; nor does it have problems with bifurcations. For the reclassification itself, we optimize an error norm that favours similarly coloured regions and straight edges.
    SEGMENTATION MEASURE: To assist in the evaluation of the proposed algorithm, we developed a quality metric. Because the problem has no exact specification, we define a ground-truth output that we find desirable for a given input, and we measure segmentation quality as the difference between the segmentation and this ground truth. The measure evaluates oversegmentation and undersegmentation separately, and it identifies which parts of a frame suffer from each. The proposed algorithm has been tested on several typical sequences.
    CONCLUSIONS: We presented a new video segmentation method that performs well in segmenting multiple independently moving foreground objects from each other and from the background, combining the strong points of both colour and motion segmentation in the way we expected. One weak point is that the method suffers from undersegmentation when adjacent objects display similar motion; in sequences with detailed backgrounds, the segmentation sometimes displays noisy edges. Apart from these results, we think that some of the techniques, in particular the K-regions technique, may be useful for other two-dimensional data segmentation problems.
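    K-regions itself constrains reassignment to segment edges and fits affine motion models per segment; as a crude stand-in for that block-level step, the sketch below runs plain K-means on a synthetic block-resolution motion field to produce a block-level object map (field, sizes, and cluster count are assumptions).

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical block-resolution motion field (36 x 64 blocks, 2 vector
# components), e.g. as produced by a 3DRS-style block matcher.
rng = np.random.default_rng(1)
motion = rng.normal(scale=0.3, size=(36, 64, 2))
motion[10:25, 20:40] += (4.0, 0.0)  # a foreground object moving right

# Cluster the motion vectors; unlike K-regions, this imposes no
# connectedness constraint, so it is only an approximation of the idea.
vectors = motion.reshape(-1, 2)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
segmentation = labels.reshape(36, 64)  # block-resolution object map
```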

  18. Computer assisted detection and analysis of tall cell variant papillary thyroid carcinoma in histological images

    NASA Astrophysics Data System (ADS)

    Kim, Edward; Baloch, Zubair; Kim, Caroline

    2015-03-01

    The number of new cases of thyroid cancer is increasing dramatically: incidence has more than doubled since the early 1970s. Tall cell variant papillary thyroid carcinoma (TCV-PTC) is a more aggressive type of thyroid cancer, usually associated with higher local recurrence and distant metastasis. This variant can be identified through visual characteristics of cells in histological images. We therefore created a fully automatic algorithm that segments cells using a multi-stage approach. Our method learns the statistical characteristics of nuclei and cells during the segmentation process and utilizes this information for a more accurate result. Furthermore, we analyze the detected regions and extract characteristic cell data that can be used to assist in clinical diagnosis.

  19. Probabilistic atlas and geometric variability estimation to drive tissue segmentation.

    PubMed

    Xu, Hao; Thirion, Bertrand; Allassonnière, Stéphanie

    2014-09-10

    Computerized anatomical atlases play an important role in medical image analysis. An atlas usually refers to a standard or mean image, also called a template, which presumably represents a given population well; a template alone, however, is not enough to characterize the observed population in detail. The template image should be learned jointly with the geometric variability of the shapes represented in the observations, and these two quantities together form the atlas of the corresponding population. The geometric variability is modeled as deformations of the template image so that it fits the observations. In this paper, we provide a detailed analysis of a new generative statistical model, based on dense deformable templates, that represents several tissue types observed in medical images. Our atlas contains both an estimate of the probability map of each tissue (called a class) and the deformation metric. We use a stochastic algorithm to estimate the probabilistic atlas from a dataset; this atlas is then used in an atlas-based segmentation method to segment new images. Experiments are shown on brain T1 MRI datasets. Copyright © 2014 John Wiley & Sons, Ltd.
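    Stripped of the template estimation and deformation modeling, atlas-based tissue labeling reduces to a voxelwise Bayes rule that combines intensity likelihoods with registered atlas priors; the sketch below shows only that minimal step, not the paper's stochastic estimation algorithm.

```python
import numpy as np

def posterior_tissue_maps(likelihood: np.ndarray, atlas_prior: np.ndarray) -> np.ndarray:
    """Voxelwise Bayes rule: posterior proportional to per-class intensity
    likelihood times registered atlas prior (both shaped classes x voxels)."""
    unnorm = likelihood * atlas_prior
    return unnorm / unnorm.sum(axis=0, keepdims=True)

# Hypothetical 3-class example (e.g. GM/WM/CSF) over 1000 voxels.
rng = np.random.default_rng(0)
likelihood = rng.random((3, 1000))
atlas_prior = rng.dirichlet(np.ones(3), size=1000).T  # priors sum to 1 per voxel
posterior = posterior_tissue_maps(likelihood, atlas_prior)
labels = posterior.argmax(axis=0)  # hard segmentation from the posterior maps
```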

  20. Location Sensitive Deep Convolutional Neural Networks for Segmentation of White Matter Hyperintensities.

    PubMed

    Ghafoorian, Mohsen; Karssemeijer, Nico; Heskes, Tom; van Uden, Inge W M; Sanchez, Clara I; Litjens, Geert; de Leeuw, Frank-Erik; van Ginneken, Bram; Marchiori, Elena; Platel, Bram

    2017-07-11

    The anatomical location of imaging features is of crucial importance for accurate diagnosis in many medical tasks. Convolutional neural networks (CNNs) have had huge success in computer vision, but they lack a natural ability to incorporate anatomical location into their decision-making process, which hinders success in some medical image analysis tasks. In this paper, to integrate anatomical location information into the network, we propose several deep CNN architectures that consider multi-scale patches or take explicit location features as input during training. We apply and compare the proposed architectures for segmentation of white matter hyperintensities in brain MR images on a large dataset. We observe that the CNNs that incorporate location information substantially outperform both a conventional segmentation method with handcrafted features and CNNs that do not integrate location information. On a test set of 50 scans, the best configuration of our networks obtained a Dice score of 0.792, compared with 0.805 for an independent human observer; the performance of the machine and the independent human observer was not statistically significantly different (p-value = 0.06).
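    The architectural idea of fusing convolutional patch features with explicit location features can be sketched as below (PyTorch); the layer sizes, patch size, and the three location inputs are illustrative assumptions, not the authors' exact networks.

```python
import torch
import torch.nn as nn

class PatchLocationNet(nn.Module):
    """Minimal sketch: concatenate CNN patch features with explicit
    spatial-location features before the classifier."""
    def __init__(self, n_location_features: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(
            nn.Linear(32 + n_location_features, 64), nn.ReLU(),
            nn.Linear(64, 2),  # hyperintensity vs. background for the centre voxel
        )

    def forward(self, patch: torch.Tensor, location: torch.Tensor) -> torch.Tensor:
        f = self.features(patch).flatten(1)          # (batch, 32)
        return self.classifier(torch.cat([f, location], dim=1))

# Usage: a batch of 32x32 FLAIR patches plus normalized (x, y, z) coordinates.
logits = PatchLocationNet()(torch.randn(8, 1, 32, 32), torch.rand(8, 3))
```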
