Automated segmentation and dose-volume analysis with DICOMautomaton
NASA Astrophysics Data System (ADS)
Clark, H.; Thomas, S.; Moiseenko, V.; Lee, R.; Gill, B.; Duzenli, C.; Wu, J.
2014-03-01
Purpose: Exploration of historical data for regional organ dose sensitivity is limited by the effort needed to (sub-)segment large numbers of contours. A system has been developed which can rapidly perform autonomous contour sub-segmentation and generic dose-volume computations, substantially reducing the effort required for exploratory analyses. Methods: A contour-centric approach is taken which enables lossless, reversible segmentation and dramatically reduces computation time compared with voxel-centric approaches. Segmentation can be specified on a per-contour, per-organ, or per-patient basis, and can be performed along either an embedded plane or in terms of the contour's bounds (e.g., split an organ into fractional-volume/dose pieces along any 3D unit vector). More complex segmentation techniques are available. Anonymized data from 60 head-and-neck cancer patients were used to compare dose-volume computations with Varian's Eclipse™ (Varian Medical Systems, Inc.). Results: Computed mean doses and dose-volume histograms agree strongly with those from Varian's Eclipse™. Contours which have been segmented can be injected back into patient data permanently and in a Digital Imaging and Communications in Medicine (DICOM)-conforming manner. Lossless segmentation persists across such injection, and remains fully reversible. Conclusions: DICOMautomaton allows researchers to rapidly, accurately, and autonomously segment large amounts of data into intricate structures suitable for analyses of regional organ dose sensitivity.
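As a concrete illustration of the generic dose-volume computations the abstract refers to (the contour-centric implementation inside DICOMautomaton is not described here, so this is only a voxel-based sketch; the array names and the gamma-distributed toy doses are invented):

```python
import numpy as np

def cumulative_dvh(structure_doses, bin_width=0.1):
    """Cumulative dose-volume histogram from the per-voxel doses (Gy) of one structure.

    Returns (dose_bins, fractional_volume), where fractional_volume[i] is the
    fraction of the structure receiving at least dose_bins[i]."""
    doses = np.asarray(structure_doses, dtype=float)
    edges = np.arange(0.0, doses.max() + bin_width, bin_width)
    frac = np.array([(doses >= d).mean() for d in edges])
    return edges, frac

# Toy usage: mean dose and D50 (minimum dose covering 50% of the structure).
doses = np.random.gamma(20.0, 2.0, size=10_000)   # stand-in per-voxel doses, not real data
bins, vol = cumulative_dvh(doses)
d50 = bins[np.searchsorted(-vol, -0.5)]           # first bin where coverage drops to 50% or less
print(f"mean dose = {doses.mean():.2f} Gy, D50 = {d50:.2f} Gy")
```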
NASA Astrophysics Data System (ADS)
Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart
2015-02-01
This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92) to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
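The abstract reports accuracy as percent volumetric overlap with the STAPLE-derived ground truth; the exact overlap definition is not given in the abstract, so the sketch below, on synthetic masks, shows two plausible readings: the Dice coefficient and the fraction of the ground-truth volume recovered.

```python
import numpy as np

def overlap_metrics(auto_mask, gt_mask):
    """Volumetric agreement between an auto-segmented mask and a ground-truth mask.

    Both inputs are boolean 3D arrays on the same voxel grid."""
    auto_mask = np.asarray(auto_mask, dtype=bool)
    gt_mask = np.asarray(gt_mask, dtype=bool)
    inter = np.logical_and(auto_mask, gt_mask).sum()
    dice = 2.0 * inter / (auto_mask.sum() + gt_mask.sum())
    gt_coverage = inter / gt_mask.sum()   # fraction of the ground-truth volume recovered
    return dice, gt_coverage

# Toy example on a 3D grid (both masks are synthetic stand-ins).
gt = np.zeros((32, 32, 32), dtype=bool); gt[8:24, 8:24, 8:24] = True
auto = np.zeros_like(gt); auto[9:25, 8:24, 8:24] = True
print(overlap_metrics(auto, gt))
```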
Farooq, Zerwa; Behzadi, Ashkan Heshmatzadeh; Blumenfeld, Jon D; Zhao, Yize; Prince, Martin R
To compare MRI segmentation methods for measuring liver cyst volumes in autosomal dominant polycystic kidney disease (ADPKD). Liver cyst volumes in 42 ADPKD patients were measured using region growing, thresholding and cyst diameter techniques. Manual segmentation was the reference standard. Root mean square deviation was 113, 155, and 500 for cyst diameter, thresholding and region growing, respectively. Thresholding error for cyst volumes below 500 ml was 550% vs 17% for cyst volumes above 500 ml (p<0.001). For measuring the volume of a small number of cysts, cyst diameter and manual segmentation methods are recommended. For severe disease with numerous, large hepatic cysts, thresholding is an acceptable alternative. Copyright © 2017 Elsevier Inc. All rights reserved.
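The abstract does not state how cyst diameters are converted to volumes; the sketch below assumes a simple spherical model (V = pi d^3 / 6) purely for illustration, together with the root-mean-square deviation used to compare methods against manual segmentation. All numbers are hypothetical.

```python
import numpy as np

def cyst_volume_from_diameters(diameters_cm):
    """Approximate total cyst volume (ml) from per-cyst diameters (cm),
    treating each cyst as a sphere: V = pi * d**3 / 6, with 1 cm^3 == 1 ml.
    The spherical assumption is ours, not necessarily the paper's."""
    d = np.asarray(diameters_cm, dtype=float)
    return float(np.sum(np.pi * d**3 / 6.0))

def rmsd(estimates_ml, reference_ml):
    """Root-mean-square deviation of volume estimates against a manual reference."""
    e = np.asarray(estimates_ml, float)
    r = np.asarray(reference_ml, float)
    return float(np.sqrt(np.mean((e - r) ** 2)))

print(cyst_volume_from_diameters([2.0, 3.5, 1.2]))   # ml for three hypothetical cysts
print(rmsd([480.0, 1250.0], [500.0, 1200.0]))        # hypothetical per-patient totals
```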
NASA Astrophysics Data System (ADS)
Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry
2015-11-01
In epidemiological studies as well as in clinical practice the amount of produced medical image data strongly increased in the last decade. In this context organ segmentation in MR volume data gained increasing attention for medical applications. Especially in large-scale population-based studies organ volumetry is highly relevant requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are refined subsequently by using several, extended segmentation strategies. We present a three class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high quality subject-specific parenchyma probability maps. Several refinement strategies including a final shape-based 3D level set segmentation technique are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from parenchymal volume, which is important to analyze renal functions. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.
3D geometric split-merge segmentation of brain MRI datasets.
Marras, Ioannis; Nikolaidis, Nikolaos; Pitas, Ioannis
2014-05-01
In this paper, a novel method for MRI volume segmentation based on region adaptive splitting and merging is proposed. The method, called Adaptive Geometric Split Merge (AGSM) segmentation, aims at finding complex geometrical shapes that consist of homogeneous geometrical 3D regions. In each volume splitting step, several splitting strategies are examined and the most appropriate is activated. A way to find the maximal homogeneity axis of the volume is also introduced. Along this axis, the volume splitting technique divides the entire volume in a number of large homogeneous 3D regions, while at the same time, it defines more clearly small homogeneous regions within the volume in such a way that they have greater probabilities of survival at the subsequent merging step. Region merging criteria are proposed to this end. The presented segmentation method has been applied to brain MRI medical datasets to provide segmentation results when each voxel is composed of one tissue type (hard segmentation). The volume splitting procedure does not require training data, while it demonstrates improved segmentation performance in noisy brain MRI datasets, when compared to the state of the art methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Deng, Xiang; Huang, Haibin; Zhu, Lei; Du, Guangwei; Xu, Xiaodong; Sun, Yiyong; Xu, Chenyang; Jolly, Marie-Pierre; Chen, Jiuhong; Xiao, Jie; Merges, Reto; Suehling, Michael; Rinck, Daniel; Song, Lan; Jin, Zhengyu; Jiang, Zhaoxia; Wu, Bin; Wang, Xiaohong; Zhang, Shuai; Peng, Weijun
2008-03-01
Comprehensive quantitative evaluation of a tumor segmentation technique on large-scale clinical data sets is crucial for routine clinical use of CT-based tumor volumetry for cancer diagnosis and treatment response evaluation. In this paper, we present a systematic validation study of a semi-automatic image segmentation technique for measuring tumor volume from CT images. The segmentation algorithm was tested using clinical data of 200 tumors in 107 patients with liver, lung, lymphoma and other types of cancer. The performance was evaluated using both accuracy and reproducibility. The accuracy was assessed using 7 commonly used metrics that can provide complementary information regarding the quality of the segmentation results. The reproducibility was measured by the variation of the volume measurements from 10 independent segmentations. The effects of disease type, lesion size and slice thickness of the image data on the accuracy measures were also analyzed. Our results demonstrate that the tumor segmentation algorithm showed good correlation with ground truth for all four lesion types (r = 0.97, 0.99, 0.97, 0.98, p < 0.0001 for liver, lung, lymphoma and other, respectively). The segmentation algorithm can produce relatively reproducible volume measurements on all lesion types (coefficient of variation in the range of 10-20%). Our results show that the algorithm is insensitive to lesion size (coefficient of determination close to 0) and slice thickness of the image data (p > 0.90). The validation framework used in this study has the potential to facilitate the development of new tumor segmentation algorithms and assist large-scale evaluation of segmentation techniques for other clinical applications.
Probabilistic brain tissue segmentation in neonatal magnetic resonance imaging.
Anbeek, Petronella; Vincken, Koen L; Groenendaal, Floris; Koeman, Annemieke; van Osch, Matthias J P; van der Grond, Jeroen
2008-02-01
A fully automated method has been developed for segmentation of four different structures in the neonatal brain: white matter (WM), central gray matter (CEGM), cortical gray matter (COGM), and cerebrospinal fluid (CSF). The segmentation algorithm is based on information from T2-weighted (T2-w) and inversion recovery (IR) scans. The method uses a K nearest neighbor (KNN) classification technique with features derived from spatial information and voxel intensities. Probabilistic segmentations of each tissue type were generated. By applying thresholds on these probability maps, binary segmentations were obtained. These final segmentations were evaluated by comparison with a gold standard. The sensitivity, specificity, and Dice similarity index (SI) were calculated for quantitative validation of the results. High sensitivity and specificity with respect to the gold standard were reached: sensitivity >0.82 and specificity >0.9 for all tissue types. Tissue volumes were calculated from the binary and probabilistic segmentations. The probabilistic segmentation volumes of all tissue types accurately estimated the gold standard volumes. The KNN approach offers valuable ways for neonatal brain segmentation. The probabilistic outcomes provide a useful tool for accurate volume measurements. The described method is based on routine diagnostic magnetic resonance imaging (MRI) and is suitable for large population studies.
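A minimal sketch of the kind of KNN-based probabilistic segmentation described above, using scikit-learn on made-up per-voxel features (two intensity channels plus coordinates); the feature set, neighbor count and label coding are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_train = 5000
# Stand-in features per voxel: [t2_intensity, ir_intensity, x, y, z].
X_train = rng.normal(size=(n_train, 5))
y_train = rng.integers(0, 4, size=n_train)   # invented coding: 0=WM, 1=CEGM, 2=COGM, 3=CSF

knn = KNeighborsClassifier(n_neighbors=15)
knn.fit(X_train, y_train)

X_new = rng.normal(size=(1000, 5))           # voxels of a new scan
prob_maps = knn.predict_proba(X_new)         # per-class probabilistic segmentation
csf_binary = prob_maps[:, 3] > 0.5           # threshold one probability map (columns follow sorted labels)
print(prob_maps.shape, csf_binary.sum())
```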
Lu, Chao; Zheng, Yefeng; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Tietjen, Christian; Boettger, Thomas; Duncan, James S; Zhou, S Kevin
2012-01-01
In this paper, we present a novel method incorporating information theory into a learning-based approach for automatic and accurate pelvic organ segmentation (including the prostate, bladder and rectum). We target 3D CT volumes that are generated using different scanning protocols (e.g., contrast and non-contrast, with and without implant in the prostate, various resolutions and positions), and the volumes come from largely diverse sources (e.g., diseased in different organs). Three key ingredients are combined to solve this challenging segmentation problem. First, marginal space learning (MSL) is applied to efficiently and effectively localize the multiple organs in the largely diverse CT volumes. Second, learning-based boundary detection using steerable features is applied, which enables robust handling of highly heterogeneous texture patterns. Third, a novel information-theoretic scheme is incorporated into the boundary inference process. The incorporation of the Jensen-Shannon divergence further drives the mesh to the best fit of the image, thus improving segmentation performance. The proposed approach is tested on a challenging dataset containing 188 volumes from diverse sources. Our approach not only produces excellent segmentation accuracy, but also runs about eighty times faster than previous state-of-the-art solutions. The proposed method can be applied to CT images to provide visual guidance to physicians during computer-aided diagnosis, treatment planning and image-guided radiotherapy to treat cancers in the pelvic region.
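For readers unfamiliar with the Jensen-Shannon divergence used in the boundary inference, a small sketch computing it between two intensity histograms (for example, inside versus outside a candidate boundary); the histograms here are synthetic and the usage is only indicative of how such a term could be evaluated, not the paper's implementation.

```python
import numpy as np

def jensen_shannon_divergence(p, q, eps=1e-12):
    """JSD between two discrete distributions, e.g. intensity histograms
    inside and outside a candidate organ boundary."""
    p = np.asarray(p, float) + eps; p /= p.sum()
    q = np.asarray(q, float) + eps; q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))   # Kullback-Leibler divergence
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

inside = np.histogram(np.random.normal(60, 10, 5000), bins=64, range=(0, 255))[0]
outside = np.histogram(np.random.normal(120, 25, 5000), bins=64, range=(0, 255))[0]
print(jensen_shannon_divergence(inside, outside))
```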
Random forest classification of large volume structures for visuo-haptic rendering in CT images
NASA Astrophysics Data System (ADS)
Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz
2016-03-01
For patient-specific voxel-based visuo-haptic rendering of CT scans of the liver area, the fully automatic segmentation of large volume structures such as skin, soft tissue, lungs and intestine (risk structures) is important. Using a machine-learning-based approach, several existing segmentations from 10 segmented gold-standard patients are learned by random decision forests individually and collectively. The core of this paper is feature selection and the application of the learned classifiers to a new patient data set. In a leave-some-out cross-validation, the obtained full volume segmentations are compared to the gold-standard segmentations of the untrained patients. The proposed classifiers use a multi-dimensional feature space to estimate the hidden truth, instead of relying on clinical standard threshold- and connectivity-based methods. The results of our efficient whole-body section classification are multi-label maps of the considered tissues. For visuo-haptic simulation, other small volume structures would have to be segmented additionally. We also take a look into these structures (liver vessels). For an experimental leave-some-out study consisting of 10 patients, the proposed method performs much more efficiently compared to state-of-the-art methods. In two variants of leave-some-out experiments we obtain best mean DICE ratios of 0.79, 0.97, 0.63 and 0.83 for skin, soft tissue, hard bone and risk structures. Liver structures are segmented with DICE 0.93 for the liver, 0.43 for blood vessels and 0.39 for bile vessels.
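A minimal sketch of per-voxel random-forest classification and a DICE score, in the spirit of the approach above but with random stand-in features and labels; the feature set, class coding and forest parameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dice(a, b):
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

rng = np.random.default_rng(1)
# Stand-in per-voxel feature vectors (e.g. intensity, smoothed intensity, gradient, position).
X_train = rng.normal(size=(20000, 6))
y_train = rng.integers(0, 4, size=20000)   # invented coding: 0=background, 1=skin, 2=soft tissue, 3=bone

forest = RandomForestClassifier(n_estimators=50, max_depth=12, n_jobs=-1, random_state=0)
forest.fit(X_train, y_train)

X_test = rng.normal(size=(5000, 6))
pred = forest.predict(X_test)              # multi-label map (flattened voxels)
gt = rng.integers(0, 4, size=5000)         # stand-in gold standard
print(dice(pred == 3, gt == 3))            # DICE ratio for the "bone" label
```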
Automatic segmentation of tumor-laden lung volumes from the LIDC database
NASA Astrophysics Data System (ADS)
O'Dell, Walter G.
2012-03-01
The segmentation of the lung parenchyma is often a critical pre-processing step prior to application of computer-aided detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the number of false positive detections by excluding from consideration extra-pulmonary tissue. However, while many algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably well on tumor-laden lungs. A particular challenge is to preserve tumorous masses attached to the chest wall, mediastinum or major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been developed with the goals of maximally excluding extra-pulmonary tissue while retaining all true nodules. The algorithm comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D floodfilling, and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude trachea and bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large masses fully internal to the lung volume, (5) account for basal aspects of the lung where in a 2-D slice the lower sections appear to be disconnected from the main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm was developed and trained on the first 100 datasets of the LIDC image database.
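A simplified sketch of the early stages such a pipeline typically shares (intensity thresholding, connected components, morphological closing and hole filling); it does not attempt the snake-based clipping or hemi-lung separation described above, and the threshold and toy volume are invented.

```python
import numpy as np
from scipy import ndimage as ndi

def rough_lung_mask(ct_hu, air_threshold=-400):
    """Very rough lung mask from a CT volume in Hounsfield units: threshold air,
    drop components touching the volume border (outside-body air), keep the two
    largest remaining components, then close and fill holes."""
    air = ct_hu < air_threshold
    labels, n = ndi.label(air)
    border = np.unique(np.concatenate([
        labels[0].ravel(), labels[-1].ravel(),
        labels[:, 0].ravel(), labels[:, -1].ravel(),
        labels[:, :, 0].ravel(), labels[:, :, -1].ravel()]))
    sizes = np.bincount(labels.ravel())[1:]                  # sizes of labels 1..n
    keep = [lab for lab in np.argsort(sizes)[::-1] + 1 if lab not in border][:2]
    mask = np.isin(labels, keep)
    mask = ndi.binary_closing(mask, iterations=2)
    return ndi.binary_fill_holes(mask)

ct = np.full((40, 128, 128), 40, dtype=np.int16)             # soft-tissue background
ct[5:35, 30:60, 20:60] = -800                                # fake left lung
ct[5:35, 30:60, 68:108] = -800                               # fake right lung
print(rough_lung_mask(ct).sum())
```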
Looney, Pádraig; Stevenson, Gordon N; Nicolaides, Kypros H; Plasencia, Walter; Molloholli, Malid; Natsis, Stavros; Collins, Sally L
2018-06-07
We present a new technique to fully automate the segmentation of an organ from 3D ultrasound (3D-US) volumes, using the placenta as the target organ. Image analysis tools to estimate organ volume do exist but are too time-consuming and operator-dependent. Fully automating the segmentation process would potentially allow the use of placental volume to screen for increased risk of pregnancy complications. The placenta was segmented from 2,393 first-trimester 3D-US volumes using a semiautomated technique. This was quality controlled by three operators to produce the "ground-truth" data set. A fully convolutional neural network (OxNNet) was trained using this ground-truth data set to automatically segment the placenta. OxNNet delivered state-of-the-art automatic segmentation. The effect of training set size on the performance of OxNNet demonstrated the need for large data sets. The clinical utility of placental volume was tested by looking at predictions of small-for-gestational-age (SGA) babies at term. The receiver-operating characteristic curves demonstrated almost identical results between OxNNet and the ground truth. Our results demonstrated good similarity to the ground-truth and almost identical clinical results for the prediction of SGA.
Automatic partitioning of head CTA for enabling segmentation
NASA Astrophysics Data System (ADS)
Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin
2004-05-01
Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit the volumetric capabilities of CT, which provides complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the "middle" partition that remains the most challenging region for segmentation. The partition algorithm is used to automatically identify these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolution, clinical sites, pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (an impressive 0.01 seconds / slice), which makes it an attractive algorithm for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions. Complex methods are restricted to only the "middle" partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.
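The partition locations above are found from bone, entropy and sinus profiles combined with rules; as an illustration of one ingredient only, a sketch of a per-slice image-entropy profile along the scan axis (the rule-based decision logic itself is not reproduced here, and the test volume is random stand-in data).

```python
import numpy as np

def slice_entropy_profile(volume, bins=64):
    """Shannon entropy of the intensity histogram of each axial slice.

    Profiles like this, together with bone-intensity counts, can be scanned by
    simple rules to pick partition locations along the scan axis."""
    lo, hi = float(volume.min()), float(volume.max())
    profile = []
    for z in range(volume.shape[0]):
        hist, _ = np.histogram(volume[z], bins=bins, range=(lo, hi))
        p = hist / hist.sum()
        p = p[p > 0]
        profile.append(-np.sum(p * np.log2(p)))
    return np.array(profile)

vol = np.random.randint(-1000, 2000, size=(120, 64, 64)).astype(np.int16)   # stand-in CTA
entropy = slice_entropy_profile(vol)
print(entropy.shape, entropy[:5])
```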
NASA Astrophysics Data System (ADS)
Alp, Murat; Cucinotta, Francis A.
2017-05-01
Changes to cognition, including memory, following radiation exposure are a concern for cosmic ray exposures to astronauts and in Hadron therapy with proton and heavy ion beams. The purpose of the present work is to develop computational methods to evaluate microscopic energy deposition (ED) in volumes representative of neuron cell structures, including segments of dendrites and spines, using a stochastic track structure model. A challenge for biophysical models of neuronal damage is the large sizes (> 100 μm) and variability in volumes of possible dendritic segments and pre-synaptic elements (spines and filopodia). We consider cylindrical and spherical microscopic volumes of varying geometric parameters and aspect ratios from 0.5 to 5 irradiated by protons, and 3He and 12C particles at energies corresponding to a distance of 1 cm to the Bragg peak, which represent particles of interest in Hadron therapy as well as space radiation exposure. We investigate the optimal axis length of dendritic segments to evaluate microscopic ED and hit probabilities along the dendritic branches at a given macroscopic dose. Because of large computation times to analyze ED in volumes of varying sizes, we developed an analytical method to find the mean primary dose in spheres that can guide numerical methods to find the primary dose distribution for cylinders. Considering cylindrical segments of varying aspect ratio at constant volume, we assess the chord length distribution, mean number of hits and ED profiles by primary particles and secondary electrons (δ-rays). For biophysical modeling applications, segments on dendritic branches are proposed to have equal diameters and axes lengths along the varying diameter of a dendritic branch.
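The abstract mentions chord-length distributions and an analytical mean primary dose; one standard integral-geometry relation often used as a cross-check in such microdosimetric geometry is Cauchy's mean chord length formula, quoted below for convex volumes under isotropic uniform randomness (shown for context, not as the authors' derivation).

```latex
% Cauchy's mean chord length formula for a convex body (isotropic uniform randomness),
% a standard microdosimetry relation; given here only as background, not the paper's method.
\bar{\ell} = \frac{4V}{S}
\qquad\Rightarrow\qquad
\bar{\ell}_{\mathrm{sphere}} = \frac{4R}{3},
\qquad
\bar{\ell}_{\mathrm{cylinder}} = \frac{4\pi r^{2} h}{2\pi r (r+h)} = \frac{2 r h}{r + h}.
```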
NASA Astrophysics Data System (ADS)
Karasawa, Kenichi; Oda, Masahiro; Hayashi, Yuichiro; Nimura, Yukitaka; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Rueckert, Daniel; Mori, Kensaku
2015-03-01
Abdominal organ segmentations from CT volumes are now widely used in computer-aided diagnosis and surgery assistance systems. Among abdominal organs, the pancreas is especially difficult to segment because of large individual differences in its shape and position. In this paper, we propose a new pancreas segmentation method from 3D abdominal CT volumes using patient-specific weighted-subspatial probabilistic atlases. First of all, we perform normalization of organ shapes in training volumes and an input volume. We extract the Volume Of Interest (VOI) of the pancreas from the training volumes and an input volume. We divide each training VOI and input VOI into some cubic regions. We use a nonrigid registration method to register these cubic regions of the training VOI to corresponding regions of the input VOI. Based on the registration results, we calculate similarities between each cubic region of the training VOI and corresponding region of the input VOI. We select cubic regions of training volumes having the top N similarities in each cubic region. We subspatially construct probabilistic atlases weighted by the similarities in each cubic region. After integrating these probabilistic atlases in cubic regions into one, we perform a rough-to-precise segmentation of the pancreas using the atlas. The results of the experiments showed that utilization of the training volumes having the top N similarities in each cubic region led to good pancreas segmentation results. The Jaccard Index and the average surface distance of the result were 58.9% and 2.04 mm on average, respectively.
Jurrus, Elizabeth; Watanabe, Shigeki; Giuly, Richard J.; Paiva, Antonio R. C.; Ellisman, Mark H.; Jorgensen, Erik M.; Tasdizen, Tolga
2013-01-01
Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of this data makes human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images and visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes. PMID:22644867
AUTOMATED CELL SEGMENTATION WITH 3D FLUORESCENCE MICROSCOPY IMAGES.
Kong, Jun; Wang, Fusheng; Teodoro, George; Liang, Yanhui; Zhu, Yangyang; Tucker-Burden, Carol; Brat, Daniel J
2015-04-01
A large number of cell-oriented cancer investigations require an effective and reliable cell segmentation method on three dimensional (3D) fluorescence microscopic images for quantitative analysis of cell biological properties. In this paper, we present a fully automated cell segmentation method that can detect cells from 3D fluorescence microscopic images. Enlightened by fluorescence imaging techniques, we regulated the image gradient field by gradient vector flow (GVF) with interpolated and smoothed data volume, and grouped voxels based on gradient modes identified by tracking GVF field. Adaptive thresholding was then applied to voxels associated with the same gradient mode where voxel intensities were enhanced by a multiscale cell filter. We applied the method to a large volume of 3D fluorescence imaging data of human brain tumor cells with (1) small cell false detection and missing rates for individual cells; and (2) trivial over and under segmentation incidences for clustered cells. Additionally, the concordance of cell morphometry structure between automated and manual segmentation was encouraging. These results suggest a promising 3D cell segmentation method applicable to cancer studies.
Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.
Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku
2017-07-01
Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%. Copyright © 2017 Elsevier B.V. All rights reserved.
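A schematic sketch of top-N atlas selection followed by majority-vote label fusion; here the similarity is reduced to a plain feature-vector distance, whereas the paper derives it from vessel structure after nonrigid registration, so every array, distance measure and threshold below is a stand-in.

```python
import numpy as np

def select_and_fuse_atlases(target_feature, atlas_features, atlas_labels, top_n=5):
    """Pick the top-N atlases most similar to the target (here by simple
    feature-vector distance) and fuse their label maps by majority voting.

    atlas_features: (n_atlases, n_features); atlas_labels: (n_atlases, n_voxels) boolean."""
    dist = np.linalg.norm(atlas_features - target_feature, axis=1)
    chosen = np.argsort(dist)[:top_n]
    votes = atlas_labels[chosen].mean(axis=0)   # per-voxel fraction of selected atlases voting "pancreas"
    return votes >= 0.5                         # fused binary segmentation

rng = np.random.default_rng(0)
atlas_feats = rng.normal(size=(30, 16))         # stand-in descriptors (e.g. of vessel structure)
atlas_labs = rng.random((30, 1000)) > 0.7       # stand-in per-voxel atlas labels
target_feat = rng.normal(size=16)
print(select_and_fuse_atlases(target_feat, atlas_feats, atlas_labs).sum())
```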
Automated 3D renal segmentation based on image partitioning
NASA Astrophysics Data System (ADS)
Yeghiazaryan, Varduhi; Voiculescu, Irina D.
2016-03-01
Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context, and still comes at a great expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients, true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of Hounsfield unit distribution in the scan, and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
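The volume-based and size-based measures named above are simple to compute from two binary masks; a sketch with synthetic box-shaped masks follows (the definitions follow common usage and may differ in detail from the paper's).

```python
import numpy as np

def volume_similarity(seg, ref):
    """Volume-based similarity measures between an automated segmentation and a
    hand-segmented gold standard (both boolean arrays on the same grid)."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    return {
        "dice": 2.0 * inter / (seg.sum() + ref.sum()),
        "jaccard": inter / union,
        "true_positive_volume_fraction": inter / ref.sum(),
        "relative_volume_difference": (seg.sum() - ref.sum()) / ref.sum(),
    }

ref = np.zeros((64, 64, 64), bool); ref[20:44, 20:44, 20:44] = True
seg = np.zeros_like(ref); seg[22:46, 20:44, 20:44] = True
print(volume_similarity(seg, ref))
```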
Huo, Yuankai; Xu, Zhoubing; Bao, Shunxing; Bermudez, Camilo; Plassard, Andrew J.; Liu, Jiaqi; Yao, Yuang; Assad, Albert; Abramson, Richard G.; Landman, Bennett A.
2018-01-01
Spleen volume estimation using automated image segmentation techniques may be used to detect splenomegaly (abnormally enlarged spleen) on Magnetic Resonance Imaging (MRI) scans. In recent years, Deep Convolutional Neural Network (DCNN) segmentation methods have demonstrated advantages for abdominal organ segmentation. However, variations in both size and shape of the spleen on MRI images may result in large false positive and false negative labeling when deploying DCNN based methods. In this paper, we propose the Splenomegaly Segmentation Network (SSNet) to address spatial variations when segmenting extraordinarily large spleens. SSNet was designed based on the framework of image-to-image conditional generative adversarial networks (cGAN). Specifically, the Global Convolutional Network (GCN) was used as the generator to reduce false negatives, while the Markovian discriminator (PatchGAN) was used to alleviate false positives. A cohort of clinically acquired 3D MRI scans (both T1 weighted and T2 weighted) from patients with splenomegaly were used to train and test the networks. The experimental results demonstrated a mean Dice coefficient of 0.9260 and a median Dice coefficient of 0.9262 when using SSNet on independently tested MRI volumes of patients with splenomegaly.
NASA Astrophysics Data System (ADS)
He, Nana; Zhang, Xiaolong; Zhao, Juanjuan; Zhao, Huilan; Qiang, Yan
2017-07-01
While the popular thin-layer scanning technology of spiral CT has helped to improve diagnoses of lung diseases, the large volumes of scanning images produced by the technology also dramatically increase the load of physicians in lesion detection. Computer-aided diagnosis techniques like lesion segmentation in thin CT sequences have been developed to address this issue, but it remains a challenge to achieve high segmentation efficiency and accuracy without substantial manual intervention. In this paper, we present our research on automated segmentation of lung parenchyma with an improved geodesic active contour model, the geodesic active contour model based on similarity (GACBS). Combining a spectral clustering algorithm based on the Nyström method (SCN) with GACBS, this algorithm first extracts key image slices, then uses these slices to generate an initial contour of pulmonary parenchyma of un-segmented slices with an interpolation algorithm, and finally segments lung parenchyma of un-segmented slices. Experimental results show that the segmentation results generated by our method are close to what manual segmentation can produce, with an average volume overlap ratio of 91.48%.
NASA Astrophysics Data System (ADS)
Besemer, Abigail E.; Titz, Benjamin; Grudzinski, Joseph J.; Weichert, Jamey P.; Kuo, John S.; Robins, H. Ian; Hall, Lance T.; Bednarz, Bryan P.
2017-08-01
Variations in tumor volume segmentation methods in targeted radionuclide therapy (TRT) may lead to dosimetric uncertainties. This work investigates the impact of PET and MRI threshold-based tumor segmentation on TRT dosimetry in patients with primary and metastatic brain tumors. In this study, PET/CT images of five brain cancer patients were acquired at 6, 24, and 48 h post-injection of 124I-CLR1404. The tumor volume was segmented using two standardized uptake value (SUV) threshold levels, two tumor-to-background ratio (TBR) threshold levels, and a T1 Gadolinium-enhanced MRI threshold. The Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), and overlap volume (OV) metrics were calculated to compare differences in the MRI and PET contours. The therapeutic 131I-CLR1404 voxel-level dose distribution was calculated from the 124I-CLR1404 activity distribution using RAPID, a Geant4 Monte Carlo internal dosimetry platform. The TBR, SUV, and MRI tumor volumes ranged from 2.3-63.9 cc, 0.1-34.7 cc, and 0.4-11.8 cc, respectively. The average ± standard deviation (range) was 0.19 ± 0.13 (0.01-0.51), 0.30 ± 0.17 (0.03-0.67), and 0.75 ± 0.29 (0.05-1.00) for the JSC, DSC, and OV, respectively. The DSC and JSC values were small and the OV values were large for both the MRI-SUV and MRI-TBR combinations because the regions of PET uptake were generally larger than the MRI enhancement. Notable differences in the tumor dose volume histograms were observed for each patient. The mean (standard deviation) 131I-CLR1404 tumor doses ranged from 0.28-1.75 Gy GBq-1 (0.07-0.37 Gy GBq-1). The ratio of maximum-to-minimum mean doses for each patient ranged from 1.4-2.0. The tumor volume and the interpretation of the tumor dose are highly sensitive to the imaging modality, PET enhancement metric, and threshold level used for tumor volume segmentation. The large variations in tumor doses clearly demonstrate the need for standard protocols for multimodality tumor segmentation in TRT dosimetry.
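A minimal sketch of the two PET threshold families discussed above, absolute SUV and tumour-to-background ratio; the threshold values, background region and synthetic uptake map are illustrative only and are not the levels used in the study.

```python
import numpy as np

def segment_by_suv(suv_volume, suv_threshold):
    """Absolute SUV threshold: voxels at or above the threshold form the tumour volume."""
    return suv_volume >= suv_threshold

def segment_by_tbr(suv_volume, background_mask, tbr_threshold):
    """Tumour-to-background ratio threshold: voxels whose SUV exceeds
    tbr_threshold times the mean SUV of a background region."""
    background_mean = suv_volume[background_mask].mean()
    return suv_volume >= tbr_threshold * background_mean

suv = np.random.gamma(2.0, 0.5, size=(40, 64, 64))      # stand-in SUV map
suv[15:25, 25:40, 25:40] += 5.0                          # synthetic lesion uptake
bg = np.zeros_like(suv, bool); bg[:5] = True             # illustrative background ROI
vol_suv = segment_by_suv(suv, 2.5).sum()                 # 2.5 is an arbitrary example level
vol_tbr = segment_by_tbr(suv, bg, 3.0).sum()             # 3.0 is an arbitrary example level
print(vol_suv, vol_tbr)                                  # voxel counts of the two contours
```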
Regional growth and atlasing of the developing human brain
Makropoulos, Antonios; Aljabar, Paul; Wright, Robert; Hüning, Britta; Merchant, Nazakat; Arichi, Tomoki; Tusor, Nora; Hajnal, Joseph V.; Edwards, A. David; Counsell, Serena J.; Rueckert, Daniel
2016-01-01
Detailed morphometric analysis of the neonatal brain is required to characterise brain development and define neuroimaging biomarkers related to impaired brain growth. Accurate automatic segmentation of neonatal brain MRI is a prerequisite to analyse large datasets. We have previously presented an accurate and robust automatic segmentation technique for parcellating the neonatal brain into multiple cortical and subcortical regions. In this study, we further extend our segmentation method to detect cortical sulci and provide a detailed delineation of the cortical ribbon. These detailed segmentations are used to build a 4-dimensional spatio-temporal structural atlas of the brain for 82 cortical and subcortical structures throughout this developmental period. We employ the algorithm to segment an extensive database of 420 MR images of the developing brain, from 27 to 45 weeks post-menstrual age at imaging. Regional volumetric and cortical surface measurements are derived and used to investigate brain growth and development during this critical period and to assess the impact of immaturity at birth. Whole brain volume, the absolute volume of all structures studied, cortical curvature and cortical surface area increased with increasing age at scan. Relative volumes of cortical grey matter, cerebellum and cerebrospinal fluid increased with age at scan, while relative volumes of white matter, ventricles, brainstem and basal ganglia and thalami decreased. Preterm infants at term had smaller whole brain volumes, reduced regional white matter and cortical and subcortical grey matter volumes, and reduced cortical surface area compared with term born controls, while ventricular volume was greater in the preterm group. Increasing prematurity at birth was associated with a reduction in total and regional white matter, cortical and subcortical grey matter volume, an increase in ventricular volume, and reduced cortical surface area. PMID:26499811
Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets
Jeong, Won-Ki; Beyer, Johanna; Hadwiger, Markus; Vazquez, Amelio; Pfister, Hanspeter; Whitaker, Ross T.
2011-01-01
Recent advances in scanning technology provide high resolution EM (Electron Microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes. PMID:19834227
A Scalable Framework For Segmenting Magnetic Resonance Images
Hore, Prodip; Goldgof, Dmitry B.; Gu, Yuhua; Maudsley, Andrew A.; Darkazanli, Ammar
2009-01-01
A fast, accurate and fully automatic method of segmenting magnetic resonance images of the human brain is introduced. The approach scales well allowing fast segmentations of fine resolution images. The approach is based on modifications of the soft clustering algorithm, fuzzy c-means, that enable it to scale to large data sets. Two types of modifications to create incremental versions of fuzzy c-means are discussed. They are much faster when compared to fuzzy c-means for medium to extremely large data sets because they work on successive subsets of the data. They are comparable in quality to application of fuzzy c-means to all of the data. The clustering algorithms coupled with inhomogeneity correction and smoothing are used to create a framework for automatically segmenting magnetic resonance images of the human brain. The framework is applied to a set of normal human brain volumes acquired from different magnetic resonance scanners using different head coils, acquisition parameters and field strengths. Results are compared to those from two widely used magnetic resonance image segmentation programs, Statistical Parametric Mapping and the FMRIB Software Library (FSL). The results are comparable to FSL while providing significant speed-up and better scalability to larger volumes of data. PMID:20046893
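A compact reference implementation of plain fuzzy c-means on a small one-dimensional feature set; the incremental variants discussed above apply the same membership and center updates to successive subsets of the data, but the chunking logic itself is not reproduced here, and all data are synthetic.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means on feature vectors X of shape (n_samples, n_features).

    Incremental variants run the same updates over successive chunks of the data,
    carrying the cluster centers forward, so very large volumes never need to be
    held in memory at once."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)]
    for _ in range(n_iter):
        # Distances to centers, with a small floor to avoid division by zero.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        # Center update: weighted means with weights u^m.
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return centers, u

X = np.concatenate([np.random.normal(0, 1, (500, 1)),
                    np.random.normal(5, 1, (500, 1)),
                    np.random.normal(10, 1, (500, 1))])
centers, memberships = fuzzy_c_means(X, c=3)
print(np.sort(centers.ravel()))   # should land near 0, 5 and 10
```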
Knotting probability of self-avoiding polygons under a topological constraint.
Uehara, Erica; Deguchi, Tetsuo
2017-09-07
We define the knotting probability of a knot K by the probability for a random polygon or self-avoiding polygon (SAP) of N segments having the knot type K. We show fundamental and generic properties of the knotting probability, particularly its dependence on the excluded volume. We investigate them for the SAP consisting of hard cylindrical segments of unit length and radius r_ex. For various prime and composite knots, we numerically show that a compact formula describes the knotting probabilities for the cylindrical SAP as a function of segment number N and radius r_ex. It connects the small-N to the large-N behavior and even to lattice knots in the case of large values of radius. As the excluded volume increases, the maximum of the knotting probability decreases for prime knots except for the trefoil knot. If it is large, the trefoil knot and its descendants are dominant among the nontrivial knots in the SAP. From the factorization property of the knotting probability, we derive a sum rule among the estimates of a fitting parameter for all prime knots, which suggests the local knot picture and the dominance of the trefoil knot in the case of large excluded volumes. Here we remark that the cylindrical SAP gives a model of circular DNA which is negatively charged and semiflexible, where radius r_ex corresponds to the screening length.
SU-C-207B-04: Automated Segmentation of Pectoral Muscle in MR Images of Dense Breasts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verburg, E; Waard, SN de; Veldhuis, WB
Purpose: To develop and evaluate a fully automated method for segmentation of the pectoral muscle boundary in Magnetic Resonance Imaging (MRI) of dense breasts. Methods: Segmentation of the pectoral muscle is an important part of automatic breast image analysis methods. Current methods for segmenting the pectoral muscle in breast MRI have difficulties delineating the muscle border correctly in breasts with a large proportion of fibroglandular tissue (i.e., dense breasts). Hence, an automated method based on dynamic programming was developed, incorporating heuristics aimed at shape, location and gradient features. To assess the method, the pectoral muscle was segmented in 91 randomly selected participants (mean age 56.6 years, range 49.5–75.2 years) from a large MRI screening trial in women with dense breasts (ACR BI-RADS category 4). Each MR dataset consisted of 178 or 179 T1-weighted images with voxel size 0.64 × 0.64 × 1.00 mm³. All images (n=16,287) were reviewed and scored by a radiologist. In contrast to volume overlap coefficients, such as DICE, the radiologist detected deviations in the segmented muscle border and determined whether the result would impact the ability to accurately determine the volume of fibroglandular tissue and detection of breast lesions. Results: According to the radiologist's scores, 95.5% of the slices did not mask breast tissue in such a way that it could affect detection of breast lesions or volume measurements. In 13.1% of the slices a deviation in the segmented muscle border was present which would not impact breast lesion detection. In 70 datasets (78%) at least 95% of the slices were segmented in such a way it would not affect detection of breast lesions, and in 60 (66%) datasets this was 100%. Conclusion: Dynamic programming with dedicated heuristics shows promising potential to segment the pectoral muscle in women with dense breasts.
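A generic sketch of the dynamic-programming core such a method can build on: a minimal-cost, roughly vertical path through a cost image with a one-column step constraint. The shape, location and gradient heuristics of the actual method are not modeled, and the cost image here is synthetic.

```python
import numpy as np

def minimal_cost_path(cost):
    """Dynamic-programming delineation of a roughly vertical boundary in a 2D cost
    image. The path moves down one row at a time and may shift at most one column
    per step; the returned array gives the chosen column for every row."""
    n_rows, n_cols = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros_like(cost, dtype=int)
    for r in range(1, n_rows):
        for c in range(n_cols):
            lo, hi = max(0, c - 1), min(n_cols, c + 2)
            prev = np.argmin(acc[r - 1, lo:hi]) + lo
            back[r, c] = prev
            acc[r, c] += acc[r - 1, prev]
    path = np.empty(n_rows, dtype=int)
    path[-1] = int(np.argmin(acc[-1]))
    for r in range(n_rows - 1, 0, -1):
        path[r - 1] = back[r, path[r]]
    return path

# Toy cost image: a low-cost "muscle border" drifting slowly across columns.
rng = np.random.default_rng(0)
cost = rng.random((50, 40))
true_cols = (10 + np.linspace(0, 15, 50)).astype(int)
cost[np.arange(50), true_cols] = 0.0
print(minimal_cost_path(cost)[:10])   # should track true_cols closely
```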
Johnson, Eileanoir B.; Gregory, Sarah; Johnson, Hans J.; Durr, Alexandra; Leavitt, Blair R.; Roos, Raymund A.; Rees, Geraint; Tabrizi, Sarah J.; Scahill, Rachael I.
2017-01-01
The selection of an appropriate segmentation tool is a challenge facing any researcher aiming to measure gray matter (GM) volume. Many tools have been compared, yet there is currently no method that can be recommended above all others; in particular, there is a lack of validation in disease cohorts. This work utilizes a clinical dataset to conduct an extensive comparison of segmentation tools. Our results confirm that all tools have advantages and disadvantages, and we present a series of considerations that may be of use when selecting a GM segmentation method, rather than a ranking of these tools. Seven segmentation tools were compared using 3 T MRI data from 20 controls, 40 premanifest Huntington’s disease (HD), and 40 early HD participants. Segmented volumes underwent detailed visual quality control. Reliability and repeatability of total, cortical, and lobular GM were investigated in repeated baseline scans. The relationship between each tool was also examined. Longitudinal within-group change over 3 years was assessed via generalized least squares regression to determine sensitivity of each tool to disease effects. Visual quality control and raw volumes highlighted large variability between tools, especially in occipital and temporal regions. Most tools showed reliable performance and the volumes were generally correlated. Results for longitudinal within-group change varied between tools, especially within lobular regions. These differences highlight the need for careful selection of segmentation methods in clinical neuroimaging studies. This guide acts as a primer aimed at the novice or non-technical imaging scientist providing recommendations for the selection of cohort-appropriate GM segmentation software. PMID:29066997
Morales, Juan; Alonso-Nanclares, Lidia; Rodríguez, José-Rodrigo; DeFelipe, Javier; Rodríguez, Ángel; Merchán-Pérez, Ángel
2011-01-01
The synapses in the cerebral cortex can be classified into two main types, Gray's type I and type II, which correspond to asymmetric (mostly glutamatergic excitatory) and symmetric (inhibitory GABAergic) synapses, respectively. Hence, the quantification and identification of their different types, and the proportions in which they are found, are extraordinarily important in terms of brain function. The ideal approach to calculate the number of synapses per unit volume is to analyze 3D samples reconstructed from serial sections. However, obtaining serial sections by transmission electron microscopy is an extremely time consuming and technically demanding task. Using focused ion beam/scanning electron microscopy, we recently showed that virtually all synapses can be accurately identified as asymmetric or symmetric synapses when they are visualized, reconstructed, and quantified from large 3D tissue samples obtained in an automated manner. Nevertheless, the analysis, segmentation, and quantification of synapses are still labor intensive procedures. Thus, novel solutions are currently necessary to deal with the large volume of data that is being generated by automated 3D electron microscopy. Accordingly, we have developed ESPINA, a software tool that performs the automated segmentation and counting of synapses in a reconstructed 3D volume of the cerebral cortex, and that greatly facilitates and accelerates these processes. PMID:21633491
Daisne, Jean-François; Blumhofer, Andreas
2013-06-26
Intensity modulated radiotherapy for head and neck cancer necessitates accurate definition of organs at risk (OAR) and clinical target volumes (CTV). This crucial step is time consuming and prone to inter- and intra-observer variations. Automatic segmentation by atlas deformable registration may help to reduce time and variations. We aim to test a new commercial atlas algorithm for automatic segmentation of OAR and CTV in both ideal and clinical conditions. The updated Brainlab automatic head and neck atlas segmentation was tested on 20 patients: 10 cN0-stages (ideal population) and 10 unselected N-stages (clinical population). Following manual delineation of OAR and CTV, automatic segmentation of the same set of structures was performed and afterwards manually corrected. Dice Similarity Coefficient (DSC), Average Surface Distance (ASD) and Maximal Surface Distance (MSD) were calculated for "manual to automatic" and "manual to corrected" volume comparisons. In both groups, automatic segmentation saved about 40% of the corresponding manual segmentation time. This effect was more pronounced for OAR than for CTV. Editing the automatically obtained contours significantly improved DSC, ASD and MSD. Large distortions of normal anatomy or lack of iodine contrast were the limiting factors. The updated Brainlab atlas-based automatic segmentation tool for head and neck cancer patients is time-saving but still necessitates review and correction by an expert.
NASA Astrophysics Data System (ADS)
Luiza Bondar, M.; Hoogeman, Mischa; Schillemans, Wilco; Heijmen, Ben
2013-08-01
For online adaptive radiotherapy of cervical cancer, fast and accurate image segmentation is required to facilitate daily treatment adaptation. Our aim was twofold: (1) to test and compare three intra-patient automated segmentation methods for the cervix-uterus structure in CT-images and (2) to improve the segmentation accuracy by including prior knowledge on the daily bladder volume or on the daily coordinates of implanted fiducial markers. The tested methods were: shape deformation (SD) and atlas-based segmentation (ABAS) using two non-rigid registration methods: demons and a hierarchical algorithm. Tests on 102 CT-scans of 13 patients demonstrated that the segmentation accuracy significantly increased by including the bladder volume predicted with a simple 1D model based on a manually defined bladder top. Moreover, manually identified implanted fiducial markers significantly improved the accuracy of the SD method. For patients with large cervix-uterus volume regression, the use of CT-data acquired toward the end of the treatment was required to improve segmentation accuracy. Including prior knowledge, the segmentation results of SD (Dice similarity coefficient 85 ± 6%, error margin 2.2 ± 2.3 mm, average time around 1 min) and of ABAS using hierarchical non-rigid registration (Dice 82 ± 10%, error margin 3.1 ± 2.3 mm, average time around 30 s) support their use for image guided online adaptive radiotherapy of cervical cancer.
Meng, Qier; Kitasaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Ueno, Junji; Mori, Kensaku
2017-02-01
Airway segmentation plays an important role in analyzing chest computed tomography (CT) volumes for computerized lung cancer detection, emphysema diagnosis and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3D airway tree structure from a CT volume is quite a challenging task. Several researchers have proposed automated airway segmentation algorithms based on region growing and machine learning techniques. However, these methods fail to detect the peripheral bronchial branches, which results in a large amount of leakage. This paper presents a novel approach for more accurate extraction of the complex airway tree. The proposed segmentation method is composed of three steps. First, Hessian analysis is utilized to enhance the tube-like structure in CT volumes; then, an adaptive multiscale cavity enhancement filter is employed to detect the cavity-like structure with different radii. In the second step, a support vector machine is utilized to remove the false positive (FP) regions from the result obtained in the previous step. Finally, the graph-cut algorithm is used to refine the candidate voxels to form an integrated airway tree. A test dataset including 50 standard-dose chest CT volumes was used for evaluating our proposed method. The average extraction rate was about 79.1% with a significantly decreased FP rate. A new method of airway segmentation based on local intensity structure and machine learning techniques was developed. The method was shown to be feasible for airway segmentation in computer-aided lung diagnosis and bronchoscope guidance systems.
Robust hepatic vessel segmentation using multi deep convolution network
NASA Astrophysics Data System (ADS)
Kitrungrotsakul, Titinunt; Han, Xian-Hua; Iwamoto, Yutaro; Foruzan, Amir Hossein; Lin, Lanfen; Chen, Yen-Wei
2017-03-01
Extraction of the blood vessels of an organ is a challenging task in the area of medical image processing. It is difficult to obtain accurate vessel segmentation results even with manual labeling by human experts. The difficulty of vessel segmentation lies in the complicated structure of blood vessels and their large variations, which make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment the hepatic vessels from computed tomography (CT) images. We propose a novel deep neural network (DNN) architecture for vessel segmentation from a medical CT volume, which consists of three deep convolutional neural networks that extract features from different planes of the CT data. The three networks share features at the first convolution layer but learn their own features separately in the second layer. All three networks are joined again at the top layer. To validate the effectiveness and efficiency of our proposed method, we conducted experiments on 12 CT volumes, with training data randomly generated from 5 CT volumes and the remaining 7 used for testing. Our network yields an average Dice coefficient of 0.830, while a 3D deep convolutional neural network yields around 0.7 and a multi-scale approach yields only 0.6.
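The following PyTorch sketch illustrates the general shape of such a three-plane network, with a shared first convolution, per-plane second-layer branches, and a joint top layer; it is not the authors' architecture, and the layer sizes, 32 × 32 patch size and two-class head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TriPlanarVesselNet(nn.Module):
    """Sketch of a three-branch CNN over axial/coronal/sagittal patches."""
    def __init__(self, patch=32):
        super().__init__()
        # First convolution layer shared by all three planes.
        self.shared = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        # One private branch per plane from the second layer onward.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                          nn.MaxPool2d(2))
            for _ in range(3)
        ])
        feat = 32 * (patch // 2) * (patch // 2)
        # Branches are joined at the top to classify the centre voxel.
        self.head = nn.Sequential(nn.Linear(3 * feat, 128), nn.ReLU(),
                                  nn.Linear(128, 2))   # vessel vs background

    def forward(self, axial, coronal, sagittal):
        outs = []
        for branch, x in zip(self.branches, (axial, coronal, sagittal)):
            outs.append(branch(self.shared(x)).flatten(start_dim=1))
        return self.head(torch.cat(outs, dim=1))

# Example forward pass on random 32x32 patches from the three planes.
net = TriPlanarVesselNet(patch=32)
ax, co, sa = (torch.rand(4, 1, 32, 32) for _ in range(3))
print(net(ax, co, sa).shape)   # torch.Size([4, 2])
```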
Segmentation of mitochondria in electron microscopy images using algebraic curves.
Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga
2013-01-01
High-resolution microscopy techniques have been used to generate large volumes of data with enough detail for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to train a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms state-of-the-art algorithms in segmentation of mitochondria in EM images.
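As a toy illustration of the patch-classification stage (not the paper's algebraic-curve descriptors, which are replaced here by trivial intensity and gradient statistics), the sketch below trains a scikit-learn random forest to produce a per-patch mitochondrion probability; the data and feature choices are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(patch):
    """Very simple stand-in for the paper's shape + texture descriptors."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.std(),
                     np.abs(gx).mean(), np.abs(gy).mean()])

rng = np.random.default_rng(0)
# Toy training set: 200 labelled 15x15 EM patches (1 = mitochondrion).
patches = rng.random((200, 15, 15))
labels = rng.integers(0, 2, size=200)
X = np.stack([patch_features(p) for p in patches])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
# Probability that a new patch is centred on a mitochondrion.
print(clf.predict_proba(patch_features(patches[0])[None])[0, 1])
```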
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaborszky, J.; Venkatasubramanian, V.
1995-10-01
Taxonomy Theory is the first precise comprehensive theory for large power system dynamics modeled in any detail. The motivation for this project is to show that it can be used, practically, for analyzing a disturbance that actually occurred on a large system, which affected a sizable portion of the Midwest with supercritical Hopf type oscillations. This event is well documented and studied. The report first summarizes Taxonomy Theory with an engineering flavor. Then various computational approaches are cited and analyzed for their suitability for use with Taxonomy Theory. Then working equations are developed for computing a segment of the feasibility boundary that bounds the region of (operating) parameters throughout which the operating point can be moved without losing stability. Then experimental software incorporating the large EPRI software package PSAPAC is developed. After a summary of the events during the subject disturbance, numerous large scale computations, up to 7600 buses, are reported. These results are reduced into graphical and tabular forms, which are then analyzed and discussed. The report is divided into two volumes. This volume illustrates the use of the Taxonomy Theory for computing the feasibility boundary and presents evidence that the event indeed led to a Hopf type oscillation on the system. Furthermore it proves that the Feasibility Theory can indeed be used for practical computation work with very large systems. Volume 2, a separate volume, will show that the disturbance led to a supercritical (that is, a stable oscillation) Hopf bifurcation.
Amann, Michael; Andělová, Michaela; Pfister, Armanda; Mueller-Lenke, Nicole; Traud, Stefan; Reinhardt, Julia; Magon, Stefano; Bendfeldt, Kerstin; Kappos, Ludwig; Radue, Ernst-Wilhelm; Stippich, Christoph; Sprenger, Till
2015-01-01
Brain atrophy has been identified as an important contributing factor to the development of disability in multiple sclerosis (MS). In this respect, more and more interest is focussing on the role of deep grey matter (DGM) areas. Novel data analysis pipelines are available for the automatic segmentation of DGM using three-dimensional (3D) MRI data. However, in clinical trials, often no such high-resolution data are acquired and hence no conclusions regarding the impact of new treatments on DGM atrophy have been possible so far. In this work, we used FMRIB's Integrated Registration and Segmentation Tool (FIRST) to evaluate the possibility of segmenting DGM structures using standard two-dimensional (2D) T1-weighted MRI. In a cohort of 70 MS patients, both 2D and 3D T1-weighted data were acquired. The thalamus, putamen, pallidum, nucleus accumbens, and caudate nucleus were bilaterally segmented using FIRST. Volumes were calculated for each structure and for the sum of the basal ganglia (BG) as well as for the total DGM. The accuracy and reliability of the 2D data segmentation were compared with the respective results of 3D segmentations using volume difference, volume overlap and intra-class correlation coefficients (ICCs). The mean differences for the individual substructures were between 1.3% (putamen) and -25.2% (nucleus accumbens). The respective values for the BG were -2.7% and for DGM 1.3%. Mean volume overlap was between 89.1% (thalamus) and 61.5% (nucleus accumbens); BG: 84.1%; DGM: 86.3%. Regarding ICCs, all structures showed good agreement with the exception of the nucleus accumbens. The results of the segmentation were additionally validated through expert manual delineation of the caudate nucleus and putamen in a subset of the 3D data. In conclusion, we demonstrate that subcortical segmentation of 2D data is feasible using FIRST. The larger subcortical GM structures can be segmented with high consistency. This forms the basis for the application of FIRST in large 2D MRI data sets of clinical trials in order to determine the impact of therapeutic interventions on DGM atrophy in MS.
Bennett, Jerry M.; Cortes, Peter M.
1985-01-01
The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios. PMID:16664367
Sjöberg, Carl; Lundmark, Martin; Granberg, Christoffer; Johansson, Silvia; Ahnesjö, Anders; Montelius, Anders
2013-10-03
Semi-automated segmentation using deformable registration of selected atlas cases consisting of expert-segmented patient images has been proposed to facilitate the delineation of lymph node regions for three-dimensional conformal and intensity-modulated radiotherapy planning of head and neck and prostate tumours. Our aim is to investigate whether fusion of multiple atlases leads to clinical workload reductions and more accurate segmentation proposals compared to the use of a single atlas segmentation, due to a more complete representation of the anatomical variations. Atlases for lymph node regions were constructed using 11 head and neck patients and 15 prostate patients based on published recommendations for segmentations. A commercial registration software (Velocity AI) was used to create individual segmentations through deformable registration. Ten head and neck patients, and ten prostate patients, all different from the atlas patients, were randomly chosen for the study from retrospective data. Each patient was first delineated three times: (a) manually by a radiation oncologist, (b) automatically using a single atlas segmentation proposal from a chosen atlas and (c) automatically by fusing the atlas proposals from all cases in the database using the probabilistic weighting fusion algorithm. In a subsequent step a radiation oncologist corrected the segmentation proposals obtained in steps (b) and (c) without using the result from method (a) as reference. The time spent for editing the segmentations was recorded separately for each method and for each individual structure. Finally, the Dice Similarity Coefficient and the volume of the structures were used to evaluate the similarity between the structures delineated with the different methods. For the single atlas method, the time reduction compared to manual segmentation was 29% and 23% for head and neck and pelvis lymph nodes, respectively, while editing the fused atlas proposal resulted in time reductions of 49% and 34%. The average volume of the fused atlas proposals was only 74% of the manual segmentation for the head and neck cases and 82% for the prostate cases due to a blurring effect from the fusion process. After editing of the proposals the resulting volume differences were no longer statistically significant, although a slight influence by the proposals could be noticed since the average edited volume was still slightly smaller than the manual segmentation, 9% and 5%, respectively. Segmentation based on fusion of multiple atlases reduces the time needed for delineation of lymph node regions compared to the use of a single atlas segmentation. Even though the time saving is large, the quality of the segmentation is maintained compared to manual segmentation.
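To make the fusion step concrete, here is a minimal sketch of probabilistically weighted label fusion over several deformably propagated atlas segmentations; it is not the Velocity AI algorithm, and the weights, threshold and toy data are assumptions. Averaging also explains the blurring effect noted above: voxels where atlases disagree get intermediate probabilities and may fall below the threshold, shrinking the fused volume.

```python
import numpy as np

def fuse_atlas_labels(propagated_labels, weights, threshold=0.5):
    """Probabilistically weighted fusion of propagated atlas segmentations.

    propagated_labels : (K, Z, Y, X) binary masks, one per deformed atlas.
    weights           : length-K weights (e.g., from the registration
                        similarity metric); normalised here.
    Returns the fused probability map and its thresholded binary mask.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    prob = np.tensordot(w, propagated_labels.astype(float), axes=1)
    return prob, prob >= threshold

# Toy example: three atlas proposals for a small volume.
rng = np.random.default_rng(1)
proposals = rng.integers(0, 2, size=(3, 8, 16, 16))
prob_map, fused_mask = fuse_atlas_labels(proposals, weights=[0.5, 0.3, 0.2])
print(prob_map.shape, int(fused_mask.sum()))
```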
Wan, Yong; Otsuna, Hideo; Holman, Holly A; Bagley, Brig; Ito, Masayoshi; Lewis, A Kelsey; Colasanto, Mary; Kardon, Gabrielle; Ito, Kei; Hansen, Charles
2017-05-26
Image segmentation and registration techniques have enabled biologists to place large amounts of volume data from fluorescence microscopy, morphed three-dimensionally, onto a common spatial frame. Existing tools built on volume visualization pipelines for single channel or red-green-blue (RGB) channels have become inadequate for the new challenges of fluorescence microscopy. For a three-dimensional atlas of the insect nervous system, hundreds of volume channels are rendered simultaneously, whereas fluorescence intensity values from each channel need to be preserved for versatile adjustment and analysis. Although several existing tools have incorporated support of multichannel data using various strategies, the lack of a flexible design has made true many-channel visualization and analysis unavailable. The most common practice for many-channel volume data presentation is still converting and rendering pseudosurfaces, which are inaccurate for both qualitative and quantitative evaluations. Here, we present an alternative design strategy that accommodates the visualization and analysis of about 100 volume channels, each of which can be interactively adjusted, selected, and segmented using freehand tools. Our multichannel visualization includes a multilevel streaming pipeline plus a triple-buffer compositing technique. Our method also preserves original fluorescence intensity values on graphics hardware, a crucial feature that allows graphics-processing-unit (GPU)-based processing for interactive data analysis, such as freehand segmentation. We have implemented the design strategies as a thorough restructuring of our original tool, FluoRender. The redesign of FluoRender not only maintains the existing multichannel capabilities for a greatly extended number of volume channels, but also enables new analysis functions for many-channel data from emerging biomedical-imaging techniques.
Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K
2017-10-01
Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time consuming and laborious task. Thus, automatic methods are needed for faster and reproducible segmentations. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage, which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thicknesses and volumes produced by these two methods were compared. Furthermore, the influences of possible geometrical differences on cartilage stresses and strains in the knee were evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) were approximately two times faster than with manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e., there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.
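A minimal sketch of the radial-intensity-profile idea (not the authors' method; the boundary criterion, ray count and toy image are assumptions): intensities are sampled along rays cast from a point inside the structure, and the boundary on each ray is placed at the strongest intensity drop.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def radial_boundary(image, centre, n_rays=180, max_radius=60):
    """Locate a boundary around `centre` by scanning radial intensity profiles.

    For each ray the boundary is placed at the largest negative intensity
    step, a crude stand-in for a cartilage/bone interface criterion.
    Returns an (n_rays, 2) array of boundary points in (row, col) coordinates.
    """
    cy, cx = centre
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    radii = np.arange(1, max_radius)
    points = []
    for a in angles:
        rows = cy + radii * np.sin(a)
        cols = cx + radii * np.cos(a)
        profile = map_coordinates(image, np.vstack([rows, cols]), order=1)
        edge = int(np.argmin(np.diff(profile)))   # strongest intensity drop
        points.append((rows[edge], cols[edge]))
    return np.array(points)

# Toy example: bright disc on a dark background.
yy, xx = np.mgrid[0:128, 0:128]
img = ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2).astype(float)
print(radial_boundary(img, centre=(64, 64))[:3])
```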
Yang, C; Paulson, E; Li, X
2012-06-01
To develop and evaluate a tool that can improve the accuracy of contour transfer between different image modalities under challenging conditions of low image contrast and large image deformation, compared with a few commonly used methods, for radiation treatment planning. The software tool includes the following steps and functionalities: (1) accepting input of images of different modalities, (2) converting existing contours on reference images (e.g., MRI) into delineated volumes and adjusting the intensity within the volumes to match the target images' (e.g., CT) intensity distribution for an enhanced similarity metric, (3) registering reference and target images using appropriate deformable registration algorithms (e.g., B-spline, demons) and generating deformed contours, (4) mapping the deformed volumes onto target images and calculating mean, variance, and center of mass as the initialization parameters for consecutive fuzzy connectedness (FC) image segmentation on target images, (5) generating an affinity map from the FC segmentation, (6) achieving final contours by modifying the deformed contours using the affinity map with a gradient distance weighting algorithm. The tool was tested with the CT and MR images of four pancreatic cancer patients acquired at the same respiration phase to minimize motion distortion. Dice's Coefficient was calculated against direct delineation on the target image. Contours generated by various methods, including rigid transfer, auto-segmentation, deformable-only transfer and the proposed method, were compared. Fuzzy connected image segmentation needs careful parameter initialization and user involvement. Automatic contour transfer by multi-modality deformable registration yields up to 10% accuracy improvement over rigid transfer. The two extra proposed steps of adjusting the intensity distribution and modifying the deformed contour with the affinity map further improve the transfer accuracy, to 14% on average. Deformable image registration aided by contrast adjustment and fuzzy connectedness segmentation improves the contour transfer accuracy between multi-modality images, particularly with large deformation and low image contrast. © 2012 American Association of Physicists in Medicine.
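As a rough sketch of the registration-driven part of such a pipeline (roughly steps 2-4), the snippet below uses SimpleITK histogram matching and the demons filter to deform MR-defined contours onto a CT. It is not the authors' tool (the fuzzy-connectedness refinement of steps 4-6 is omitted), the file names are placeholders, and it assumes the MR has already been rigidly aligned and resampled to the CT grid.

```python
import SimpleITK as sitk

# Assumed inputs: an MR reference already resampled to the CT grid, the
# planning CT, and the MR contours rasterised as a labelmap.
mr = sitk.ReadImage("mr_on_ct_grid.nii.gz", sitk.sitkFloat32)
ct = sitk.ReadImage("planning_ct.nii.gz", sitk.sitkFloat32)
mr_labels = sitk.ReadImage("mr_contours_labelmap.nii.gz")

# Step analogous to (2): pull the MR intensity distribution toward the CT
# so the registration similarity behaves better.
mr_matched = sitk.HistogramMatching(mr, ct)

# Step analogous to (3): demons deformable registration (CT fixed, MR moving).
demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(50)
demons.SetStandardDeviations(1.5)          # smoothing of the deformation field
field = demons.Execute(ct, mr_matched)
transform = sitk.DisplacementFieldTransform(
    sitk.Cast(field, sitk.sitkVectorFloat64))

# Step analogous to (4): map the MR contours onto the CT with nearest-neighbour
# interpolation so label values are preserved.
ct_labels = sitk.Resample(mr_labels, ct, transform,
                          sitk.sitkNearestNeighbor, 0, mr_labels.GetPixelID())
sitk.WriteImage(ct_labels, "deformed_contours_on_ct.nii.gz")
```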
Márquez Neila, Pablo; Baumela, Luis; González-Soriano, Juncal; Rodríguez, Jose-Rodrigo; DeFelipe, Javier; Merchán-Pérez, Ángel
2016-04-01
Recent electron microscopy (EM) imaging techniques permit the automatic acquisition of a large number of serial sections from brain samples. Manual segmentation of these images is tedious, time-consuming and requires a high degree of user expertise. Therefore, there is considerable interest in developing automatic segmentation methods. However, currently available methods are computationally demanding in terms of computer time and memory usage, and to work properly many of them require image stacks to be isotropic, that is, voxels must have the same size in the X, Y and Z axes. We present a method that works with anisotropic voxels and that is computationally efficient allowing the segmentation of large image stacks. Our approach involves anisotropy-aware regularization via conditional random field inference and surface smoothing techniques to improve the segmentation and visualization. We have focused on the segmentation of mitochondria and synaptic junctions in EM stacks from the cerebral cortex, and have compared the results to those obtained by other methods. Our method is faster than other methods with similar segmentation results. Our image regularization procedure introduces high-level knowledge about the structure of labels. We have also reduced memory requirements with the introduction of energy optimization in overlapping partitions, which permits the regularization of very large image stacks. Finally, the surface smoothing step improves the appearance of three-dimensional renderings of the segmented volumes.
Reproducibility of myelin content-based human habenula segmentation at 3 Tesla.
Kim, Joo-Won; Naidich, Thomas P; Joseph, Joshmi; Nair, Divya; Glasser, Matthew F; O'halloran, Rafael; Doucet, Gaelle E; Lee, Won Hee; Krinsky, Hannah; Paulino, Alejandro; Glahn, David C; Anticevic, Alan; Frangou, Sophia; Xu, Junqian
2018-03-26
In vivo morphological study of the human habenula, a pair of small epithalamic nuclei adjacent to the dorsomedial thalamus, has recently gained significant interest for its role in reward and aversion processing. However, segmenting the habenula from in vivo magnetic resonance imaging (MRI) is challenging due to the habenula's small size and low anatomical contrast. Although manual and semi-automated habenula segmentation methods have been reported, the test-retest reproducibility of the segmented habenula volume and the consistency of the boundaries of habenula segmentation have not been investigated. In this study, we evaluated the intra- and inter-site reproducibility of in vivo human habenula segmentation from 3T MRI (0.7-0.8 mm isotropic resolution) using our previously proposed semi-automated myelin contrast-based method and its fully-automated version, as well as a previously published manual geometry-based method. The habenula segmentation using our semi-automated method showed consistent boundary definition (high Dice coefficient, low mean distance, and moderate Hausdorff distance) and reproducible volume measurement (low coefficient of variation). Furthermore, the habenula boundary in our semi-automated segmentation from 3T MRI agreed well with that in the manual segmentation from 7T MRI (0.5 mm isotropic resolution) of the same subjects. Overall, our proposed semi-automated habenula segmentation showed reliable and reproducible habenula localization, while its fully-automated version offers an efficient way for large sample analysis. © 2018 Wiley Periodicals, Inc.
Direct volume estimation without segmentation
NASA Astrophysics Data System (ADS)
Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.
2015-03-01
Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes including the left ventricle (LV) and right ventricle (RV) are important clinical indicators of cardiac function. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac function and diagnosis of heart diseases. Conventional methods depend on an intermediate segmentation step which is obtained either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible; automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching learning-based methods that avoid segmentation by leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the additional step of segmentation and can naturally deal with various volume estimation tasks. Moreover, they are extremely flexible and can be used for volume estimation of either joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation by comparing with segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimation of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enable diagnosis of cardiac diseases to be conducted in a more efficient and reliable way.
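The core idea of direct estimation is to regress the volume from image features without ever producing a contour. The sketch below is only an illustration of that idea with a crude global descriptor and a ridge regressor on synthetic data; the authors' actual features and learning machinery are not reproduced.

```python
import numpy as np
from sklearn.linear_model import Ridge

def image_features(volume, n_bins=32):
    """Crude global descriptor: a normalised intensity histogram."""
    hist, _ = np.histogram(volume, bins=n_bins, range=(0.0, 1.0), density=True)
    return hist

rng = np.random.default_rng(2)
# Toy training set: 50 cardiac phases with known LV volumes (ml).
phase_images = rng.random((50, 16, 64, 64))
lv_volume_ml = rng.uniform(40, 200, size=50)

X = np.stack([image_features(v) for v in phase_images])
reg = Ridge(alpha=1.0).fit(X, lv_volume_ml)

# Direct estimate for a new phase -- no contour or labelmap is produced.
print(reg.predict(image_features(phase_images[0])[None])[0])
```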
System for detecting operating errors in a variable valve timing engine using pressure sensors
Wiles, Matthew A.; Marriot, Craig D
2013-07-02
A method and control module includes a pressure sensor data comparison module that compares measured pressure volume signal segments to ideal pressure volume segments. A valve actuation hardware remedy module performs a hardware remedy in response to comparing the measured pressure volume signal segments to the ideal pressure volume segments when a valve actuation hardware failure is detected.
Ng, Julian; Browning, Alyssa; Lechner, Lorenz; Terada, Masako; Howard, Gillian; Jefferis, Gregory S. X. E.
2016-01-01
Large dimension, high-resolution imaging is important for neural circuit visualisation as neurons have both long- and short-range patterns: from axons and dendrites to the numerous synapses at terminal endings. Electron Microscopy (EM) is the favoured approach for synaptic resolution imaging but how such structures can be segmented from high-density images within large volume datasets remains challenging. Fluorescent probes are widely used to localise synapses, identify cell-types and in tracing studies. The equivalent EM approach would benefit visualising such labelled structures from within sub-cellular, cellular, tissue and neuroanatomical contexts. Here we developed genetically-encoded, electron-dense markers using miniSOG. We demonstrate their ability in 1) labelling cellular sub-compartments of genetically-targeted neurons, 2) generating contrast under different EM modalities, and 3) segmenting labelled structures from EM volumes using computer-assisted strategies. We also tested non-destructive X-ray imaging on whole Drosophila brains to evaluate contrast staining. This enabled us to target specific regions for EM volume acquisition. PMID:27958322
Siozopoulos, Achilleas; Thomaidis, Vasilios; Prassopoulos, Panos; Fiska, Aliki
2018-02-01
Literature includes a number of studies using structural MRI (sMRI) to determine the volume of the amygdala, which is modified in various pathologic conditions. The reported values vary widely, mainly because of different anatomical approaches to the complex. This study aims at estimating the normal amygdala volume from sMRI scans using a recent anatomical definition described in a study based on post-mortem material. The amygdala volume has been calculated in 106 healthy subjects, using sMRI and anatomically based segmentation. The resulting volumes have been analyzed for differences related to hemisphere, sex, and age. The mean amygdalar volume was estimated at 1.42 cm³. The mean right amygdala volume was found to be larger than the left, but the difference for the raw values was within the limits of the method error. No intersexual differences or age-related alterations have been observed. The study provides a method for determining the boundaries of the amygdala in sMRI scans based on recent anatomical considerations and an estimation of the mean normal amygdala volume from a quite large number of scans for future use in comparative studies.
Thomas E. Lisle
1996-01-01
Jacoby Creek (bed width = 12 m; bankfull discharge = 32.6 m³/s) contains stationary gravel bars that have forms and positions controlled by numerous large streamside obstructions (bedrock outcrops, large woody debris, and rooted bank projections) and bedrock bends. Bank-projection width and bar volume measured in 104 channel segments 1 bed-width long are...
Link, Daphna; Braginsky, Michael B; Joskowicz, Leo; Ben Sira, Liat; Harel, Shaul; Many, Ariel; Tarrasch, Ricardo; Malinger, Gustavo; Artzi, Moran; Kapoor, Cassandra; Miller, Elka; Ben Bashat, Dafna
2018-01-01
Accurate fetal brain volume estimation is of paramount importance in evaluating fetal development. The aim of this study was to develop an automatic method for fetal brain segmentation from magnetic resonance imaging (MRI) data, and to create for the first time a normal volumetric growth chart based on a large cohort. A semi-automatic segmentation method based on Seeded Region Growing algorithm was developed and applied to MRI data of 199 typically developed fetuses between 18 and 37 weeks' gestation. The accuracy of the algorithm was tested against a sub-cohort of ground truth manual segmentations. A quadratic regression analysis was used to create normal growth charts. The sensitivity of the method to identify developmental disorders was demonstrated on 9 fetuses with intrauterine growth restriction (IUGR). The developed method showed high correlation with manual segmentation (r2 = 0.9183, p < 0.001) as well as mean volume and volume overlap differences of 4.77 and 18.13%, respectively. New reference data on 199 normal fetuses were created, and all 9 IUGR fetuses were at or below the third percentile of the normal growth chart. The proposed method is fast, accurate, reproducible, user independent, applicable with retrospective data, and is suggested for use in routine clinical practice. © 2017 S. Karger AG, Basel.
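A minimal sketch of the Seeded Region Growing idea underlying the method (not the authors' implementation; the 6-connectivity, running-mean acceptance rule and tolerance are assumptions): starting from a seed voxel, neighbours are added while their intensity stays close to the growing region's mean.

```python
import numpy as np
from collections import deque

def seeded_region_growing(volume, seed, tol=0.1):
    """Grow a region from `seed`, accepting 6-connected neighbours whose
    intensity stays within `tol` of the running region mean."""
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    total, count = float(volume[seed]), 1
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not mask[n]:
                if abs(volume[n] - total / count) <= tol:
                    mask[n] = True
                    total += float(volume[n])
                    count += 1
                    queue.append(n)
    return mask

# Toy example: a bright cube (the "brain") inside a darker background.
vol = np.full((40, 40, 40), 0.1)
vol[10:30, 10:30, 10:30] = 0.9
brain = seeded_region_growing(vol, seed=(20, 20, 20), tol=0.2)
print(brain.sum())   # 20**3 = 8000 voxels
```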
Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET.
Hatt, M; Lamare, F; Boussion, N; Turzo, A; Collet, C; Salzenstein, F; Roux, C; Jarritt, P; Carson, K; Cheze-Le Rest, C; Visvikis, D
2007-06-21
Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response to therapy evaluation and radiotherapy treatment planning. The objective of our study was to evaluate the performance of the proposed algorithm for automatic lesion volume delineation, namely the fuzzy hidden Markov chains (FHMC), against that of the threshold-based techniques that represent the current state of the art in clinical practice. As the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. However the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to a better modelling of the 'fuzzy' nature of the object of interest boundaries in emission tomography data. The performance of the algorithms has been assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm³ and 64 mm³). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, the differences between classification and volume estimation errors were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold-based techniques. The analysis of both simulated and acquired datasets led to similar results and conclusions as far as the performance of the segmentation algorithms under evaluation is concerned.
NASA Astrophysics Data System (ADS)
Xu, Robert S.; Michailovich, Oleg V.; Solovey, Igor; Salama, Magdy M. A.
2010-03-01
Prostate specific antigen density is an established parameter for indicating the likelihood of prostate cancer. To this end, the size and volume of the gland have become pivotal quantities used by clinicians during the standard cancer screening process. As an alternative to manual palpation, an increasing number of volume estimation methods are based on the imagery data of the prostate. The necessity to process large volumes of such data requires automatic segmentation algorithms, which can accurately and reliably identify the true prostate region. In particular, transrectal ultrasound (TRUS) imaging has become a standard means of assessing the prostate due to its safe nature and high benefit-to-cost ratio. Unfortunately, modern TRUS images are still plagued by many ultrasound imaging artifacts such as speckle noise and shadowing, which results in relatively low contrast and reduced SNR of the acquired images. Consequently, many modern segmentation methods incorporate prior knowledge about the prostate geometry to enhance traditional segmentation techniques. In this paper, a novel approach to the problem of TRUS segmentation, particularly the definition of the prostate shape prior, is presented. The proposed approach is based on the concept of distribution tracking, which provides a unified framework for tracking both photometric and morphological features of the prostate. In particular, the tracking of morphological features defines a novel type of "weak" shape priors. The latter acts as a regularization force, which minimally biases the segmentation procedure, while rendering the final estimate stable and robust. The value of the proposed methodology is demonstrated in a series of experiments.
Goodsitt, Mitchell M.; Shenoy, Apeksha; Shen, Jincheng; Howard, David; Schipper, Matthew J.; Wilderman, Scott; Christodoulou, Emmanuel; Chun, Se Young; Dewaraja, Yuni K.
2014-01-01
Purpose: To evaluate a three-equation three-unknown dual-energy quantitative CT (DEQCT) technique for determining region specific variations in bone spongiosa composition for improved red marrow dose estimation in radionuclide therapy. Methods: The DEQCT method was applied to 80/140 kVp images of patient-simulating lumbar sectional body phantoms of three sizes (small, medium, and large). External calibration rods of bone, red marrow, and fat-simulating materials were placed beneath the body phantoms. Similar internal calibration inserts were placed at vertebral locations within the body phantoms. Six test inserts of known volume fractions of bone, fat, and red marrow were also scanned. External-to-internal calibration correction factors were derived. The effects of body phantom size, radiation dose, spongiosa region segmentation granularity [single (∼17 × 17 mm) region of interest (ROI), 2 × 2, and 3 × 3 segmentation of that single ROI], and calibration method on the accuracy of the calculated volume fractions of red marrow (cellularity) and trabecular bone were evaluated. Results: For standard low dose DEQCT x-ray technique factors and the internal calibration method, the RMS errors of the estimated volume fractions of red marrow of the test inserts were 1.2–1.3 times greater in the medium body than in the small body phantom and 1.3–1.5 times greater in the large body than in the small body phantom. RMS errors of the calculated volume fractions of red marrow within 2 × 2 segmented subregions of the ROIs were 1.6–1.9 times greater than for no segmentation, and RMS errors for 3 × 3 segmented subregions were 2.3–2.7 times greater than those for no segmentation. Increasing the dose by a factor of 2 reduced the RMS errors of all constituent volume fractions by an average factor of 1.40 ± 0.29 for all segmentation schemes and body phantom sizes; increasing the dose by a factor of 4 reduced those RMS errors by an average factor of 1.71 ± 0.25. Results for external calibrations exhibited much larger RMS errors than size matched internal calibration. Use of an average body size external-to-internal calibration correction factor reduced the errors to closer to those for internal calibration. RMS errors of less than 30% or about 0.01 for the bone and 0.1 for the red marrow volume fractions would likely be satisfactory for human studies. Such accuracies were achieved for 3 × 3 segmentation of 5 mm slice images for: (a) internal calibration with 4 times dose for all size body phantoms, (b) internal calibration with 2 times dose for the small and medium size body phantoms, and (c) corrected external calibration with 4 times dose and all size body phantoms. Conclusions: Phantom studies are promising and demonstrate the potential to use dual energy quantitative CT to estimate the spatial distributions of red marrow and bone within the vertebral spongiosa. PMID:24784380
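As a worked illustration of the three-equation, three-unknown decomposition described above, the ROI volume fractions of bone, red marrow and fat follow from a 3 × 3 linear solve. This is a sketch only: the calibration CT numbers below are invented placeholders, whereas the paper derives them from internal calibration inserts.

```python
import numpy as np

# Assumed (illustrative) calibration: mean CT numbers (HU) of pure bone-,
# red-marrow- and fat-equivalent inserts at the low and high kVp settings.
hu_low  = {"bone": 900.0, "marrow": 60.0, "fat": -90.0}   # 80 kVp
hu_high = {"bone": 600.0, "marrow": 55.0, "fat": -80.0}   # 140 kVp

def spongiosa_fractions(roi_hu_low, roi_hu_high):
    """Solve the three-equation / three-unknown system for the volume
    fractions (fb, fm, ff) of bone, red marrow and fat in a spongiosa ROI:

        HU_low  = fb*HU_b,low  + fm*HU_m,low  + ff*HU_f,low
        HU_high = fb*HU_b,high + fm*HU_m,high + ff*HU_f,high
        1       = fb + fm + ff
    """
    A = np.array([[hu_low["bone"],  hu_low["marrow"],  hu_low["fat"]],
                  [hu_high["bone"], hu_high["marrow"], hu_high["fat"]],
                  [1.0,             1.0,               1.0]])
    b = np.array([roi_hu_low, roi_hu_high, 1.0])
    return np.linalg.solve(A, b)

# With these calibration values, an ROI measuring 126 HU / 82.75 HU
# decomposes into fb = 0.15, fm = 0.45, ff = 0.40.
print(spongiosa_fractions(roi_hu_low=126.0, roi_hu_high=82.75))
```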
[Definition of nodal volumes in breast cancer treatment and segmentation guidelines].
Kirova, Y M; Castro Pena, P; Dendale, R; Campana, F; Bollet, M A; Fournier-Bidoz, N; Fourquet, A
2009-06-01
To assist in the determination of breast and nodal volumes in the setting of radiotherapy for breast cancer and to establish segmentation guidelines. Materials and methods. Contrast material-enhanced CT examinations were obtained in the treatment position in 25 patients to clearly define the target volumes. The clinical target volume (CTV) including the breast, internal mammary nodes, supraclavicular and subclavicular regions and axillary region was segmented along with the brachial plexus and interpectoral nodes. The following critical organs were also segmented: heart, lungs, contralateral breast, thyroid, esophagus and humeral head. A correlation between clinical and imaging findings and meetings between radiation oncologists and breast specialists resulted in a better definition of irradiation volumes for breast and nodes, with establishment of segmentation guidelines and creation of an anatomical atlas. A practical approach, based on anatomical criteria, is proposed to assist in the segmentation of breast and node volumes in the setting of breast cancer treatment, along with a definition of irradiation volumes.
Zhu, F; Kuhlmann, M K; Kaysen, G A; Sarkar, S; Kaitwatcharachai, C; Khilnani, R; Stevens, L; Leonard, E F; Wang, J; Heymsfield, S; Levin, N W
2006-02-01
Discrepancies in body fluid estimates between segmental bioimpedance spectroscopy (SBIS) and gold-standard methods may be due to the use of a uniform value of tissue resistivity to compute extracellular fluid volume (ECV) and intracellular fluid volume (ICV). Discrepancies may also arise from the exclusion of fluid volumes of hands, feet, neck, and head from measurements due to electrode positions. The aim of this study was to define the specific resistivity of various body segments and to use those values for computation of ECV and ICV along with a correction for unmeasured fluid volumes. Twenty-nine maintenance hemodialysis patients (16 men) underwent body composition analysis including whole body MRI, whole body potassium (40K) content, deuterium, and sodium bromide dilution, and segmental and wrist-to-ankle bioimpedance spectroscopy, all performed on the same day before a hemodialysis. Segment-specific resistivity was determined from segmental fat-free mass (FFM; by MRI), hydration status of FFM (by deuterium and sodium bromide), tissue resistance (by SBIS), and segment length. Segmental FFM was higher and extracellular hydration of FFM was lower in men compared with women. Segment-specific resistivity values for arm, trunk, and leg all differed from the uniform resistivity used in traditional SBIS algorithms. Estimates for whole body ECV, ICV, and total body water from SBIS using segmental instead of uniform resistivity values and after adjustment for unmeasured fluid volumes of the body did not differ significantly from gold-standard measures. The uniform tissue resistivity values used in traditional SBIS algorithms result in underestimation of ECV, ICV, and total body water. Use of segmental resistivity values combined with adjustment for body volumes that are neglected by traditional SBIS technique significantly improves estimations of body fluid volume in hemodialysis patients.
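The underlying computation can be illustrated with the usual cylinder model, in which a segment's fluid volume follows from V = ρL²/R with a segment-specific resistivity ρ, segment length L and measured resistance R. The sketch below is a generic illustration with made-up resistivity, length and resistance values; it does not use the paper's derived segment-specific resistivities or its corrections for unmeasured hand, foot, neck and head volumes.

```python
def segment_fluid_volume(resistivity_ohm_cm, length_cm, resistance_ohm):
    """Cylinder-model segment fluid volume: V = rho * L**2 / R.

    With rho in ohm*cm, L in cm and R in ohm, the volume is in cm**3 (ml).
    """
    return resistivity_ohm_cm * length_cm ** 2 / resistance_ohm

# Illustrative placeholder values only (not measured or published figures).
segments = {
    # name: (rho [ohm*cm], length [cm], measured extracellular resistance [ohm])
    "calf":  (250.0, 38.0, 300.0),
    "thigh": (260.0, 40.0, 140.0),
}
for name, (rho, length, r_ecf) in segments.items():
    ecv_litres = segment_fluid_volume(rho, length, r_ecf) / 1000.0
    print(f"{name:5s} ECV ~ {ecv_litres:.2f} L")
```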
Deeley, M A; Chen, A; Datteri, R; Noble, J; Cmelak, A; Donnelly, E; Malcolm, A; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Yei, F; Koyama, T; Ding, G X; Dawant, B M
2011-01-01
The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes, of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation (STAPLE) algorithm and a novel application of probability maps. The experts and automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8–0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4–0.5. Similarly low DSCs have been reported previously without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (−4.3, +5.4) mm for the automatic system to (−3.9, +7.5) mm for the experts considered as a group. Over all the structures in a rank of true positive rates at a 2 mm threshold from the simulated ground truth, the automatic system ranked second of the nine raters. This work underscores the need for large scale studies utilizing statistically robust numbers of patients and experts in evaluating quality of automatic algorithms. PMID:21725140
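For reference, the Dice similarity coefficient used in these comparisons is simply 2|A∩B|/(|A|+|B|) for two binary masks A and B; a minimal sketch with toy masks standing in for two hypothetical raters follows.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two overlapping toy "brainstem" masks from different raters.
rater1 = np.zeros((20, 20, 20), dtype=bool); rater1[5:15, 5:15, 5:15] = True
rater2 = np.zeros((20, 20, 20), dtype=bool); rater2[6:16, 6:16, 6:16] = True
print(round(dice_coefficient(rater1, rater2), 3))   # 0.729
```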
Effects of voxelization on dose volume histogram accuracy
NASA Astrophysics Data System (ADS)
Sunderland, Kyle; Pinter, Csaba; Lasso, Andras; Fichtinger, Gabor
2016-03-01
PURPOSE: In radiotherapy treatment planning systems, structures of interest such as targets and organs at risk are stored as 2D contours on evenly spaced planes. In order to be used in various algorithms, contours must be converted into binary labelmap volumes using voxelization. The voxelization process results in lost information, which has little effect on the volume of large structures, but has significant impact on small structures, which contain few voxels. Volume differences for segmented structures affect metrics such as dose volume histograms (DVHs), which are used for treatment planning. Our goal is to evaluate the impact of voxelization on segmented structures, as well as how factors such as voxel size affect metrics such as the DVH. METHODS: We create a series of implicit functions, which represent simulated structures. These structures are sampled at varying resolutions, and compared to labelmaps with high sub-millimeter resolutions. We generate DVHs and evaluate voxelization error for the same structures at different resolutions by calculating the agreement acceptance percentage between the DVHs. RESULTS: We implemented tools for analysis as modules in the SlicerRT toolkit based on the 3D Slicer platform. We found that there was large DVH variation from the baseline for small structures or for structures located in regions with a high dose gradient, potentially leading to the creation of suboptimal treatment plans. CONCLUSION: This work demonstrates that labelmap and dose volume voxel size is an important factor in DVH accuracy, which must be accounted for in order to ensure the development of accurate treatment plans.
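As background to the DVH comparisons described above, the sketch below shows a minimal cumulative dose-volume histogram computed from a dose grid and a binary structure labelmap. It is not the SlicerRT implementation; the dose gradient, structure, and voxel size are illustrative.

```python
# A minimal sketch (not the SlicerRT implementation) of computing a cumulative
# dose-volume histogram (DVH) from a dose grid and a binary structure labelmap.
import numpy as np

def cumulative_dvh(dose, mask, voxel_volume_cc, n_bins=200):
    """Return (dose_bins, volume_cc) for a cumulative DVH."""
    doses = dose[mask > 0]                      # dose values inside the structure
    bins = np.linspace(0.0, doses.max(), n_bins)
    # Volume (in cc) receiving at least each bin dose
    volume = np.array([(doses >= b).sum() * voxel_volume_cc for b in bins])
    return bins, volume

# Toy example: 1 mm^3 voxels, a small spherical "structure" in a dose gradient
z, y, x = np.mgrid[:50, :50, :50]
dose = x * 0.1                                  # simple dose gradient in Gy
mask = (x - 25) ** 2 + (y - 25) ** 2 + (z - 25) ** 2 < 8 ** 2
bins, vol = cumulative_dvh(dose, mask, voxel_volume_cc=0.001)
print(f"Structure volume: {vol[0]:.2f} cc, D_max bin: {bins[-1]:.1f} Gy")
```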
Bioimpedance Measurement of Segmental Fluid Volumes and Hemodynamics
NASA Technical Reports Server (NTRS)
Montgomery, Leslie D.; Wu, Yi-Chang; Ku, Yu-Tsuan E.; Gerth, Wayne A.; DeVincenzi, D. (Technical Monitor)
2000-01-01
Bioimpedance has become a useful tool to measure changes in body fluid compartment volumes. An Electrical Impedance Spectroscopic (EIS) system is described that extends the capabilities of conventional fixed frequency impedance plethysmographic (IPG) methods to allow examination of the redistribution of fluids between the intracellular and extracellular compartments of body segments. The combination of EIS and IPG techniques was evaluated in the human calf, thigh, and torso segments of eight healthy men during 90 minutes of six degree head-down tilt (HDT). After 90 minutes HDT the calf and thigh segments significantly (P < 0.05) lost conductive volume (eight and four percent, respectively) while the torso significantly (P < 0.05) gained volume (approximately three percent). Hemodynamic responses calculated from pulsatile IPG data also showed a segmental pattern consistent with vascular fluid loss from the lower extremities and vascular engorgement in the torso. Lumped-parameter equivalent circuit analyses of EIS data for the calf and thigh indicated that the overall volume decreases in these segments arose from reduced extracellular volume that was not completely balanced by increased intracellular volume. The combined use of IPG and EIS techniques enables noninvasive tracking of multi-segment volumetric and hemodynamic responses to environmental and physiological stresses.
Kline, Timothy L; Korfiatis, Panagiotis; Edwards, Marie E; Blais, Jaime D; Czerwiec, Frank S; Harris, Peter C; King, Bernard F; Torres, Vicente E; Erickson, Bradley J
2017-08-01
Deep learning techniques are being rapidly applied to medical imaging tasks, from organ and lesion segmentation to tissue and tumor classification. These techniques are becoming the leading algorithmic approaches to solve inherently difficult image processing tasks. Currently, the most critical requirement for successful implementation lies in the need for relatively large datasets that can be used for training the deep learning networks. Based on our initial studies of MR imaging examinations of the kidneys of patients affected by polycystic kidney disease (PKD), we have generated a unique database of imaging data and corresponding reference standard segmentations of polycystic kidneys. In the study of PKD, segmentation of the kidneys is needed in order to measure total kidney volume (TKV). Automated methods to segment the kidneys and measure TKV are needed to increase measurement throughput and alleviate the inherent variability of human-derived measurements. We hypothesize that deep learning techniques can be leveraged to perform fast, accurate, reproducible, and fully automated segmentation of polycystic kidneys. Here, we describe a fully automated approach for segmenting PKD kidneys within MR images that simulates a multi-observer approach in order to create an accurate and robust method for the task of segmentation and computation of TKV for PKD patients. A total of 2000 cases were used for training and validation, and 400 cases were used for testing. The multi-observer ensemble method had mean ± SD percent volume difference of 0.68 ± 2.2% compared with the reference standard segmentations. The complete framework performs fully automated segmentation at a level comparable with interobserver variability and could be considered as a replacement for the task of segmentation of PKD kidneys by a human.
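The abstract above reports a multi-observer ensemble and a percent volume difference against reference segmentations. The following is a hedged sketch of that general idea: average several candidate segmentations, threshold the consensus, and compare TKV against a reference. The network outputs are mocked with synthetic data and all names are illustrative.

```python
# A hedged sketch of the multi-observer ensemble idea: average several candidate
# segmentations, threshold the consensus, and compare total kidney volume (TKV)
# against a reference. Network inference is mocked here; names are illustrative.
import numpy as np

def ensemble_mask(prob_maps, threshold=0.5):
    """Consensus mask from a list of per-model probability maps."""
    return np.mean(prob_maps, axis=0) >= threshold

def percent_volume_difference(pred_mask, ref_mask, voxel_volume_ml):
    tkv_pred = pred_mask.sum() * voxel_volume_ml
    tkv_ref = ref_mask.sum() * voxel_volume_ml
    return 100.0 * (tkv_pred - tkv_ref) / tkv_ref

# Toy data standing in for per-observer model outputs and a reference segmentation
rng = np.random.default_rng(0)
ref = np.zeros((64, 64, 32), bool)
ref[16:48, 16:48, 8:24] = True
probs = [np.clip(ref + rng.normal(0, 0.2, ref.shape), 0, 1) for _ in range(3)]
pred = ensemble_mask(probs)
print(f"Percent volume difference: {percent_volume_difference(pred, ref, 0.004):+.2f}%")
```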
Venhuizen, Freerk G; van Ginneken, Bram; Liefers, Bart; van Asten, Freekje; Schreur, Vivian; Fauser, Sascha; Hoyng, Carel; Theelen, Thomas; Sánchez, Clara I
2018-04-01
We developed a deep learning algorithm for the automatic segmentation and quantification of intraretinal cystoid fluid (IRC) in spectral domain optical coherence tomography (SD-OCT) volumes independent of the device used for acquisition. A cascade of neural networks was introduced to include prior information on the retinal anatomy, boosting performance significantly. The proposed algorithm approached human performance reaching an overall Dice coefficient of 0.754 ± 0.136 and an intraclass correlation coefficient of 0.936, for the task of IRC segmentation and quantification, respectively. The proposed method allows for fast quantitative IRC volume measurements that can be used to improve patient care, reduce costs, and allow fast and reliable analysis in large population studies.
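The Dice coefficient reported above is the standard overlap measure between an automatic and a manual segmentation. A minimal illustrative computation, with toy masks rather than OCT data, is sketched below.

```python
# A minimal sketch of the Dice similarity coefficient used above to compare an
# automatic fluid segmentation with a manual reference (illustrative only).
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((32, 32), bool); pred[8:20, 8:20] = True
ref  = np.zeros((32, 32), bool); ref[10:22, 10:22] = True
print(f"Dice = {dice(pred, ref):.3f}")
```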
Venhuizen, Freerk G.; van Ginneken, Bram; Liefers, Bart; van Asten, Freekje; Schreur, Vivian; Fauser, Sascha; Hoyng, Carel; Theelen, Thomas; Sánchez, Clara I.
2018-01-01
We developed a deep learning algorithm for the automatic segmentation and quantification of intraretinal cystoid fluid (IRC) in spectral domain optical coherence tomography (SD-OCT) volumes independent of the device used for acquisition. A cascade of neural networks was introduced to include prior information on the retinal anatomy, boosting performance significantly. The proposed algorithm approached human performance reaching an overall Dice coefficient of 0.754 ± 0.136 and an intraclass correlation coefficient of 0.936, for the task of IRC segmentation and quantification, respectively. The proposed method allows for fast quantitative IRC volume measurements that can be used to improve patient care, reduce costs, and allow fast and reliable analysis in large population studies. PMID:29675301
Xu, Zhoubing; Gertz, Adam L.; Burke, Ryan P.; Bansal, Neil; Kang, Hakmook; Landman, Bennett A.; Abramson, Richard G.
2016-01-01
OBJECTIVES Multi-atlas fusion is a promising approach for computer-assisted segmentation of anatomical structures. The purpose of this study was to evaluate the accuracy and time efficiency of multi-atlas segmentation for estimating spleen volumes on clinically-acquired CT scans. MATERIALS AND METHODS Under IRB approval, we obtained 294 deidentified (HIPAA-compliant) abdominal CT scans on 78 subjects from a recent clinical trial. We compared five pipelines for obtaining splenic volumes: Pipeline 1–manual segmentation of all scans, Pipeline 2–automated segmentation of all scans, Pipeline 3–automated segmentation of all scans with manual segmentation for outliers on a rudimentary visual quality check, Pipelines 4 and 5–volumes derived from a unidimensional measurement of craniocaudal spleen length and three-dimensional splenic index measurements, respectively. Using Pipeline 1 results as ground truth, the accuracy of Pipelines 2–5 (Dice similarity coefficient [DSC], Pearson correlation, R-squared, and percent and absolute deviation of volume from ground truth) was compared for point estimates of splenic volume and for change in splenic volume over time. Time cost was also compared for Pipelines 1–5. RESULTS Pipeline 3 was dominant in terms of both accuracy and time cost. With a Pearson correlation coefficient of 0.99, an average absolute volume deviation of 23.7 cm³, and 1 minute per scan, Pipeline 3 yielded the best results. The second-best approach was Pipeline 5, with a Pearson correlation coefficient of 0.98, absolute deviation of 46.92 cm³, and 1 minute 30 seconds per scan. Manual segmentation (Pipeline 1) required 11 minutes per scan. CONCLUSION A computer-automated segmentation approach with manual correction of outliers generated accurate splenic volumes with reasonable time efficiency. PMID:27519156
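A small sketch of the volume-agreement metrics named above (Pearson correlation and absolute/percent deviation from the manual reference) is given below. The volumes are made up for illustration; they are not the study's data.

```python
# A small sketch comparing automated spleen volumes against manual reference
# volumes with the accuracy metrics named above (Pearson r, absolute deviation).
# The numbers below are made up for illustration.
import numpy as np
from scipy import stats

manual = np.array([210.0, 540.0, 180.0, 950.0, 320.0])   # cm^3, reference (Pipeline 1)
auto   = np.array([220.0, 515.0, 195.0, 930.0, 300.0])   # cm^3, automated (Pipeline 2/3)

r, _ = stats.pearsonr(auto, manual)
abs_dev = np.abs(auto - manual)
pct_dev = 100.0 * (auto - manual) / manual
print(f"Pearson r = {r:.3f}, mean |dV| = {abs_dev.mean():.1f} cm^3, "
      f"mean %dev = {pct_dev.mean():+.1f}%")
```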
Segmentation of Unstructured Datasets
NASA Technical Reports Server (NTRS)
Bhat, Smitha
1996-01-01
Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.
Innovative visualization and segmentation approaches for telemedicine
NASA Astrophysics Data System (ADS)
Nguyen, D.; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet
2014-09-01
In health care applications, we obtain, manage, store, and communicate large volumes of high-quality image data through integrated devices. In this paper we propose several promising methods that can assist physicians in image data processing and communication. We design a new semi-automated segmentation approach for radiological images, such as CT and MRI, to clearly identify the areas of interest. This approach combines the advantages of both region-based and boundary-based methods. It comprises three key steps: coarse segmentation using a fuzzy affinity and homogeneity operator, image division and reclassification using the Voronoi diagram, and refinement of boundary lines using a level set model.
Automatically tracking neurons in a moving and deforming brain
Nguyen, Jeffrey P.; Linder, Ashley N.; Plummer, George S.; Shaevitz, Joshua W.
2017-01-01
Advances in optical neuroimaging techniques now allow neural activity to be recorded with cellular resolution in awake and behaving animals. Brain motion in these recordings poses a unique challenge. The location of individual neurons must be tracked in 3D over time to accurately extract single neuron activity traces. Recordings from small invertebrates like C. elegans are especially challenging because they undergo very large brain motion and deformation during animal movement. Here we present an automated computer vision pipeline to reliably track populations of neurons with single neuron resolution in the brain of a freely moving C. elegans undergoing large motion and deformation. 3D volumetric fluorescent images of the animal’s brain are straightened, aligned and registered, and the locations of neurons in the images are found via segmentation. Each neuron is then assigned an identity using a new time-independent machine-learning approach we call Neuron Registration Vector Encoding. In this approach, non-rigid point-set registration is used to match each segmented neuron in each volume with a set of reference volumes taken from throughout the recording. The way each neuron matches with the references defines a feature vector which is clustered to assign an identity to each neuron in each volume. Finally, thin-plate spline interpolation is used to correct errors in segmentation and check consistency of assigned identities. The Neuron Registration Vector Encoding approach proposed here is uniquely well suited for tracking neurons in brains undergoing large deformations. When applied to whole-brain calcium imaging recordings in freely moving C. elegans, this analysis pipeline located 156 neurons for the duration of an 8 minute recording and consistently found more neurons more quickly than manual or semi-automated approaches. PMID:28545068
Automatically tracking neurons in a moving and deforming brain.
Nguyen, Jeffrey P; Linder, Ashley N; Plummer, George S; Shaevitz, Joshua W; Leifer, Andrew M
2017-05-01
Advances in optical neuroimaging techniques now allow neural activity to be recorded with cellular resolution in awake and behaving animals. Brain motion in these recordings poses a unique challenge. The location of individual neurons must be tracked in 3D over time to accurately extract single neuron activity traces. Recordings from small invertebrates like C. elegans are especially challenging because they undergo very large brain motion and deformation during animal movement. Here we present an automated computer vision pipeline to reliably track populations of neurons with single neuron resolution in the brain of a freely moving C. elegans undergoing large motion and deformation. 3D volumetric fluorescent images of the animal's brain are straightened, aligned and registered, and the locations of neurons in the images are found via segmentation. Each neuron is then assigned an identity using a new time-independent machine-learning approach we call Neuron Registration Vector Encoding. In this approach, non-rigid point-set registration is used to match each segmented neuron in each volume with a set of reference volumes taken from throughout the recording. The way each neuron matches with the references defines a feature vector which is clustered to assign an identity to each neuron in each volume. Finally, thin-plate spline interpolation is used to correct errors in segmentation and check consistency of assigned identities. The Neuron Registration Vector Encoding approach proposed here is uniquely well suited for tracking neurons in brains undergoing large deformations. When applied to whole-brain calcium imaging recordings in freely moving C. elegans, this analysis pipeline located 156 neurons for the duration of an 8 minute recording and consistently found more neurons more quickly than manual or semi-automated approaches.
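The matching-then-clustering idea described in the abstract above can be illustrated with a toy example. In the sketch below, a simple nearest-neighbour lookup stands in for the paper's non-rigid point-set registration, and the resulting match vectors are clustered to assign identities; all positions are synthetic and the parameters are illustrative.

```python
# A toy sketch of the matching-then-clustering idea behind Neuron Registration
# Vector Encoding: each segmented neuron is matched (here with simple nearest-
# neighbour lookup standing in for non-rigid point-set registration) against a
# set of reference volumes, and the resulting match vectors are clustered to
# assign identities. All data below are synthetic.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(1)
n_neurons, n_volumes, n_refs = 20, 30, 5
base = rng.uniform(0, 100, size=(n_neurons, 3))                 # "true" neuron positions
volumes = [base + rng.normal(0, 1.0, base.shape) for _ in range(n_volumes)]
refs = [base + rng.normal(0, 1.0, base.shape) for _ in range(n_refs)]
trees = [cKDTree(r) for r in refs]

# Feature vector per detected neuron: index of its nearest neighbour in each reference
features = []
for vol in volumes:
    for p in vol:
        features.append([t.query(p)[1] for t in trees])
features = np.array(features, dtype=float)

ids = AgglomerativeClustering(n_clusters=n_neurons).fit_predict(features)
# Each cluster should gather the same neuron across volumes
print("cluster sizes:", np.bincount(ids))
```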
Seuss, Hannes; Janka, Rolf; Prümmer, Marcus; Cavallaro, Alexander; Hammon, Rebecca; Theis, Ragnar; Sandmair, Martin; Amann, Kerstin; Bäuerle, Tobias; Uder, Michael; Hammon, Matthias
2017-04-01
Volumetric analysis of the kidney parenchyma provides additional information for the detection and monitoring of various renal diseases. Therefore the purposes of the study were to develop and evaluate a semi-automated segmentation tool and a modified ellipsoid formula for volumetric analysis of the kidney in non-contrast T2-weighted magnetic resonance (MR)-images. Three readers performed semi-automated segmentation of the total kidney volume (TKV) in axial, non-contrast-enhanced T2-weighted MR-images of 24 healthy volunteers (48 kidneys) twice. A semi-automated threshold-based segmentation tool was developed to segment the kidney parenchyma. Furthermore, the three readers measured renal dimensions (length, width, depth) and applied different formulas to calculate the TKV. Manual segmentation served as a reference volume. Volumes of the different methods were compared and time required was recorded. There was no significant difference between the semi-automatically and manually segmented TKV (p = 0.31). The difference in mean volumes was 0.3 ml (95% confidence interval (CI), -10.1 to 10.7 ml). Semi-automated segmentation was significantly faster than manual segmentation, with a mean difference = 188 s (220 vs. 408 s); p < 0.05. Volumes did not differ significantly comparing the results of different readers. Calculation of TKV with a modified ellipsoid formula (ellipsoid volume × 0.85) did not differ significantly from the reference volume; however, the mean error was three times higher (difference of mean volumes -0.1 ml; CI -31.1 to 30.9 ml; p = 0.95). Applying the modified ellipsoid formula was the fastest way to get an estimation of the renal volume (41 s). Semi-automated segmentation and volumetric analysis of the kidney in native T2-weighted MR data delivers accurate and reproducible results and was significantly faster than manual segmentation. Applying a modified ellipsoid formula quickly provides an accurate kidney volume.
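The modified ellipsoid estimate mentioned above (ellipsoid volume scaled by 0.85) can be written down directly. The sketch below assumes the usual pi/6 × length × width × depth ellipsoid volume from the three measured dimensions; the dimensions used are illustrative, not patient data.

```python
# A hedged sketch of the modified ellipsoid estimate mentioned above: the standard
# ellipsoid volume (pi/6 * length * width * depth) scaled by the reported factor
# of 0.85. The dimensions below are illustrative, not patient data.
import math

def modified_ellipsoid_kidney_volume(length_cm, width_cm, depth_cm, factor=0.85):
    """Kidney volume estimate (ml) from three orthogonal dimensions in cm."""
    return math.pi / 6.0 * length_cm * width_cm * depth_cm * factor

left = modified_ellipsoid_kidney_volume(11.2, 5.4, 4.8)
right = modified_ellipsoid_kidney_volume(10.8, 5.1, 4.5)
print(f"Estimated total kidney volume: {left + right:.0f} ml")
```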
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rios Velazquez, E; Meier, R; Dunn, W
Purpose: Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to manually defined sub-volumes by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. Methods: MRI sets of 67 GBM patients were downloaded from the Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software, including necrosis, edema, contrast enhancing and non-enhancing tumor. Spearman’s correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Results: Auto-segmented sub-volumes showed high agreement with manually delineated volumes (range (r): 0.65 – 0.91). They also showed higher correlation with VASARI features (auto r = 0.35, 0.60 and 0.59; manual r = 0.29, 0.50, 0.43, for contrast-enhancing, necrosis and edema, respectively). The contrast-enhancing volume and post-contrast abnormal volume showed the highest C-index (0.73 and 0.72), comparable to manually defined volumes (p = 0.22 and p = 0.07, respectively). The non-enhancing region defined by BraTumIA showed a significantly higher prognostic value (CI = 0.71) than the edema (CI = 0.60), whereas the two could not be distinguished by manual delineation. Conclusion: BraTumIA tumor sub-compartments showed higher correlation with VASARI data, and equivalent performance in terms of prognosis compared to manual sub-volumes. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has large potential in high-throughput medical imaging research.
Volume Segmentation and Ghost Particles
NASA Astrophysics Data System (ADS)
Ziskin, Isaac; Adrian, Ronald
2011-11-01
Volume Segmentation Tomographic PIV (VS-TPIV) is a type of tomographic PIV in which images of particles in a relatively thick volume are segmented into images on a set of much thinner volumes that may be approximated as planes, as in 2D planar PIV. The planes of images can be analysed by standard mono-PIV, and the volume of flow vectors can be recreated by assembling the planes of vectors. The interrogation process is similar to a Holographic PIV analysis, except that the planes of image data are extracted from two-dimensional camera images of the volume of particles instead of three-dimensional holographic images. Like the tomographic PIV method using the MART algorithm, Volume Segmentation requires at least two cameras and works best with three or four. Unlike the MART method, Volume Segmentation does not require reconstruction of individual particle images one pixel at a time and it does not require an iterative process, so it operates much faster. As in all tomographic reconstruction strategies, ambiguities known as ghost particles are produced in the segmentation process. The effect of these ghost particles on the PIV measurement is discussed. This research was supported by Contract 79419-001-09, Los Alamos National Laboratory.
Segmentation of organs at risk in CT volumes of head, thorax, abdomen, and pelvis
NASA Astrophysics Data System (ADS)
Han, Miaofei; Ma, Jinfeng; Li, Yan; Li, Meiling; Song, Yanli; Li, Qiang
2015-03-01
Accurate segmentation of organs at risk (OARs) is a key step in the treatment planning system (TPS) of image-guided radiation therapy. We are developing three classes of methods to segment 17 organs at risk throughout the whole body, including brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin. The three classes of segmentation methods include (1) threshold-based methods for organs of large contrast with adjacent structures such as lungs, trachea, and skin; (2) context-driven Generalized Hough Transform-based methods combined with a graph cut algorithm for robust localization and segmentation of liver, kidneys and spleen; and (3) atlas and registration-based methods for segmentation of heart and all organs in CT volumes of head and pelvis. The segmentation accuracy for the seventeen organs was subjectively evaluated by two medical experts in three levels of score: 0, poor (unusable in clinical practice); 1, acceptable (minor revision needed); and 2, good (nearly no revision needed). A database was collected from Ruijin Hospital, Huashan Hospital, and Xuhui Central Hospital in Shanghai, China, including 127 head scans, 203 thoracic scans, 154 abdominal scans, and 73 pelvic scans. The percentages of "good" segmentation results were 97.6%, 92.9%, 81.1%, 87.4%, 85.0%, 78.7%, 94.1%, 91.1%, 81.3%, 86.7%, 82.5%, 86.4%, 79.9%, 72.6%, 68.5%, 93.2%, 96.9% for brain, brain stem, eyes, mandible, temporomandibular joints, parotid glands, spinal cord, lungs, trachea, heart, liver, kidneys, spleen, prostate, rectum, femoral heads, and skin, respectively. Various organs at risk can be reliably segmented from CT scans by use of the three classes of segmentation methods.
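The first class of methods above (threshold-based segmentation for high-contrast organs such as the lungs) is illustrated by the sketch below: threshold the CT volume at an air-like intensity, restrict to the interior of the body, and keep the two largest components. The threshold values and toy volume are assumptions for illustration, not the paper's parameters.

```python
# A minimal sketch of threshold-based lung segmentation: threshold the CT volume
# at an air-like intensity, remove the background, and keep the largest components.
# Threshold and sizes are illustrative assumptions, not the paper's parameters.
import numpy as np
from scipy import ndimage

def segment_lungs(ct_hu, air_threshold=-400):
    body = ct_hu > -500                                   # crude body mask
    candidate = (ct_hu < air_threshold) & ndimage.binary_fill_holes(body)
    labels, n = ndimage.label(candidate)
    if n == 0:
        return np.zeros_like(candidate)
    sizes = ndimage.sum(candidate, labels, index=range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1                     # two largest = left/right lung
    return np.isin(labels, keep)

# Toy volume: body of soft tissue (~0 HU) with two air-filled "lungs" (~-800 HU)
ct = np.full((40, 64, 64), -1000.0)
ct[:, 8:56, 8:56] = 0.0
ct[5:35, 16:30, 16:48] = -800.0
ct[5:35, 34:48, 16:48] = -800.0
print("lung voxels:", segment_lungs(ct).sum())
```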
Student beats the teacher: deep neural networks for lateral ventricles segmentation in brain MR
NASA Astrophysics Data System (ADS)
Ghafoorian, Mohsen; Teuwen, Jonas; Manniesing, Rashindra; Leeuw, Frank-Erik d.; van Ginneken, Bram; Karssemeijer, Nico; Platel, Bram
2018-03-01
Ventricular volume and its progression are known to be linked to several brain diseases such as dementia and schizophrenia. Therefore, accurate measurement of ventricle volume is vital for longitudinal studies on these disorders, making automated ventricle segmentation algorithms desirable. In the past few years, deep neural networks have been shown to outperform the classical models in many imaging domains. However, the success of deep networks depends on manually labeled data sets, which are expensive to acquire, especially for higher-dimensional data in the medical domain. In this work, we show that deep neural networks can be trained on much cheaper-to-acquire pseudo-labels (e.g., generated by other, less accurate automated methods) and still produce more accurate segmentations compared to the quality of the labels. To show this, we use noisy segmentation labels generated by a conventional region growing algorithm to train a deep network for lateral ventricle segmentation. Then, on a large manually annotated test set, we show that the network significantly outperforms the conventional region growing algorithm which was used to produce the training labels for the network. Our experiments report a Dice Similarity Coefficient (DSC) of 0.874 for the trained network compared to 0.754 for the conventional region growing algorithm (p < 0.001).
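The pseudo-label idea above can be sketched in a few lines: train a small segmentation network on labels produced by a cheap automated method. In the toy PyTorch snippet below, a crude intensity threshold stands in for the region-growing labels, and the tiny network and data are illustrative assumptions rather than the paper's architecture.

```python
# A minimal PyTorch sketch of the pseudo-label idea above: train a small
# segmentation network on noisy labels produced by a cheap automated method
# (here a crude intensity threshold stands in for the region-growing labels).
# Architecture and data are illustrative, not the paper's network.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):
    img = torch.rand(4, 1, 64, 64)                 # stand-in MR slices
    pseudo = (img > 0.7).float()                   # noisy "region growing" labels
    opt.zero_grad()
    loss = loss_fn(net(img), pseudo)
    loss.backward()
    opt.step()
print(f"final training loss on pseudo-labels: {loss.item():.4f}")
```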
Microfluidic device and method for focusing, segmenting, and dispensing of a fluid stream
Jacobson, Stephen C [Knoxville, TN; Ramsey, J Michael [Knoxville, TN
2008-09-09
A microfluidic device and method for forming and dispensing minute volume segments of a material are described. In accordance with the present invention, a microfluidic device and method are provided for spatially confining the material in a focusing element. The device is also adapted for segmenting the confined material into minute volume segments, and dispensing a volume segment to a waste or collection channel. The device further includes means for driving the respective streams of sample and focusing fluids through respective channels into a chamber, such that the focusing fluid streams spatially confine the sample material. The device may also include additional means for driving a minute volume segment of the spatially confined sample material into a collection channel in fluid communication with the waste reservoir.
Microfluidic device and method for focusing, segmenting, and dispensing of a fluid stream
Jacobson, Stephen C.; Ramsey, J. Michael
2004-09-14
A microfluidic device for forming and/or dispensing minute volume segments of a material is described. In accordance with one aspect of the present invention, a microfluidic device and method is provided for spatially confining the material in a focusing element. The device is also capable of segmenting the confined material into minute volume segments, and dispensing a volume segment to a waste or collection channel. The device further includes means for driving the respective streams of sample and focusing fluids through respective channels into a chamber, such that the focusing fluid streams spatially confine the sample material. The device may also include additional means for driving a minute volume segment of the spatially confined sample material into a collection channel in fluid communication with the waste reservoir.
Effects of dams and geomorphic context on riparian forests of the Elwha River, Washington
Shafroth, Patrick B.; Perry, Laura G; Rose, Chanoane A; Braatne, Jeffrey H
2016-01-01
Understanding how dams affect the shifting habitat mosaic of river bottomlands is key for protecting the many ecological functions and related goods and services that riparian forests provide and for informing approaches to riparian ecosystem restoration. We examined the downstream effects of two large dams on patterns of forest composition, structure, and dynamics within different geomorphic contexts and compared them to upstream reference conditions along the Elwha River, Washington, USA. Patterns of riparian vegetation in river segments downstream of the dams were driven largely by channel and bottomland geomorphic responses to a dramatically reduced sediment supply. The river segment upstream of both dams was the most geomorphically dynamic, whereas the segment between the dams was the least dynamic due to substantial channel armoring, and the segment downstream of both dams was intermediate due to some local sediment supply. These geomorphic differences were linked to altered characteristics of the shifting habitat mosaic, including older forest age structure and fewer young Populus balsamifera subsp. trichocarpa stands in the relatively static segment between the dams compared to more extensive early-successional forests (dominated by Alnus rubra and Salix spp.) and pioneer seedling recruitment upstream of the dams. Species composition of later-successional forest communities varied among river segments as well, with greater Pseudotsuga menziesii and Tsuga heterophylla abundance upstream of both dams, Acer spp. abundance between the dams, and P. balsamifera subsp. trichocarpa and Thuja plicata abundance below both dams. Riparian forest responses to the recent removal of the two dams on the Elwha River will depend largely on channel and geomorphic adjustments to the release, transport, and deposition of the large volume of sediment formerly stored in the reservoirs, together with changes in large wood dynamics.
An Inverter Packaging Scheme for an Integrated Segmented Traction Drive System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Gui-Jia; Tang, Lixin; Ayers, Curtis William
The standard voltage source inverter (VSI), widely used in electric vehicle/hybrid electric vehicle (EV/HEV) traction drives, requires a bulky dc bus capacitor to absorb the large switching ripple currents and prevent them from shortening the battery's life. The dc bus capacitor presents a significant barrier to meeting inverter cost, volume, and weight requirements for mass production of affordable EVs/HEVs. The large ripple currents become even more problematic for the film capacitors (the capacitor technology of choice for EVs/HEVs) in high temperature environments as their ripple current handling capability decreases rapidly with rising temperatures. It is shown in previous work that segmenting the VSI based traction drive system can significantly decrease the ripple currents and thus the size of the dc bus capacitor. This paper presents an integrated packaging scheme to reduce the system cost of a segmented traction drive.
Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets.
Guadalupe, Tulio; Zwiers, Marcel P; Teumer, Alexander; Wittfeld, Katharina; Vasquez, Alejandro Arias; Hoogman, Martine; Hagoort, Peter; Fernandez, Guillen; Buitelaar, Jan; Hegenscheid, Katrin; Völzke, Henry; Franke, Barbara; Fisher, Simon E; Grabe, Hans J; Francks, Clyde
2014-07-01
Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P < 5 × 10⁻⁸). There was no enrichment of genetic association in genes involved in left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries. Copyright © 2013 Wiley Periodicals, Inc.
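Because the exact asymmetry definition used in the study is not stated in the abstract, the sketch below assumes one common formulation, AI = (L − R) / ((L + R) / 2), and shows a scan-rescan agreement check on toy volumes.

```python
# A small sketch of one common way to quantify left-right volumetric asymmetry
# (the exact definition used in the study is not given here, so this is an
# assumption): AI = (L - R) / ((L + R) / 2), plus scan-rescan agreement.
import numpy as np
from scipy import stats

def asymmetry_index(left_vol, right_vol):
    left_vol, right_vol = np.asarray(left_vol, float), np.asarray(right_vol, float)
    return (left_vol - right_vol) / ((left_vol + right_vol) / 2.0)

# Toy scan-rescan caudate volumes (mm^3) for a handful of subjects
scan1_L, scan1_R = np.array([3900, 4100, 3700]), np.array([3800, 4000, 3650])
scan2_L, scan2_R = np.array([3850, 4150, 3720]), np.array([3790, 3980, 3680])
ai1, ai2 = asymmetry_index(scan1_L, scan1_R), asymmetry_index(scan2_L, scan2_R)
r, _ = stats.pearsonr(ai1, ai2)
print("AI scan 1:", np.round(ai1, 3), " scan-rescan r =", round(r, 2))
```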
Economic Analysis. Volume V. Course Segments 65-79.
ERIC Educational Resources Information Center
Sterling Inst., Washington, DC. Educational Technology Center.
The fifth volume of the multimedia, individualized course in economic analysis produced for the United States Naval Academy covers segments 65-79 of the course. Included in the volume are discussions of monopoly markets, monopolistic competition, oligopoly markets, and the theory of factor demand and supply. Other segments of the course, the…
Fallah, Faezeh; Machann, Jürgen; Martirosian, Petros; Bamberg, Fabian; Schick, Fritz; Yang, Bin
2017-04-01
To evaluate and compare conventional T1-weighted 2D turbo spin echo (TSE), T1-weighted 3D volumetric interpolated breath-hold examination (VIBE), and two-point 3D Dixon-VIBE sequences for automatic segmentation of visceral adipose tissue (VAT) volume at 3 Tesla by measuring and compensating for errors arising from intensity nonuniformity (INU) and partial volume effects (PVE). The body trunks of 28 volunteers with body mass index values ranging from 18 to 41.2 kg/m² (30.02 ± 6.63 kg/m²) were scanned at 3 Tesla using three imaging techniques. Automatic methods were applied to reduce INU and PVE and to segment VAT. The automatically segmented VAT volumes obtained from all acquisitions were then statistically and objectively evaluated against the manually segmented (reference) VAT volumes. Comparing the reference volumes with the VAT volumes automatically segmented over the uncorrected images showed that INU led to an average relative volume difference of -59.22 ± 11.59, 2.21 ± 47.04, and -43.05 ± 5.01 % for the TSE, VIBE, and Dixon images, respectively, while PVE led to average differences of -34.85 ± 19.85, -15.13 ± 11.04, and -33.79 ± 20.38 %. After signal correction, differences of -2.72 ± 6.60, 34.02 ± 36.99, and -2.23 ± 7.58 % were obtained between the reference and the automatically segmented volumes. A paired-sample two-tailed t test revealed no significant difference between the reference and automatically segmented VAT volumes of the corrected TSE (p = 0.614) and Dixon (p = 0.969) images, but showed a significant VAT overestimation using the corrected VIBE images. Under similar imaging conditions and spatial resolution, automatically segmented VAT volumes obtained from the corrected TSE and Dixon images agreed with each other and with the reference volumes. These results demonstrate the efficacy of the signal correction methods and the similar accuracy of TSE and Dixon imaging for automatic volumetry of VAT at 3 Tesla.
Denoising and 4D visualization of OCT images
Gargesha, Madhusudhana; Jenkins, Michael W.; Rollins, Andrew M.; Wilson, David L.
2009-01-01
We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge one's ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm with a quick, data set specific optimization for reduction of both shot and speckle noise and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings concerning both noise and 3D visualization. Specifically, the algorithm provided a better visualization of the myocardial and endocardial surfaces, and the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity, etc.) in OCT embryo images. With its semi-automatic, data set specific optimization, we believe that the algorithm can be applied to OCT images from other applications. PMID:18679509
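Two of the baseline denoisers named above (median and Wiener filtering) are standard filters and are easy to sketch; the data-set-specific algorithm of the paper is not reproduced here, and the synthetic image below is only illustrative.

```python
# A small sketch of two of the baseline denoisers mentioned above (median and
# Wiener filtering) applied to a noisy 2D OCT-like image; the new algorithm in
# the paper is data-set-specific and is not reproduced here.
import numpy as np
from scipy import ndimage, signal

rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[40:90, 30:100] = 1.0                       # a bright structure
noisy = clean + rng.normal(0, 0.3, clean.shape)  # additive noise stand-in

median_out = ndimage.median_filter(noisy, size=3)
wiener_out = signal.wiener(noisy, mysize=5)

for name, img in [("median", median_out), ("wiener", wiener_out)]:
    rmse = np.sqrt(np.mean((img - clean) ** 2))
    print(f"{name:6s} RMSE vs clean: {rmse:.3f}")
```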
Accurate airway segmentation based on intensity structure analysis and graph-cut
NASA Astrophysics Data System (ADS)
Meng, Qier; Kitsaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Mori, Kensaku
2016-03-01
This paper presents a novel airway segmentation method based on intensity structure analysis and graph-cut. Airway segmentation is an important step in analyzing chest CT volumes for computerized lung cancer detection, emphysema diagnosis, asthma diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3-D airway tree structure from a CT volume is quite challenging. Several researchers have proposed automated algorithms basically based on region growing and machine learning techniques. However, these methods failed to detect the peripheral bronchial branches and caused a large amount of leakage. This paper presents a novel approach that permits more accurate extraction of the complex bronchial airway region. Our method is composed of three steps. First, Hessian analysis is utilized to enhance the line-like structures in CT volumes, then a multiscale cavity-enhancement filter is employed to detect the cavity-like structures from the previous enhanced result. In the second step, we utilize a support vector machine (SVM) to construct a classifier for removing the false positive (FP) regions generated. Finally, the graph-cut algorithm is utilized to connect all of the candidate voxels to form an integrated airway tree. We applied this method to sixteen cases of 3D chest CT volumes. The results showed that the branch detection rate of this method can reach about 77.7% without leaking into the lung parenchyma areas.
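The first step above, Hessian-based enhancement of line-like structures, can be illustrated with an off-the-shelf Hessian filter. The sketch below uses scikit-image's Frangi filter as a stand-in for the paper's specific enhancement and cavity filters; the synthetic slice, scales, and threshold are illustrative assumptions.

```python
# A hedged sketch of Hessian-based line enhancement, the first step described
# above, using scikit-image's Frangi filter as a stand-in for the paper's
# specific enhancement and cavity filters (parameters are illustrative).
import numpy as np
from skimage.filters import frangi

# Synthetic 2D slice with a bright tube-like structure on a dark background
img = np.zeros((128, 128))
img[:, 60:64] = 1.0                         # vertical "airway-like" line
img += np.random.default_rng(0).normal(0, 0.05, img.shape)

enhanced = frangi(img, sigmas=range(1, 4), black_ridges=False)
candidate = enhanced > 0.5 * enhanced.max() # simple threshold on the response
print("enhanced response max:", round(float(enhanced.max()), 3),
      "| candidate pixels:", int(candidate.sum()))
```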
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stützer, Kristin; Haase, Robert; Exner, Florian
2016-09-15
Purpose: Rating both a lung segmentation algorithm and a deformable image registration (DIR) algorithm for subsequent lung computed tomography (CT) images by different evaluation techniques. Furthermore, investigating the relative performance and the correlation of the different evaluation techniques to address their potential value in a clinical setting. Methods: Two to seven subsequent CT images (69 in total) of 15 lung cancer patients were acquired prior to, during, and after radiochemotherapy. Automated lung segmentations were compared to manually adapted contours. DIR between the first and all following CT images was performed with a fast algorithm specialized for lung tissue registration, requiring the lung segmentation as input. DIR results were evaluated based on landmark distances, lung contour metrics, and vector field inconsistencies in different subvolumes defined by eroding the lung contour. Correlations between the results from the three methods were evaluated. Results: Automated lung contour segmentation was satisfactory in 18 cases (26%), failed in 6 cases (9%), and required manual correction in 45 cases (66%). Initial and corrected contours had large overlap but showed strong local deviations. Landmark-based DIR evaluation revealed high accuracy compared to CT resolution with an average error of 2.9 mm. Contour metrics of deformed contours were largely satisfactory. The median vector length of inconsistency vector fields was 0.9 mm in the lung volume and slightly smaller for the eroded volumes. There was no clear correlation between the three evaluation approaches. Conclusions: Automatic lung segmentation remains challenging but can assist the manual delineation process. Proven by three techniques, the inspected DIR algorithm delivers reliable results for the lung CT data sets acquired at different time points. Clinical application of DIR demands a fast DIR evaluation to identify unacceptable results, for instance, by combining different automated DIR evaluation methods.
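The landmark-based DIR evaluation mentioned above amounts to warping landmark points with the recovered deformation and measuring the residual Euclidean distance to their counterparts in the fixed image. The sketch below uses a toy rigid shift in place of a real DIR vector field.

```python
# A minimal sketch of the landmark-based DIR evaluation mentioned above: apply
# the deformation to landmark points in the moving image and report the mean
# Euclidean distance to the corresponding landmarks in the fixed image.
# The "deformation" below is a toy stand-in for a real DIR vector field.
import numpy as np

def landmark_error(moving_pts_mm, fixed_pts_mm, deform):
    warped = np.array([p + deform(p) for p in moving_pts_mm])
    return np.linalg.norm(warped - fixed_pts_mm, axis=1)

rng = np.random.default_rng(0)
fixed = rng.uniform(0, 300, size=(10, 3))                  # landmark positions in mm
true_shift = np.array([3.0, -2.0, 1.5])
moving = fixed - true_shift                                # landmarks before deformation
estimated = lambda p: true_shift + rng.normal(0, 1.0, 3)   # imperfect recovered DIR

errors = landmark_error(moving, fixed, estimated)
print(f"mean landmark error: {errors.mean():.2f} mm (max {errors.max():.2f} mm)")
```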
Multi-Modal Glioblastoma Segmentation: Man versus Machine
Pica, Alessia; Schucht, Philippe; Beck, Jürgen; Verma, Rajeev Kumar; Slotboom, Johannes; Reyes, Mauricio; Wiest, Roland
2014-01-01
Background and Purpose Reproducible segmentation of brain tumors on magnetic resonance images is an important clinical need. This study was designed to evaluate the reliability of a novel fully automated segmentation tool for brain tumor image analysis in comparison to manually defined tumor segmentations. Methods We prospectively evaluated preoperative MR Images from 25 glioblastoma patients. Two independent expert raters performed manual segmentations. Automatic segmentations were performed using the Brain Tumor Image Analysis software (BraTumIA). In order to study the different tumor compartments, the complete tumor volume TV (enhancing part plus non-enhancing part plus necrotic core of the tumor), the TV+ (TV plus edema) and the contrast enhancing tumor volume CETV were identified. We quantified the overlap between manual and automated segmentation by calculation of diameter measurements as well as the Dice coefficients, the positive predictive values, sensitivity, relative volume error and absolute volume error. Results Comparison of automated versus manual extraction of 2-dimensional diameter measurements showed no significant difference (p = 0.29). Comparison of automated versus manual segmentation of volumetric segmentations showed significant differences for TV+ and TV (p<0.05) but no significant differences for CETV (p>0.05) with regard to the Dice overlap coefficients. Spearman's rank correlation coefficients (ρ) of TV+, TV and CETV showed highly significant correlations between automatic and manual segmentations. Tumor localization did not influence the accuracy of segmentation. Conclusions In summary, we demonstrated that BraTumIA supports radiologists and clinicians by providing accurate measures of cross-sectional diameter-based tumor extensions. The automated volume measurements were comparable to manual tumor delineation for CETV tumor volumes, and outperformed inter-rater variability for overlap and sensitivity. PMID:24804720
Malherbe, Stephanus T; Dupont, Patrick; Kant, Ilse; Ahlers, Petri; Kriel, Magdalena; Loxton, André G; Chen, Ray Y; Via, Laura E; Thienemann, Friedrich; Wilkinson, Robert J; Barry, Clifton E; Griffith-Richards, Stephanie; Ellman, Annare; Ronacher, Katharina; Winter, Jill; Walzl, Gerhard; Warwick, James M
2018-06-25
There is a growing interest in the use of ¹⁸F-FDG PET-CT to monitor tuberculosis (TB) treatment response. However, TB causes complex and widespread pathology, which is challenging to segment and quantify in a reproducible manner. To address this, we developed a technique to standardise uptake (Z-score), segment and quantify tuberculous lung lesions on PET and CT concurrently, in order to track changes over time. We used open source tools and created a MATLAB script. The technique was optimised on a training set of five pulmonary tuberculosis (PTB) cases after standard TB therapy and 15 control patients with lesion-free lungs. We compared the proposed method to a fixed threshold (SUV > 1) and manual segmentation by two readers and piloted the technique successfully on scans of five control patients and five PTB cases (four cured and one failed treatment case), at diagnosis and after 1 and 6 months of treatment. There was a better correlation between the Z-score-based segmentation and manual segmentation than between SUV > 1 and manual segmentation in terms of overall spatial overlap (measured by the Dice similarity coefficient) and specificity (1 minus the false positive volume fraction). However, SUV > 1 segmentation appeared more sensitive. Both the Z-score and SUV > 1 showed very low variability when measuring change over time. In addition, total glycolytic activity, calculated using segmentation by Z-score and lesion-to-background ratio, correlated well with traditional total glycolytic activity calculations. The technique quantified various PET and CT parameters, including the total glycolytic activity index, metabolic lesion volume, lesion volumes at different CT densities and combined PET and CT parameters. The quantified metrics showed a marked decrease in the cured cases, with changes already apparent at month one, but remained largely unchanged in the failed treatment case. Our technique is promising to segment and quantify the lung scans of pulmonary tuberculosis patients in a semi-automatic manner, appropriate for measuring treatment response. Further validation is required in larger cohorts.
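The Z-score standardisation idea above expresses each lung voxel's uptake relative to the statistics of lesion-free control lungs and segments voxels above a Z threshold. The Python sketch below is not the paper's MATLAB script; the control statistics, threshold, and voxel size are illustrative assumptions.

```python
# A hedged sketch of the Z-score standardisation idea described above: express
# each lung voxel's uptake relative to the mean and standard deviation of
# lesion-free control lungs, then segment voxels above a Z threshold. The
# control statistics and threshold here are illustrative assumptions.
import numpy as np

def zscore_segment(suv_lung, control_mean, control_std, z_threshold=3.0):
    z = (suv_lung - control_mean) / control_std
    return z, z > z_threshold

rng = np.random.default_rng(0)
lung = rng.normal(0.6, 0.15, size=(32, 32, 32))      # background uptake
lung[10:16, 10:16, 10:16] = 2.5                       # a hot "lesion"
z, lesion_mask = zscore_segment(lung, control_mean=0.6, control_std=0.15)
metabolic_lesion_volume_ml = lesion_mask.sum() * 0.064   # assuming 4x4x4 mm voxels
print(f"lesion voxels: {lesion_mask.sum()}, MLV ~ {metabolic_lesion_volume_ml:.1f} ml")
```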
SU-E-I-96: A Study About the Influence of ROI Variation On Tumor Segmentation in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, L; Tan, S; Lu, W
2014-06-01
Purpose: To study the influence of different regions of interest (ROI) on tumor segmentation in PET. Methods: The experiments were conducted on a cylindrical phantom. Six spheres with different volumes (0.5 ml, 1 ml, 6 ml, 12 ml, 16 ml and 20 ml) were placed inside a cylindrical container to mimic tumors of different sizes. The spheres were filled with 11C solution as sources and the cylindrical container was filled with 18F-FDG solution as the background. The phantom was continuously scanned in a Biograph-40 True Point/True View PET/CT scanner, and 42 images were reconstructed with source-to-background ratio (SBR) ranging from 16:1 to 1.8:1. We took a large and a small ROI for each sphere, both of which contain the whole sphere and do not contain any other spheres. Six other ROIs of different sizes were then taken between the large and the small ROI. For each ROI, all images were segmented by eight thresholding methods and eight advanced methods, respectively. The segmentation results were evaluated by the Dice similarity index (DSI), classification error (CE) and volume error (VE). The robustness of different methods to ROI variation was quantified using the interrun variation and a generalized Cohen's kappa. Results: With the change of ROI, the segmentation results of all tested methods changed to varying degrees. Compared with all advanced methods, thresholding methods were less affected by the ROI change. In addition, most of the thresholding methods produced more accurate segmentation results for all sphere sizes. Conclusion: The results showed that the segmentation performance of all tested methods was affected by the change of ROI. Thresholding methods were more robust to this change and they can segment the PET image more accurately. This work was supported in part by National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.
Song, Lei; Gao, Jungang; Wang, Sheng; Hu, Huasi; Guo, Youmin
2017-01-01
Estimation of the pleural effusion's volume is an important clinical issue. The existing methods cannot assess it accurately when there is a large volume of liquid in the pleural cavity and/or the patient has some other disease (e.g. pneumonia). In order to help solve this issue, the objective of this study is to develop and test a novel algorithm that jointly uses B-splines and a local clustering level set method, namely BLL. The BLL algorithm was applied to a dataset involving 27 pleural effusions detected on chest CT examination of 18 adult patients with the presence of free pleural effusion. Study results showed that the average volumes of pleural effusion computed using the BLL algorithm and assessed manually by the physicians were 586 ± 339 ml and 604 ± 352 ml, respectively. For the same patient, the volume of the pleural effusion segmented semi-automatically was 101.8 ± 4.6% of that segmented manually. Dice similarity was found to be 0.917 ± 0.031. The study demonstrated the feasibility of applying the new BLL algorithm to accurately measure the volume of pleural effusion.
Du, Yiping P; Jin, Zhaoyang
2009-10-01
To develop a robust algorithm for tissue-air segmentation in magnetic resonance imaging (MRI) using the statistics of phase and magnitude of the images. A multivariate measure based on the statistics of phase and magnitude was constructed for tissue-air volume segmentation. The standard deviation of first-order phase difference and the standard deviation of magnitude were calculated in a 3 x 3 x 3 kernel in the image domain. To improve differentiation accuracy, the uniformity of phase distribution in the kernel was also calculated and linear background phase introduced by field inhomogeneity was corrected. The effectiveness of the proposed volume segmentation technique was compared to a conventional approach that uses the magnitude data alone. The proposed algorithm was shown to be more effective and robust in volume segmentation in both synthetic phantom and susceptibility-weighted images of human brain. Using our proposed volume segmentation method, veins in the peripheral regions of the brain were well depicted in the minimum-intensity projection of the susceptibility-weighted images. Using the additional statistics of phase, tissue-air volume segmentation can be substantially improved compared to that using the statistics of magnitude data alone. (c) 2009 Wiley-Liss, Inc.
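The local statistics described above (standard deviation of the first-order phase difference and of the magnitude in a 3 x 3 x 3 neighbourhood) can be illustrated as follows. The thresholds and the combination rule in the sketch are illustrative assumptions, not the paper's algorithm, and the synthetic data merely mimic smooth phase in tissue versus random phase in air.

```python
# A minimal sketch of the kind of local statistics described above: the standard
# deviation of the first-order phase difference and of the magnitude inside a
# 3x3x3 neighbourhood, combined into a simple tissue/air decision. Thresholds
# and the combination rule are illustrative assumptions, not the paper's.
import numpy as np
from scipy import ndimage

def local_std(volume, size=3):
    mean = ndimage.uniform_filter(volume, size)
    mean_sq = ndimage.uniform_filter(volume ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

rng = np.random.default_rng(0)
magnitude = np.full((32, 32, 32), 0.05) + rng.normal(0, 0.02, (32, 32, 32))  # air
magnitude[8:24, 8:24, 8:24] = 1.0 + rng.normal(0, 0.05, (16, 16, 16))        # tissue
phase = rng.uniform(-np.pi, np.pi, magnitude.shape)                           # random in air
phase[8:24, 8:24, 8:24] = 0.1 * rng.normal(0, 1, (16, 16, 16))                # smooth in tissue

phase_diff = np.diff(phase, axis=0, prepend=phase[:1])     # first-order phase difference
tissue = (local_std(phase_diff) < 1.0) & (local_std(magnitude) < 0.2) & (magnitude > 0.5)
print("tissue voxels:", int(tissue.sum()), "of", magnitude.size)
```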
A fast 3D region growing approach for CT angiography applications
NASA Astrophysics Data System (ADS)
Ye, Zhen; Lin, Zhongmin; Lu, Cheng-chang
2004-05-01
Region growing is one of the most popular methods for low-level image segmentation. Much research on region growing has focused on the definition of the homogeneity criterion or the growing and merging criteria. However, one disadvantage of conventional region growing is redundancy: it requires large amounts of memory, and its computational efficiency is very low, especially for 3D images. To overcome this problem, a non-recursive single-pass 3D region growing algorithm named SymRG is implemented and successfully applied to 3D CT angiography (CTA) applications for vessel segmentation and bone removal. The method consists of three steps: segmenting one-dimensional regions of each row; merging regions across adjacent rows to obtain the region segmentation of each slice; and merging regions across adjacent slices to obtain the final region segmentation of the 3D image. To improve the segmentation speed for very large 3D CTA volumes, this algorithm is applied repeatedly to newly updated local cubes. The next new cube can be estimated by checking isolated segmented regions on all 6 faces of the current local cube. This local non-recursive 3D region-growing algorithm is memory-efficient and computation-efficient. Clinical testing of this algorithm on brain CTA shows that this technique can effectively remove the whole skull and most of the bones at the skull base, and clearly reveal the cerebral vascular structures.
A multiball read-out for the spherical proportional counter
NASA Astrophysics Data System (ADS)
Giganon, A.; Giomataris, I.; Gros, M.; Katsioulas, I.; Navick, X. F.; Tsiledakis, G.; Savvidis, I.; Dastgheibi-Fard, A.; Brossard, A.
2017-12-01
We present a novel concept of proportional gas amplification for the read-out of the spherical proportional counter. The standard single-ball read-out presents limitations for large-diameter spherical detectors and high-pressure operation. We have developed a multi-ball read-out system which consists of several balls placed at a fixed distance from the center of the spherical vessel. Such a module can tune the volume electric field to the desired value and can also provide detector segmentation with individual ball read-out. In the latter case, the large volume of the vessel becomes a spherical time projection chamber with 3D capabilities.
Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen
2017-12-01
A method is proposed and verified for selecting the optimum segmentation of a TEM reconstruction among the results of several segmentation algorithms. The selection criterion is the accuracy of the segmentation. To do this selection, a parameter for the comparison of the accuracies of the different segmentations has been defined. It consists of the mutual information value between the acquired TEM images of the sample and the Radon projections of the segmented volumes. In this work, it has been proved that this new mutual information parameter and the Jaccard coefficient between the segmented volume and the ideal one are correlated. In addition, the results of the new parameter are compared to the results obtained from another validated method to select the optimum segmentation. Copyright © 2017 Elsevier Ltd. All rights reserved.
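The comparison parameter above is a mutual information value between images. A small, generic sketch of mutual information computed from a joint histogram is given below; the Radon projection step of the actual method is not reproduced, and the test images are synthetic.

```python
# A small sketch of mutual information between two images computed from their
# joint histogram, the kind of comparison parameter described above (the
# projection step of the actual method is not reproduced here).
import numpy as np

def mutual_information(a, b, bins=32):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
related = img + rng.normal(0, 0.5, img.shape)       # shares structure with img
unrelated = rng.normal(size=(64, 64))
print(f"MI(img, related)   = {mutual_information(img, related):.3f}")
print(f"MI(img, unrelated) = {mutual_information(img, unrelated):.3f}")
```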
Automatic knee cartilage delineation using inheritable segmentation
NASA Astrophysics Data System (ADS)
Dries, Sebastian P. M.; Pekar, Vladimir; Bystrov, Daniel; Heese, Harald S.; Blaffert, Thomas; Bos, Clemens; van Muiswinkel, Arianne M. C.
2008-03-01
We present a fully automatic method for segmentation of knee joint cartilage from fat-suppressed MRI. The method first applies 3-D model-based segmentation technology, which allows the femur, patella, and tibia to be reliably segmented by iterative adaptation of the model according to image gradients. Thin plate spline interpolation is used in the next step to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are fine-adjusted by automatic iterative adaptation to image data based on gray value gradients. The method has been validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and demonstrated a sensitivity of 83 ± 6% compared to manual segmentation on a per-voxel basis as the primary endpoint. Gross cartilage volume measurement yielded an average error of 9 ± 7% as the secondary endpoint. Because cartilage is a thin structure, even small distance deviations result in large errors on a per-voxel basis, making the primary endpoint a demanding criterion.
Lyden, Hannah; Gimbel, Sarah I; Del Piero, Larissa; Tsai, A Bryna; Sachs, Matthew E; Kaplan, Jonas T; Margolin, Gayla; Saxbe, Darby
2016-01-01
Associations between brain structure and early adversity have been inconsistent in the literature. These inconsistencies may be partially due to methodological differences. Different methods of brain segmentation may produce different results, obscuring the relationship between early adversity and brain volume. Moreover, adolescence is a time of significant brain growth and certain brain areas have distinct rates of development, which may compromise the accuracy of automated segmentation approaches. In the current study, 23 adolescents participated in two waves of a longitudinal study. Family aggression was measured when the youths were 12 years old, and structural scans were acquired an average of 4 years later. Bilateral amygdalae and hippocampi were segmented using three different methods (manual tracing, FSL, and NeuroQuant). The segmentation estimates were compared, and linear regressions were run to assess the relationship between early family aggression exposure and all three volume segmentation estimates. Manual tracing results showed a positive relationship between family aggression and right amygdala volume, whereas FSL segmentation showed negative relationships between family aggression and both the left and right hippocampi. However, results indicate poor overlap between methods, and different associations were found between early family aggression exposure and brain volume depending on the segmentation method used.
Lyden, Hannah; Gimbel, Sarah I.; Del Piero, Larissa; Tsai, A. Bryna; Sachs, Matthew E.; Kaplan, Jonas T.; Margolin, Gayla; Saxbe, Darby
2016-01-01
Associations between brain structure and early adversity have been inconsistent in the literature. These inconsistencies may be partially due to methodological differences. Different methods of brain segmentation may produce different results, obscuring the relationship between early adversity and brain volume. Moreover, adolescence is a time of significant brain growth and certain brain areas have distinct rates of development, which may compromise the accuracy of automated segmentation approaches. In the current study, 23 adolescents participated in two waves of a longitudinal study. Family aggression was measured when the youths were 12 years old, and structural scans were acquired an average of 4 years later. Bilateral amygdalae and hippocampi were segmented using three different methods (manual tracing, FSL, and NeuroQuant). The segmentation estimates were compared, and linear regressions were run to assess the relationship between early family aggression exposure and all three volume segmentation estimates. Manual tracing results showed a positive relationship between family aggression and right amygdala volume, whereas FSL segmentation showed negative relationships between family aggression and both the left and right hippocampi. However, results indicate poor overlap between methods, and different associations were found between early family aggression exposure and brain volume depending on the segmentation method used. PMID:27656121
Mastmeyer, André; Engelke, Klaus; Fuchs, Christina; Kalender, Willi A
2006-08-01
We have developed a new hierarchical 3D technique to segment the vertebral bodies in order to measure bone mineral density (BMD) with high trueness and precision in volumetric CT datasets. The hierarchical approach starts with a coarse separation of the individual vertebrae, applies a variety of techniques to segment the vertebral bodies with increasing detail, and ends with the definition of an anatomic coordinate system for each vertebral body, relative to which up to 41 trabecular and cortical volumes of interest are positioned. In a pre-segmentation step, constraints consisting of Boolean combinations of simple geometric shapes are determined that enclose each individual vertebral body. Bound by these constraints, viscous deformable models are used to segment the main shape of the vertebral bodies. Volume growing and morphological operations then capture the fine details of the bone-soft tissue interface. In the volumes of interest, bone mineral density and content are determined. In addition, geometric parameters such as volume or the lengths of the main axes of inertia can be measured in the segmented vertebral bodies. Intra- and inter-operator precision errors of the segmentation procedure were analyzed using existing clinical patient datasets. Results for segmented volume, BMD, and coordinate system position were below 2.0%, 0.6%, and 0.7%, respectively. Trueness was analyzed using phantom scans. The bias of the segmented volume was below 4%; for BMD it was below 1.5%. The long-term goal of this work is improved fracture prediction and patient monitoring in the field of osteoporosis. A true 3D segmentation also enables accurate measurement of geometrical parameters that may augment the clinical value of a pure BMD analysis.
Carbone, V; Fluit, R; Pellikaan, P; van der Krogt, M M; Janssen, D; Damsgaard, M; Vigneron, L; Feilkas, T; Koopman, H F J M; Verdonschot, N
2015-03-18
When analyzing complex biomechanical problems such as predicting the effects of orthopedic surgery, subject-specific musculoskeletal models are essential to achieve reliable predictions. The aim of this paper is to present the Twente Lower Extremity Model 2.0, a new comprehensive dataset of the musculoskeletal geometry of the lower extremity, which is based on medical imaging data and dissection performed on the right lower extremity of a fresh male cadaver. Bone, muscle, and subcutaneous fat (including skin) volumes were segmented from computed tomography and magnetic resonance imaging scans. Inertial parameters were estimated from the image-based segmented volumes. A complete cadaver dissection was performed, in which bony landmarks, attachment sites and lines-of-action of 55 muscle actuators and 12 ligaments, bony wrapping surfaces, and joint geometry were measured. The obtained musculoskeletal geometry dataset was finally implemented in the AnyBody Modeling System (AnyBody Technology A/S, Aalborg, Denmark), resulting in a model consisting of 12 segments, 11 joints and 21 degrees of freedom, and including 166 muscle-tendon elements for each leg. The new TLEM 2.0 dataset was purposely built to be easily combined with novel image-based scaling techniques, such as bone surface morphing, muscle volume registration and muscle-tendon path identification, in order to obtain subject-specific musculoskeletal models in a quick and accurate way. The complete dataset, including CT and MRI scans and segmented volumes and surfaces, is made available at http://www.utwente.nl/ctw/bw/research/projects/TLEMsafe for the biomechanical community, in order to accelerate the development and adoption of subject-specific models on a large scale. TLEM 2.0 is freely shared for non-commercial use only, under acceptance of the TLEMsafe Research License Agreement. Copyright © 2014 Elsevier Ltd. All rights reserved.
Dolz, Jose; Laprie, Anne; Ken, Soléakhéna; Leroy, Henri-Arthur; Reyns, Nicolas; Massoptier, Laurent; Vermandel, Maximilien
2016-01-01
To constrain the risk of severe toxicity in radiotherapy and radiosurgery, precise volume delineation of organs at risk is required. This task is still performed manually, which is time-consuming and prone to observer variability. To address these issues, and as an alternative to atlas-based segmentation methods, machine learning techniques such as support vector machines (SVM) have recently been presented to segment subcortical structures on magnetic resonance images (MRI). An SVM is proposed here to segment the brainstem on MRI in a multicenter brain cancer context. A dataset composed of 14 adult brain MRI scans is used to evaluate its performance. In addition to spatial and probabilistic information, five different image intensity value (IIV) configurations are evaluated as features to train the SVM classifier. Segmentation accuracy is evaluated by computing the Dice similarity coefficient (DSC), absolute volume difference (AVD), and percentage volume difference between automatic and manual contours. Mean DSC for all proposed IIV configurations ranged from 0.89 to 0.90. Mean AVD values were below 1.5 cm³; the value for the best-performing IIV configuration was 0.85 cm³, representing an absolute mean difference of 3.99% with respect to the manually segmented volumes. Results suggest consistent volume estimation and high spatial similarity with respect to expert delineations. The proposed approach outperformed previously presented methods for segmenting the brainstem, not only in volume similarity metrics but also in segmentation time. Preliminary results suggest that the approach may be promising for adoption in clinical use.
Griffanti, Ludovica; Zamboni, Giovanna; Khan, Aamira; Li, Linxin; Bonifacio, Guendalina; Sundaresan, Vaanathi; Schulz, Ursula G; Kuker, Wilhelm; Battaglini, Marco; Rothwell, Peter M; Jenkinson, Mark
2016-11-01
Reliable quantification of white matter hyperintensities of presumed vascular origin (WMHs) is increasingly needed, given the presence of these MRI findings in patients with several neurological and vascular disorders, as well as in elderly healthy subjects. We present BIANCA (Brain Intensity AbNormality Classification Algorithm), a fully automated, supervised method for WMH detection, based on the k-nearest neighbour (k-NN) algorithm. Relative to previous k-NN based segmentation methods, BIANCA offers different options for weighting the spatial information, local spatial intensity averaging, and different options for the choice of the number and location of the training points. BIANCA is multimodal and highly flexible so that the user can adapt the tool to their protocol and specific needs. We optimised and validated BIANCA on two datasets with different MRI protocols and patient populations (a "predominantly neurodegenerative" and a "predominantly vascular" cohort). BIANCA was first optimised on a subset of images for each dataset in terms of overlap and volumetric agreement with a manually segmented WMH mask. The correlation between the volumes extracted with BIANCA (using the optimised set of options), the volumes extracted from the manual masks and visual ratings showed that BIANCA is a valid alternative to manual segmentation. The optimised set of options was then applied to the whole cohorts and the resulting WMH volume estimates showed good correlations with visual ratings and with age. Finally, we performed a reproducibility test, to evaluate the robustness of BIANCA, and compared BIANCA performance against existing methods. Our findings suggest that BIANCA, which will be freely available as part of the FSL package, is a reliable method for automated WMH segmentation in large cross-sectional cohort studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Weidong; Liu, Jiamin; Yao, Jianhua; Summers, Ronald M.
2013-03-01
Segmentation of the musculature is very important for accurate organ segmentation, analysis of body composition, and localization of tumors in the muscle. In computer-assisted surgery and computer-aided diagnosis (CAD), muscle segmentation in CT images is a necessary pre-processing step. The task is particularly challenging due to the large variability in muscle structure and the overlap in intensity between muscle and internal organs. The problem has not been solved completely, especially across the thoracic, abdominal, and pelvic regions. We propose an automated system to segment the musculature on CT scans. The method combines an atlas-based model, an active contour model, and prior segmentation of fat and bones. First, the body contour, fat, and bones are segmented using existing methods. Second, atlas-based models are pre-defined using anatomic knowledge at multiple key positions in the body to handle the large variability in muscle shape. Third, the atlas model is refined using active contour models (ACM) constrained by the pre-segmented bone and fat. Before refinement with the ACM, the initial atlas model of the next slice is updated using the previous atlas. The muscle is then segmented by thresholding and smoothed in 3D volume space. Thoracic, abdominal, and pelvic CT scans were used to evaluate the method; five key-position slices for each case were selected and manually labeled as the reference. Compared with the reference ground truth, the overlap ratio of true positives is 91.1% ± 3.5%, and that of false positives is 5.5% ± 4.2%.
NASA Astrophysics Data System (ADS)
Söderberg, Per G.; Malmberg, Filip; Sandberg-Melin, Camilla
2017-02-01
The present study aimed to elucidate whether comparison of angular segments of the Pigment epithelium central limit-Inner limit of the retina Minimal Distance, measured over 2π radians in the frontal plane (PIMD-2π), between visits of a patient renders sufficient precision for detection of loss of nerve fibers in the optic nerve head. An optic nerve head raster-scanned cube was captured with a TOPCON 3D OCT 2000 (Topcon, Japan) device in one early- to moderate-stage glaucoma eye of each of 13 patients. All eyes were recorded at two visits less than 1 month apart. At each visit, 3 volumes were captured. Each volume was extracted from the OCT device for analysis. Then, angular PIMD was segmented three times over 2π radians in the frontal plane with a semi-automatic algorithm, resolved in 500 equally separated steps (PIMD-2π). It was found that individual segmentations within volumes, within visits, within subjects can be phase-adjusted to each other in the frontal plane using cross-correlation. Cross-correlation was also used to phase-adjust volumes within visits within subjects, and visits to each other within subjects. Then, PIMD-2π for each subject was split into 250 bundles of 2 adjacent PIMDs. Finally, the sources of variation for estimates of segments of PIMD-2π were derived with analysis of variance assuming a mixed model. The variation among adjacent PIMDs was found to be very small in relation to the variation among segmentations. The variation among visits was found to be insignificant in relation to the variation among volumes, and the variance for segmentations was on the order of 20% of that for volumes. The estimated variances imply that, if 3 segmentations are averaged within a volume and at least 10 volumes are averaged within a visit, it is possible to detect around a 10% reduction of a PIMD-2π segment from baseline to a subsequent visit as significant. Considering a loss rate for a PIMD-2π segment of 23 μm/yr, 4 visits per year, and averaging 3 segmentations per volume and 3 volumes per visit, a significant reduction from baseline can be detected with a power of 80% in about 18 months. At a higher loss rate for a PIMD-2π segment, a significant difference from baseline can be detected earlier. Averaging over more volumes per visit considerably decreases the time required to detect a significant reduction of a segment of PIMD-2π, whereas increasing the number of segmentations averaged per visit only slightly reduces it. It is concluded that phase adjustment in the frontal plane with cross-correlation allows high-precision estimates of a segment of PIMD-2π, implying substantially shorter follow-up times for detection of a significant change than mean deviation (MD) in a visual field estimated with the Humphrey perimeter or neural rim area (NRA) estimated with the Heidelberg retinal tomograph.
Vessel segmentation in 3D spectral OCT scans of the retina
NASA Astrophysics Data System (ADS)
Niemeijer, Meindert; Garvin, Mona K.; van Ginneken, Bram; Sonka, Milan; Abràmoff, Michael D.
2008-03-01
The latest generation of spectral optical coherence tomography (OCT) scanners is able to image 3D cross-sectional volumes of the retina at high resolution and high speed. These scans offer a detailed view of the structure of the retina. Automated segmentation of the vessels in these volumes may lead to more objective diagnosis of retinal vascular diseases, including hypertensive retinopathy and retinopathy of prematurity. Additionally, vessel segmentation can allow color fundus images to be registered to these 3D volumes, possibly leading to a better understanding of the structure and localization of retinal structures and lesions. In this paper we present a method for automatically segmenting the vessels in a 3D OCT volume. First, the retina is automatically segmented into multiple layers, using simultaneous segmentation of their boundary surfaces in 3D. Next, a 2D projection of the vessels is produced using information only from selected segmented layers. Finally, a supervised, pixel-classification-based vessel segmentation approach is applied to the projection image. We compared the influence of two projection methods on the performance of the vessel segmentation on 10 optic nerve head-centered 3D OCT scans. The method was trained on 5 independent scans. Using ROC analysis, our proposed vessel segmentation system obtains an area under the curve of 0.970 when compared with the segmentation of a human observer.
The algorithm study for using the back propagation neural network in CT image segmentation
NASA Astrophysics Data System (ADS)
Zhang, Peng; Liu, Jie; Chen, Chen; Li, Ying Qi
2017-01-01
A back-propagation neural network (BP neural network) is a type of multi-layer feed-forward network in which signals propagate forward while errors propagate backward. Because a BP network can learn and store the mapping between large numbers of inputs and outputs without requiring explicit mathematical equations to describe that mapping, it is very widely used. BP iteratively computes the weight coefficients and thresholds of the network based on training samples and back-propagation of errors, which minimizes the network's sum of squared errors. Since the boundary of the heart in computed tomography (CT) images is usually discontinuous, and the volume and boundary of the heart change considerably, conventional segmentation methods such as region growing and the watershed algorithm cannot achieve satisfactory results. Moreover, there are large differences between diastolic and systolic images, and conventional methods cannot accurately handle the two cases. In this paper, we introduce a BP network to handle the segmentation of heart images. We segmented a large number of CT images manually to obtain training samples, and the BP network was trained on these samples. To obtain a BP network appropriate for the segmentation of heart images, we normalized the heart images and extracted the gray-level information of the heart. The boundary of each image was then input into the network, the difference between the theoretical output and the actual output was computed, and the errors were fed back into the BP network to modify the weight coefficients of the layers. After extensive training, the BP network became stable and the layer weight coefficients could be determined, capturing the relationship between the CT images and the heart boundary.
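As a minimal illustration of the training loop described above, the sketch below implements a tiny two-layer feed-forward network trained by back-propagation to minimize the sum of squared errors on labeled feature vectors. The architecture, the feature extraction, and the toy data are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, y, n_hidden=8, lr=0.1, epochs=1000, seed=0):
    """Train a one-hidden-layer BP network on features X (n, d) and labels y (n, 1) in [0, 1]."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: gradients of the squared-error loss
        err = out - y
        d_out = err * out * (1.0 - out)
        d_h = (d_out @ W2.T) * h * (1.0 - h)
        # gradient-descent weight updates
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
    return W1, b1, W2, b2

# toy usage with random placeholder "boundary feature" vectors
X = np.random.rand(200, 5)
y = (X[:, :2].sum(axis=1, keepdims=True) > 1.0).astype(float)
params = train_bp(X, y)
```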
A quantification strategy for missing bone mass in case of osteolytic bone lesions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fränzle, Andrea, E-mail: a.fraenzle@dkfz.de; Giske, Kristina; Bretschi, Maren
Purpose: Most of the patients who died of breast cancer have developed bone metastases. To understand the pathogenesis of bone metastases and to analyze treatment response of different bone remodeling therapies, preclinical animal models are examined. In breast cancer, bone metastases are often bone destructive. To assess treatment response of bone remodeling therapies, the volumes of these lesions have to be determined during the therapy process. The manual delineation of missing structures, especially if large parts are missing, is very time-consuming and not reproducible. Reproducibility is highly important to have comparable results during the therapy process. Therefore, a computerized approach is needed. Also for preclinical research, a reproducible measurement of the lesions is essential. Here, the authors present an automated segmentation method for the measurement of missing bone mass in a preclinical rat model with bone metastases in the hind leg bones based on 3D CT scans. Methods: The affected bone structure is compared to a healthy model. Since in this preclinical rat trial the metastasis only occurs on the right hind legs, which is assured by using vessel clips, the authors use the left body side as a healthy model. The left femur is segmented with a statistical shape model which is initialised using the automatically segmented medullary cavity. The left tibia and fibula are segmented using volume growing starting at the tibia medullary cavity and stopping at the femur boundary. Masked images of both segmentations are mirrored along the median plane and transferred manually to the position of the affected bone by rigid registration. Affected bone and healthy model are compared based on their gray values. If the gray value of a voxel indicates bone mass in the healthy model and no bone in the affected bone, this voxel is considered to be osteolytic. Results: The lesion segmentations complete the missing bone structures in a reasonable way. The mean ratio v_r/v_m of the reconstructed bone volume v_r to the healthy model bone volume v_m is 1.07, which indicates a good reconstruction of the modified bone. Conclusions: The qualitative and quantitative comparison of manual and semi-automated segmentation results has shown that comparing a modified bone structure with a healthy model can be used to identify and measure missing bone mass in a reproducible way.
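The gray-value comparison of the registered healthy model with the affected bone reduces to a voxel-wise test; a minimal sketch under that reading, where the HU threshold for "bone" is an illustrative assumption:

```python
import numpy as np

def osteolytic_volume(affected_hu, healthy_model_hu, voxel_volume_mm3, bone_threshold=300.0):
    """Voxels that are bone in the registered, mirrored healthy model but not in the
    affected bone are counted as osteolytic. The HU threshold is a placeholder value."""
    bone_in_model = healthy_model_hu >= bone_threshold
    bone_in_affected = affected_hu >= bone_threshold
    osteolytic = bone_in_model & ~bone_in_affected
    return osteolytic, osteolytic.sum() * voxel_volume_mm3
```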
Knowledge-based segmentation of pediatric kidneys in CT for measuring parenchymal volume
NASA Astrophysics Data System (ADS)
Brown, Matthew S.; Feng, Waldo C.; Hall, Theodore R.; McNitt-Gray, Michael F.; Churchill, Bernard M.
2000-06-01
The purpose of this work was to develop an automated method for segmenting pediatric kidneys in contrast-enhanced helical CT images and measuring the volume of the renal parenchyma. An automated system was developed to segment the abdomen, spine, aorta, and kidneys. The expected size, shape, topology, and X-ray attenuation of anatomical structures are stored as features in an anatomical model. These features guide 3-D threshold-based segmentation and the subsequent matching of extracted image regions to anatomical structures in the model. Following segmentation, the kidney volumes are calculated by summing the included voxels. To validate the system, the kidney volumes of 4 swine were calculated using our approach and compared to the 'true' volumes measured after harvesting the kidneys. Automated volume calculations were also performed retrospectively in a cohort of 10 children. The mean difference between the calculated and measured values in the swine kidneys was 1.38 (SD ± 0.44) cc. For the pediatric cases, calculated volumes ranged from 41.7 to 252.1 cc per kidney, and the mean ratio of right to left kidney volume was 0.96 (SD ± 0.07). These results demonstrate the accuracy of the volumetric technique, which may in the future provide an objective assessment of renal damage.
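The final volume step described above (summing included voxels) amounts to counting mask voxels and scaling by the voxel volume taken from the CT spacing. A minimal sketch, with the spacing values as placeholder assumptions:

```python
import numpy as np

def mask_volume_cc(mask, spacing_mm=(0.7, 0.7, 3.0)):
    """Volume of a binary segmentation mask in cubic centimetres.

    mask       : 3D boolean array (True inside the kidney)
    spacing_mm : voxel spacing (x, y, z) in millimetres -- placeholder values
    """
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0  # 1000 mm^3 = 1 cc
```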
Hatt, Mathieu; Laurent, Baptiste; Fayad, Hadi; Jaouen, Vincent; Visvikis, Dimitris; Le Rest, Catherine Cheze
2018-04-01
Sphericity has been proposed as a parameter for characterizing PET tumour volumes, with complementary prognostic value with respect to SUV and volume in both head and neck cancer and lung cancer. The objective of the present study was to investigate its dependency on tumour delineation and the resulting impact on its prognostic value. Five segmentation methods were considered: two thresholds (40% and 50% of SUVmax), ant colony optimization, fuzzy locally adaptive Bayesian (FLAB), and gradient-aided region-based active contour. The accuracy of each method in extracting sphericity was evaluated using a dataset of 176 simulated, phantom and clinical PET images of tumours with associated ground truth. The prognostic value of sphericity and its complementary value with respect to volume for each segmentation method was evaluated in a cohort of 87 patients with stage II/III lung cancer. Volume and associated sphericity values were dependent on the segmentation method. The correlation between segmentation accuracy and sphericity error was moderate (|ρ| from 0.24 to 0.57). The accuracy in measuring sphericity was not dependent on volume (|ρ| < 0.4). In the patients with lung cancer, sphericity had prognostic value, although lower than that of volume, except for sphericity derived using FLAB, which when combined with volume showed a small improvement over volume alone (hazard ratio 2.67, compared with 2.5). Substantial differences in patient prognosis stratification were observed depending on the segmentation method used. Tumour functional sphericity was found to be dependent on the segmentation method, although the accuracy in retrieving the true sphericity was not dependent on tumour volume. In addition, even an accurate segmentation can lead to an inaccurate sphericity value, and vice versa. Sphericity had similar or lower prognostic value than volume alone in the patients with lung cancer, except when determined using the FLAB method, for which there was a small improvement in stratification when the parameters were combined.
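Sphericity is conventionally defined as the surface area of a sphere with the same volume as the tumour divided by the tumour's actual surface area, i.e. π^(1/3)(6V)^(2/3)/A. The sketch below computes it from a binary PET segmentation mask; the voxel spacing and the use of scikit-image marching cubes are illustrative assumptions, not the study's implementation.

```python
import numpy as np
from skimage import measure

def sphericity(mask, spacing=(2.0, 2.0, 2.0)):
    """Sphericity = pi^(1/3) * (6V)^(2/3) / A for a binary tumour mask."""
    voxel_volume = float(np.prod(spacing))
    volume = mask.sum() * voxel_volume
    # triangulated surface of the mask boundary
    verts, faces, _, _ = measure.marching_cubes(mask.astype(float), level=0.5, spacing=spacing)
    area = measure.mesh_surface_area(verts, faces)
    return (np.pi ** (1.0 / 3.0)) * (6.0 * volume) ** (2.0 / 3.0) / area
```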
Karimi, Davood; Samei, Golnoosh; Kesch, Claudia; Nir, Guy; Salcudean, Septimiu E
2018-05-15
Most of the existing convolutional neural network (CNN)-based medical image segmentation methods are based on methods that have originally been developed for segmentation of natural images. Therefore, they largely ignore the differences between the two domains, such as the smaller degree of variability in the shape and appearance of the target volume and the smaller amounts of training data in medical applications. We propose a CNN-based method for prostate segmentation in MRI that employs statistical shape models to address these issues. Our CNN predicts the location of the prostate center and the parameters of the shape model, which determine the position of prostate surface keypoints. To train such a large model for segmentation of 3D images using small data (1) we adopt a stage-wise training strategy by first training the network to predict the prostate center and subsequently adding modules for predicting the parameters of the shape model and prostate rotation, (2) we propose a data augmentation method whereby the training images and their prostate surface keypoints are deformed according to the displacements computed based on the shape model, and (3) we employ various regularization techniques. Our proposed method achieves a Dice score of 0.88, which is obtained by using both elastic-net and spectral dropout for regularization. Compared with a standard CNN-based method, our method shows significantly better segmentation performance on the prostate base and apex. Our experiments also show that data augmentation using the shape model significantly improves the segmentation results. Prior knowledge about the shape of the target organ can improve the performance of CNN-based segmentation methods, especially where image features are not sufficient for a precise segmentation. Statistical shape models can also be employed to synthesize additional training data that can ease the training of large CNNs.
A novel approach to segmentation and measurement of medical image using level set methods.
Chen, Yao-Tien
2017-06-01
The study proposes a novel approach to segmentation and visualization, together with value-added surface area and volume measurements, for brain medical image analysis. The proposed method comprises edge detection and Bayesian-based level set segmentation, surface and volume rendering, and surface area and volume measurements for 3D objects of interest (i.e., brain tumor, brain tissue, or whole brain). Two extensions based on edge detection and the Bayesian level set are first used to segment 3D objects. Ray casting and a modified marching cubes algorithm are then adopted to facilitate volume and surface visualization of the medical-image dataset. To provide physicians with more useful information for diagnosis, the surface area and volume of an examined 3D object are calculated using techniques of linear algebra and surface integration. Experimental results are finally reported in terms of 3D object extraction, surface and volume rendering, and surface area and volume measurements for medical image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
Accuracy of cancellous bone volume fraction measured by micro-CT scanning.
Ding, M; Odgaard, A; Hvid, I
1999-03-01
Volume fraction, the single most important parameter in describing trabecular microstructure, can easily be calculated from three-dimensional reconstructions of micro-CT images. This study sought to quantify the accuracy of this measurement. One hundred and sixty human cancellous bone specimens which covered a large range of volume fraction (9.8-39.8%) were produced. The specimens were micro-CT scanned, and the volume fraction based on Archimedes' principle was determined as a reference. After scanning, all micro-CT data were segmented using individual thresholds determined by the scanner supplied algorithm (method I). A significant deviation of volume fraction from method I was found: both the y-intercept and the slope of the regression line were significantly different from those of the Archimedes-based volume fraction (p < 0.001). New individual thresholds were determined based on a calibration of volume fraction to the Archimedes-based volume fractions (method II). The mean thresholds of the two methods were applied to segment 20 randomly selected specimens. The results showed that volume fraction using the mean threshold of method I was underestimated by 4% (p = 0.001), whereas the mean threshold of method II yielded accurate values. The precision of the measurement was excellent. Our data show that care must be taken when applying thresholds in generating 3-D data, and that a fixed threshold may be used to obtain reliable volume fraction data. This fixed threshold may be determined from the Archimedes-based volume fraction of a subgroup of specimens. The threshold may vary between different materials, and so it should be determined whenever a study series is performed.
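One way to implement the calibration of method II is to search for the grey-level threshold at which the segmented volume fraction matches the Archimedes-based reference. The bisection sketch below is an illustrative assumption about how such a calibration can be done, not the scanner vendor's algorithm:

```python
import numpy as np

def calibrate_threshold(volume, reference_vf, lo=None, hi=None, tol=1e-4, max_iter=60):
    """Find a grey-level threshold whose bone volume fraction matches reference_vf.

    volume       : 3D grey-level micro-CT reconstruction
    reference_vf : Archimedes-based volume fraction (0..1) of the same specimen
    """
    lo = volume.min() if lo is None else lo
    hi = volume.max() if hi is None else hi
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        vf = (volume >= mid).mean()   # volume fraction at this threshold
        if abs(vf - reference_vf) < tol:
            break
        if vf > reference_vf:
            lo = mid   # too much bone segmented -> raise the threshold
        else:
            hi = mid   # too little bone segmented -> lower the threshold
    return mid
```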
Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz
2017-01-01
To calculate root canal volume and surface area from micro-CT images, image segmentation by selecting threshold values is required; the threshold can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is performed entirely by computer algorithms. The aims were to compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between visual and automatic segmentation methods for root canal volume measurements (p=0.93) or root canal surface area (p=0.79). Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
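For illustration, an automatic grey-level threshold in the spirit of such a tool can be obtained with Otsu's method; the sketch below (using scikit-image, an assumption rather than the manufacturer's software) derives the threshold and the resulting canal volume, assuming the canal corresponds to voxels below the threshold:

```python
import numpy as np
from skimage.filters import threshold_otsu

def canal_volume_mm3(volume, spacing_mm=(0.02, 0.02, 0.02)):
    """Automatic (Otsu) thresholding of a micro-CT tooth volume.

    Assumes the root canal corresponds to voxels below the grey-level threshold;
    the spacing values are placeholders.
    """
    t = threshold_otsu(volume)
    canal = volume < t
    return canal.sum() * float(np.prod(spacing_mm))
```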
End-to-end workflow for finite element analysis of tumor treating fields in glioblastomas
NASA Astrophysics Data System (ADS)
Timmons, Joshua J.; Lok, Edwin; San, Pyay; Bui, Kevin; Wong, Eric T.
2017-11-01
Tumor Treating Fields (TTFields) therapy is an approved modality of treatment for glioblastoma. Patient anatomy-based finite element analysis (FEA) has the potential to reveal not only how these fields affect tumor control but also how to improve efficacy. While automated segmentation tools speed up the generation of FEA models, multi-step manual corrections are required, including removal of disconnected voxels, incorporation of unsegmented structures, and the addition of 36 electrodes plus gel layers matching the TTFields transducers. Existing approaches are also not scalable for high-throughput analysis of large patient volumes. A semi-automated workflow was developed to prepare FEA models for TTFields mapping in the human brain. Magnetic resonance imaging (MRI) pre-processing, segmentation, electrode and gel placement, and post-processing were all automated. The material properties of each tissue were applied to their corresponding mask in silico using COMSOL Multiphysics (COMSOL, Burlington, MA, USA). The fidelity of the segmentations with and without post-processing was compared against the full semi-automated segmentation workflow using Dice coefficient analysis. The average relative differences in the electric fields generated by COMSOL were calculated, in addition to observed differences in electric field-volume histograms. Furthermore, the MPHTXT and NASTRAN mesh file formats were compared using differences in the electric field-volume histogram. The Dice coefficient was lower for auto-segmentation without post-processing than with post-processing, indicating convergence on a manually corrected model. A marginal relative difference between electric field maps from models with and without manual correction was identified, and a clear advantage of using the NASTRAN mesh file format was found. The software and workflow outlined in this article may be used to accelerate the investigation of TTFields in glioblastoma patients by facilitating the creation of FEA models derived from patient MRI datasets.
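The Dice coefficient used for this fidelity comparison can be computed directly from two binary masks; a minimal sketch:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total
```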
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grilo, Clara, E-mail: clarabentesgrilo@gmail.com; Centro Brasileiro de Estudos em Ecologia de Estradas, Departamento de Biologia, Universidade Federal de Lavras, Campus Universitário, 37200-000 Lavras, Minas Gerais; Ferreira, Flavio Zanchetta
Previous studies have found that the relationship between wildlife road mortality and traffic volume follows a threshold effect on low traffic volume roads. We aimed at evaluating the response of several species to increasing traffic intensity on highways over a large geographic area and temporal period. We used data of four terrestrial vertebrate species with different biological and ecological features known by their high road-kill rates: the barn owl (Tyto alba), hedgehog (Erinaceus europaeus), red fox (Vulpes vulpes) and European rabbit (Oryctolagus cuniculus). Additionally, we checked whether road-kill likelihood varies when traffic patterns depart from the average. We used annual average daily traffic (AADT) and road-kill records observed along 1000 km of highways in Portugal over seven consecutive years (2003–2009). We fitted candidate models using Generalized Linear Models with a binomial distribution through a sample unit of 1 km segments to describe the effect of traffic on the probability of finding at least one victim in each segment during the study. We also assigned for each road-kill record the traffic of that day and the AADT on that year to test for differences using Paired Student's t-test. Mortality risk declined significantly with traffic volume but varied among species: the probability of finding road-killed red foxes and rabbits occurs up to moderate traffic volumes (< 20,000 AADT) whereas barn owls and hedgehogs occurred up to higher traffic volumes (40,000 AADT). Perception of risk may explain differences in responses towards high traffic highway segments. Road-kill rates did not vary significantly when traffic intensity departed from the average. In summary, we did not find evidence of traffic thresholds for the analysed species and traffic intensities. We suggest mitigation measures to reduce mortality be applied in particular on low traffic roads (< 5000 AADT) while additional measures to reduce barrier effects should take into account species-specific behavioural traits. - Highlights: • Traffic and road-kills were analysed along 1000 km of highways over seven years. • Mortality risk declined significantly with traffic volume. • Perception of risk may explain different responses towards high traffic sections. • Reducing barrier effects should take into account species behavioural traits.
Fast Segmentation From Blurred Data in 3D Fluorescence Microscopy.
Storath, Martin; Rickert, Dennis; Unser, Michael; Weinmann, Andreas
2017-10-01
We develop a fast algorithm for segmenting 3D images from linear measurements based on the Potts model (or piecewise constant Mumford-Shah model). To that end, we first derive suitable space discretizations of the 3D Potts model, which are capable of dealing with 3D images defined on non-cubic grids. Our discretization allows us to utilize a specific splitting approach, which results in decoupled subproblems of moderate size. The crucial point in the 3D setup is that the number of independent subproblems is so large that we can reasonably exploit the parallel processing capabilities of graphics processing units (GPUs). Our GPU implementation is up to 18 times faster than the sequential CPU version, which allows even large volumes to be processed in acceptable runtimes. As a further contribution, we extend the algorithm to deal with non-negativity constraints. We demonstrate the efficiency of our method for combined image deconvolution and segmentation on simulated data and on real 3D wide-field fluorescence microscopy data.
2011-01-01
Background: Image segmentation is a crucial step in quantitative microscopy that helps to define regions of tissues, cells or subcellular compartments. Depending on the degree of user interaction, segmentation methods can be divided into manual, automated or semi-automated approaches. 3D image stacks usually require automated methods due to their large number of optical sections. However, certain applications benefit from manual or semi-automated approaches. Scenarios include the quantification of 3D images with poor signal-to-noise ratios or the generation of so-called ground truth segmentations that are used to evaluate the accuracy of automated segmentation methods. Results: We have developed Gebiss, an ImageJ plugin for the interactive segmentation, visualisation and quantification of 3D microscopic image stacks. We integrated a variety of existing plugins for threshold-based segmentation and volume visualisation. Conclusions: We demonstrate the application of Gebiss to the segmentation of nuclei in live Drosophila embryos and the quantification of neurodegeneration in Drosophila larval brains. Gebiss was developed as a cross-platform ImageJ plugin and is freely available on the web at http://imaging.bii.a-star.edu.sg/projects/gebiss/. PMID:21668958
NASA Technical Reports Server (NTRS)
Agnew, Donald L.; Jones, Peter A.
1989-01-01
A study was conducted to define reasonable and representative large deployable reflector (LDR) system concepts for the purpose of defining a technology development program aimed at providing the requisite technological capability necessary to start LDR development by the end of 1991. This volume includes the executive summary for the total study, a report of thirteen system analysis and trades tasks (optical configuration, aperture size, reflector material, segmented mirror, optical subsystem, thermal, pointing and control, transportation to orbit, structures, contamination control, orbital parameters, orbital environment, and spacecraft functions), and descriptions of three selected LDR system concepts. Supporting information is contained in appendices.
Automated lung tumor segmentation for whole body PET volume based on novel downhill region growing
NASA Astrophysics Data System (ADS)
Ballangan, Cherry; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Feng, Dagan
2010-03-01
We propose an automated lung tumor segmentation method for whole-body PET images based on a novel downhill region growing (DRG) technique, which regards homogeneous tumor hotspots as 3D monotonically decreasing functions. The method has three major steps: thoracic slice extraction with K-means clustering of the slice features; hotspot segmentation with DRG; and decision tree analysis based hotspot classification. To overcome the common problem of leakage into adjacent hotspots in automated lung tumor segmentation, DRG employs the tumors' SUV monotonicity features. DRG also uses the gradient magnitude of the tumors' SUV to improve tumor boundary definition. We used 14 PET volumes from patients with primary NSCLC for validation. The thoracic region extraction step achieved good and consistent results for all patients despite marked differences in size and shape of the lungs and the presence of large tumors. The DRG technique was able to avoid the problem of leakage into adjacent hotspots and produced a volumetric overlap fraction of 0.61 ± 0.13, which outperformed four other methods whose overlap fractions varied from 0.40 ± 0.24 to 0.59 ± 0.14. Of the 18 tumors in 14 NSCLC studies, 15 lesions were classified correctly, 2 were false negative and 15 were false positive.
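A minimal sketch of the downhill idea: starting from the hotspot's SUV maximum, neighbours are added only if their SUV does not exceed that of the voxel they are reached from (and stays above a stopping level), which keeps the region from climbing into an adjacent hotspot. The 26-connectivity, stopping threshold, and seed selection here are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np
from collections import deque
from itertools import product

def downhill_region_growing(suv, seed, stop_fraction=0.2):
    """Grow a region from `seed` (index tuple of a local SUV maximum) by accepting
    neighbours whose SUV is <= the SUV of the voxel they were reached from."""
    stop_level = stop_fraction * suv[seed]
    offsets = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]
    region = np.zeros(suv.shape, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        v = queue.popleft()
        for off in offsets:
            n = tuple(np.add(v, off))
            if any(i < 0 or i >= s for i, s in zip(n, suv.shape)):
                continue  # outside the volume
            if region[n]:
                continue
            # "downhill" condition: SUV must not increase along the growth path
            if stop_level <= suv[n] <= suv[v]:
                region[n] = True
                queue.append(n)
    return region
```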
Preliminary results in large bone segmentation from 3D freehand ultrasound
NASA Astrophysics Data System (ADS)
Fanti, Zian; Torres, Fabian; Arámbula Cosío, Fernando
2013-11-01
Computer Assisted Orthopedic Surgery (CAOS) requires a correct registration between the patient in the operating room and the virtual models representing the patient in the computer. In order to increase the precision and accuracy of the registration, a set of new techniques that eliminate the need for fiducial markers has been developed. The majority of these newly developed registration systems are based on costly intraoperative imaging systems such as computed tomography (CT) or magnetic resonance imaging (MRI). An alternative to these methods is the use of an ultrasound (US) imaging system to implement a more cost-efficient intraoperative registration solution. In order to develop the registration solution with the US imaging system, the bone surface is segmented in both preoperative and intraoperative images, and the registration is done using the acquired surfaces. In this paper, we present preliminary results of a new approach to segment the bone surface from ultrasound volumes acquired by means of 3D freehand ultrasound. The method is based on the enhancement of the voxels that belong to the surface and their posterior segmentation. The enhancement process is based on the information provided by eigenanalysis of the multiscale 3D Hessian matrix. The preliminary results show that from the enhanced volume the final bone surfaces can be extracted using singular value thresholding.
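As an illustration of Hessian-based surface (sheet) enhancement, the sketch below builds the Gaussian-smoothed Hessian at a single scale and responds strongly where one eigenvalue is much larger in magnitude than the other two, which is characteristic of a plate-like bone interface. The single scale, the sheetness expression, and the sign convention are assumptions for illustration, not the authors' exact filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sheet_enhancement(volume, sigma=2.0, eps=1e-6):
    """Plate/sheet enhancement from the eigenvalues of the Gaussian-scale Hessian."""
    v = volume.astype(np.float32)
    # second derivatives via Gaussian derivative filters at scale sigma
    H = np.empty(v.shape + (3, 3), dtype=np.float32)
    for i, j in [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]:
        order = [0, 0, 0]
        order[i] += 1
        order[j] += 1
        d = gaussian_filter(v, sigma=sigma, order=order)
        H[..., i, j] = d
        H[..., j, i] = d
    # eigenvalues sorted by magnitude: |l1| <= |l2| <= |l3|
    eig = np.linalg.eigvalsh(H)
    idx = np.argsort(np.abs(eig), axis=-1)
    eig = np.take_along_axis(eig, idx, axis=-1)
    l2, l3 = eig[..., 1], eig[..., 2]
    # sheet-like: |l3| large, |l2| small; bright surface -> l3 negative (assumption)
    sheetness = np.abs(l3) * np.exp(-np.abs(l2) / (np.abs(l3) + eps))
    sheetness[l3 > 0] = 0.0
    return sheetness
```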
Unsupervised fuzzy segmentation of 3D magnetic resonance brain images
NASA Astrophysics Data System (ADS)
Velthuizen, Robert P.; Hall, Lawrence O.; Clarke, Laurence P.; Bensaid, Amine M.; Arrington, J. A.; Silbiger, Martin L.
1993-07-01
Unsupervised fuzzy methods are proposed for segmentation of 3D Magnetic Resonance images of the brain. Fuzzy c-means (FCM) has shown promising results for segmentation of single slices. FCM has been investigated for volume segmentations, both by combining results of single slices and by segmenting the full volume. Different strategies and initializations have been tried. In particular, two approaches have been used: (1) a method by which, iteratively, the furthest sample is split off to form a new cluster center, and (2) the traditional FCM in which the membership grade matrix is initialized in some way. Results have been compared with volume segmentations by k-means and with two supervised methods, k-nearest neighbors and region growing. Results of individual segmentations are presented as well as comparisons on the application of the different methods to a number of tumor patient data sets.
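A minimal sketch of the standard fuzzy c-means update equations applied to voxel intensities (a single feature, fuzzifier m = 2); the initialization and feature choice here are illustrative assumptions rather than the strategies compared in the study:

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Fuzzy c-means on a 1D feature vector x (e.g. flattened voxel intensities).

    Returns the cluster centers and the membership matrix U (n_samples, n_clusters)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    centers = rng.choice(x.ravel(), size=n_clusters, replace=False).reshape(1, -1)
    for _ in range(n_iter):
        # distances of every sample to every center (avoid exact zeros)
        d = np.abs(x - centers) + 1e-12
        # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        power = 2.0 / (m - 1.0)
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** power, axis=2)
        # center update: weighted means with weights u^m
        new_centers = (u ** m * x).sum(axis=0) / (u ** m).sum(axis=0)
        if np.max(np.abs(new_centers - centers)) < tol:
            centers = new_centers.reshape(1, -1)
            break
        centers = new_centers.reshape(1, -1)
    return centers.ravel(), u
```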
Validation of Body Volume Acquisition by Using Elliptical Zone Method.
Chiu, C-Y; Pease, D L; Fawkner, S; Sanders, R H
2016-12-01
The elliptical zone method (E-Zone) can be used to obtain reliable body volume data, including total body volume and segmental volumes, with inexpensive and portable equipment. The purpose of this research was to assess the accuracy of body volume data obtained from E-Zone by comparing them with those acquired from the 3D photonic scanning method (3DPS). 17 male participants with diverse somatotypes were recruited. Each participant was scanned twice on the same day by a 3D whole-body scanner and photographed twice for the E-Zone analysis. The body volume data acquired from 3DPS were regarded as the reference against which the accuracy of E-Zone was assessed. The relative technical error of measurement (TEM) of total body volume estimations was around 3% for E-Zone. E-Zone can estimate the segmental volumes of the upper torso, lower torso, thigh, shank, upper arm and lower arm accurately (relative TEM < 10%), but the accuracy for small segments including the neck, hand and foot was poor. In summary, E-Zone provides a reliable, inexpensive, portable, and simple method to obtain reasonable estimates of total body volume and to indicate segmental volume distribution. © Georg Thieme Verlag KG Stuttgart · New York.
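The elliptical zone idea models each body segment as a stack of thin elliptical discs whose widths and depths are digitized from two orthogonal photographs; the segment volume is then the sum of the disc volumes. A minimal sketch under that assumption (the slice height and inputs are placeholders):

```python
import numpy as np

def ezone_segment_volume(widths_cm, depths_cm, slice_height_cm=2.0):
    """Volume (in litres) of a body segment modelled as stacked elliptical discs.

    widths_cm, depths_cm : per-slice breadth and depth measured from the frontal
                           and sagittal photographs (placeholder inputs)
    """
    a = np.asarray(widths_cm) / 2.0   # semi-axis from the frontal view
    b = np.asarray(depths_cm) / 2.0   # semi-axis from the sagittal view
    disc_volumes = np.pi * a * b * slice_height_cm   # cm^3 per disc
    return disc_volumes.sum() / 1000.0               # cm^3 -> litres
```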
Serag, Ahmed; Wilkinson, Alastair G.; Telford, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Anblagan, Devasuda; Macnaught, Gillian; Semple, Scott I.; Boardman, James P.
2017-01-01
Quantitative volumes from brain magnetic resonance imaging (MRI) acquired across the life course may be useful for investigating long term effects of risk and resilience factors for brain development and healthy aging, and for understanding early life determinants of adult brain structure. Therefore, there is an increasing need for automated segmentation tools that can be applied to images acquired at different life stages. We developed an automatic segmentation method for human brain MRI, where a sliding window approach and a multi-class random forest classifier were applied to high-dimensional feature vectors for accurate segmentation. The method performed well on brain MRI data acquired from 179 individuals, analyzed in three age groups: newborns (38–42 weeks gestational age), children and adolescents (4–17 years) and adults (35–71 years). As the method can learn from partially labeled datasets, it can be used to segment large-scale datasets efficiently. It could also be applied to different populations and imaging modalities across the life course. PMID:28163680
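A minimal sketch of voxel-wise classification with a multi-class random forest, where each voxel's feature vector is taken from a sliding-window cube of intensities around it; the patch size, feature choice, and the scikit-learn classifier are illustrative assumptions, not the authors' feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(volume, coords, radius=2):
    """Flattened intensity patch around each voxel coordinate as its feature vector.
    Coordinates are assumed to lie at least `radius` voxels inside the volume."""
    feats = []
    for z, y, x in coords:
        patch = volume[z - radius:z + radius + 1,
                       y - radius:y + radius + 1,
                       x - radius:x + radius + 1]
        feats.append(patch.ravel())
    return np.asarray(feats)

def train_and_predict(volume, train_coords, train_labels, test_coords):
    """Train on labelled voxels (partially labelled data suffices) and predict the rest."""
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    clf.fit(patch_features(volume, train_coords), train_labels)
    return clf.predict(patch_features(volume, test_coords))
```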
Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen
2013-10-01
Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer's disease classification task. As an additional benefit, the technique also allows one to compute informative "error bars" on the volume estimates of individual structures. Copyright © 2013 Elsevier B.V. All rights reserved.
Uehara, Erica; Deguchi, Tetsuo
2017-12-07
We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that with no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue for an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAPs consisting of hard cylindrical segments with various different values of the segment radius. Here, by the average size we mean the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot as the number of segments at which topological entropic repulsions are balanced with the knot complexity in the average size. The additivity suggests the local knot picture.
Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald
2016-01-01
Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t-tests. Neither atlas-based (26.63 ± 3.15 mm³) nor model-based (26.87 ± 2.99 mm³) measurements were significantly different from manual volume measurements (26.65 ± 4.0 mm³). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.
Training labels for hippocampal segmentation based on the EADC-ADNI harmonized hippocampal protocol.
Boccardi, Marina; Bocchetta, Martina; Morency, Félix C; Collins, D Louis; Nishikawa, Masami; Ganzola, Rossana; Grothe, Michel J; Wolf, Dominik; Redolfi, Alberto; Pievani, Michela; Antelmi, Luigi; Fellgiebel, Andreas; Matsuda, Hiroshi; Teipel, Stefan; Duchesne, Simon; Jack, Clifford R; Frisoni, Giovanni B
2015-02-01
The European Alzheimer's Disease Consortium and Alzheimer's Disease Neuroimaging Initiative (ADNI) Harmonized Protocol (HarP) is a Delphi definition of manual hippocampal segmentation from magnetic resonance imaging (MRI) that can be used as the standard of truth to train new tracers, and to validate automated segmentation algorithms. Training requires large and representative data sets of segmented hippocampi. This work aims to produce a set of HarP labels for the proper training and certification of tracers and algorithms. Sixty-eight 1.5 T and 67 3 T volumetric structural ADNI scans from different subjects, balanced by age, medial temporal atrophy, and scanner manufacturer, were segmented by five qualified HarP tracers whose absolute interrater intraclass correlation coefficients were 0.953 and 0.975 (left and right). Labels were validated as HarP compliant through centralized quality check and correction. Hippocampal volumes (mm³) were as follows: controls: left = 3060 (standard deviation [SD], 502), right = 3120 (SD, 897); mild cognitive impairment (MCI): left = 2596 (SD, 447), right = 2686 (SD, 473); and Alzheimer's disease (AD): left = 2301 (SD, 492), right = 2445 (SD, 525). Volumes significantly correlated with atrophy severity at Scheltens' scale (Spearman's ρ ≤ -0.468, P ≤ .0005). Cerebrospinal fluid spaces (mm³) were as follows: controls: left = 23 (32), right = 25 (25); MCI: left = 15 (13), right = 22 (16); and AD: left = 11 (13), right = 20 (25). Five subjects (3.7%) presented with unusual anatomy. This work provides reference hippocampal labels for the training and certification of automated segmentation algorithms. The publicly released labels will allow the widespread implementation of the standard segmentation protocol. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT
NASA Astrophysics Data System (ADS)
Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi
2017-05-01
Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
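A common baseline for multi-atlas label fusion is per-voxel majority voting over the propagated atlas labels; the sketch below shows that baseline (not the authors' modified fusion that resolves overlaps and gaps), assuming the atlas label maps have already been registered to the target CTA:

```python
import numpy as np

def majority_vote_fusion(propagated_labels):
    """Fuse registered atlas label maps by per-voxel majority vote.

    propagated_labels : list of integer label volumes, all in target space
    Returns the fused label volume (ties resolved toward the lowest label id).
    """
    stack = np.stack(propagated_labels, axis=0)            # (n_atlases, z, y, x)
    n_labels = int(stack.max()) + 1
    votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
    for lab in range(n_labels):
        votes[lab] = (stack == lab).sum(axis=0)            # vote count per label
    return votes.argmax(axis=0).astype(np.int16)
```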
Quantification of intraventricular blood clot in MR-guided focused ultrasound surgery
NASA Astrophysics Data System (ADS)
Hess, Maggie; Looi, Thomas; Lasso, Andras; Fichtinger, Gabor; Drake, James
2015-03-01
Intraventricular hemorrhage (IVH) affects nearly 15% of preterm infants. It can lead to ventricular dilation and cognitive impairment. To ablate IVH clots, MR-guided focused ultrasound surgery (MRgFUS) is being investigated. This procedure requires accurate, fast, and consistent quantification of ventricle and clot volumes. We developed a semi-autonomous segmentation (SAS) algorithm for measuring changes in the ventricle and clot volumes. Images are normalized, and then ventricle and clot masks are registered to the images. Voxels of the registered masks and voxels obtained by thresholding the normalized images are used as seed points for competitive region growing, which provides the final segmentation. The user selects the areas of interest for correspondence after thresholding, and these selections are the final seeds for region growing. SAS was evaluated on a porcine IVH model and compared to ground truth manual segmentation (MS) for accuracy, efficiency, and consistency. Accuracy was determined by comparing clot and ventricle volumes produced by SAS and MS, and by comparing contours using 95% Hausdorff distances between the two labels. In a two one-sided test of equivalence, SAS and MS were found to be statistically equivalent (p < 0.01). SAS was on average 15 times faster than MS (p < 0.01). Consistency was determined by repeated segmentation of the same image by both SAS and manual methods, with SAS being significantly more consistent than MS (p < 0.05). SAS is a viable method to quantify the IVH clot and the lateral brain ventricles, and it is being used in a large-scale porcine study of MRgFUS treatment for IVH clot lysis.
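The 95% Hausdorff distance used to compare contours can be computed from the two boundary point sets as the 95th percentile of the symmetric nearest-neighbour distances; a minimal sketch using a SciPy KD-tree, with physical voxel spacing assumed to be applied to the coordinates beforehand:

```python
import numpy as np
from scipy.spatial import cKDTree

def hausdorff_95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets of shape (n, 3)."""
    d_ab, _ = cKDTree(points_b).query(points_a)   # each point in A to its nearest in B
    d_ba, _ = cKDTree(points_a).query(points_b)   # each point in B to its nearest in A
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```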
NASA Astrophysics Data System (ADS)
Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.
2017-09-01
Digital rock physics carries the dogmatic concept of having to segment volume images for quantitative analysis, but segmentation rejects huge amounts of signal information. Information that is essential for the analysis of difficult and marginally resolved samples, such as materials with very small features, is lost during segmentation. In X-ray nanotomography reconstructions of Hod chalk we observed partial volume voxels with an abundance that limits segmentation-based analysis. We therefore investigated the suitability of greyscale analysis for establishing statistical representative elementary volumes (sREV) for the important petrophysical parameters of this type of chalk, namely porosity, specific surface area and diffusive tortuosity, by using volume images without segmenting the datasets. Instead, grey-level intensities were transformed to a voxel-level porosity estimate using a Gaussian mixture model. A simple model assumption was made that allowed a two-point correlation function to be formulated for surface area estimates using Bayes' theorem. The same assumption enables random walk simulations in the presence of severe partial volume effects. The established sREVs illustrate that in compacted chalk, these simulations cannot be performed on binary representations without increasing the resolution of the imaging system to a point where the spatial restrictions of the represented sample volume render the precision of the measurement unacceptable. We illustrate this by analyzing the origins of variance in the quantitative analysis of volume images, i.e. resolution dependence and intersample and intrasample variance. Although we cannot make any claims on the accuracy of the approach, eliminating the segmentation step from the analysis enables comparative studies with higher precision and repeatability.
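One way to turn grey values into voxel-level porosity estimates, as described above, is to fit a two-component Gaussian mixture (pore and solid) and use each voxel's posterior pore probability as its porosity; the sketch below uses scikit-learn, and the two-component assumption and subsampling are illustrative choices rather than the authors' exact model:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def voxelwise_porosity(volume, n_sample=200_000, seed=0):
    """Posterior probability of the 'pore' (low-intensity) component for every voxel."""
    rng = np.random.default_rng(seed)
    values = volume.reshape(-1, 1).astype(np.float64)
    sample = values[rng.choice(values.shape[0],
                               size=min(n_sample, values.shape[0]),
                               replace=False)]
    gmm = GaussianMixture(n_components=2, random_state=seed).fit(sample)
    pore = int(np.argmin(gmm.means_.ravel()))      # pore phase = darker component
    porosity = gmm.predict_proba(values)[:, pore]
    return porosity.reshape(volume.shape)

# the mean of the returned map gives a sample-scale (total) porosity estimate
```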
Large deep neural networks for MS lesion segmentation
NASA Astrophysics Data System (ADS)
Prieto, Juan C.; Cavallari, Michele; Palotai, Miklos; Morales Pinzon, Alfredo; Egorova, Svetlana; Styner, Martin; Guttmann, Charles R. G.
2017-02-01
Multiple sclerosis (MS) is a multi-factorial autoimmune disorder, characterized by spatial and temporal dissemination of brain lesions that are visible in T2-weighted and Proton Density (PD) MRI. Assessment of lesion burden is useful for monitoring the course of the disease and assessing correlates of clinical outcomes. Although there are established semi-automated methods to measure lesion volume, most of them require human interaction and editing, which are time consuming and limit the ability to analyze large sets of data with high accuracy. The primary objective of this work is to improve existing segmentation algorithms and accelerate the time-consuming operation of identifying and validating MS lesions. In this paper, a Deep Neural Network for MS Lesion Segmentation is implemented. The MS lesion samples are extracted from the Partners Comprehensive Longitudinal Investigation of Multiple Sclerosis (CLIMB) study. A set of 900 subjects with T2, PD and manually corrected label map images was used to train a Deep Neural Network to identify MS lesions. Initial tests using this network achieved a 90% accuracy rate. A secondary goal was to enable this data repository for big data analysis by using this algorithm to segment the remaining cases available in the CLIMB repository.
Kalpathy-Cramer, Jayashree; Zhao, Binsheng; Goldgof, Dmitry; Gu, Yuhua; Wang, Xingwei; Yang, Hao; Tan, Yongqiang; Gillies, Robert; Napel, Sandy
2016-08-01
Tumor volume estimation, as well as accurate and reproducible border segmentation in medical images, are important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of lung nodule segmentation algorithms by assessing several measures of spatial overlap and volume measurement. Nodule sizes varied from 29 μl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05), underscoring the need for assessing performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies.
MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-21
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
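The probability-density smoothing step described above relies on anisotropic 3D diffusion. A minimal Perona-Malik-style sketch is shown below, assuming the probability map is a 3D numpy array; the parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def anisotropic_diffusion(prob, iterations=10, kappa=0.1, step=0.15):
    """Edge-preserving (Perona-Malik style) diffusion of a 3D probability map."""
    u = prob.astype(np.float64).copy()
    for _ in range(iterations):
        grads = np.gradient(u)                      # one gradient array per axis
        mag2 = sum(g ** 2 for g in grads)
        c = np.exp(-mag2 / (kappa ** 2))            # conductance: small at strong edges
        div = np.zeros_like(u)
        for axis, g in enumerate(grads):
            div += np.gradient(c * g, axis=axis)    # divergence of c * grad(u)
        u += step * div
    return u
```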
MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes
NASA Astrophysics Data System (ADS)
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-01
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
Automated 3D Ultrasound Image Segmentation to Aid Breast Cancer Image Interpretation
Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Yuan, Jie; Wang, Xueding; Carson, Paul L.
2015-01-01
Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer. PMID:26547117
Automated 3D ultrasound image segmentation for assistant diagnosis of breast cancer
NASA Astrophysics Data System (ADS)
Wang, Yuxin; Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Du, Sidan; Yuan, Jie; Wang, Xueding; Carson, Paul L.
2016-04-01
Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.
Semi-automated brain tumor and edema segmentation using MRI.
Xie, Kai; Yang, Jie; Zhang, Z G; Zhu, Y M
2005-10-01
Manual segmentation of brain tumors from magnetic resonance images is a challenging and time-consuming task. A semi-automated method has been developed for brain tumor and edema segmentation that will provide objective, reproducible segmentations that are close to the manual results. Additionally, the method segments non-enhancing brain tumor and edema from healthy tissues in magnetic resonance images. In this study, a semi-automated method was developed for brain tumor and edema segmentation and volume measurement using magnetic resonance imaging (MRI). Some novel algorithms for tumor segmentation from MRI were integrated in this medical diagnosis system. We exploit a hybrid level set (HLS) segmentation method driven simultaneously by region and boundary information: region information serves as a robust propagation force, while boundary information serves as an accurate stopping functional. Ten patients with brain tumors of different size, shape and location were selected; a total of 246 axial tumor-containing slices obtained from the 10 patients were used to evaluate the effectiveness of the segmentation methods. This method was applied to 10 non-enhancing brain tumors and satisfactory results were achieved. Two quantitative measures of tumor segmentation quality, namely the correspondence ratio (CR) and percent matching (PM), were computed. For the segmentation of brain tumor, the volume total PM varies from 79.12 to 93.25% with a mean of 85.67 ± 4.38%, while the volume total CR varies from 0.74 to 0.91 with a mean of 0.84 ± 0.07. For the segmentation of edema, the volume total PM varies from 72.86 to 87.29% with a mean of 79.54 ± 4.18%, while the volume total CR varies from 0.69 to 0.85 with a mean of 0.79 ± 0.08. The HLS segmentation method performs better than the classical level set (LS) segmentation method in PM and CR. The results of this research may have potential applications both as a staging procedure and as a method of evaluating tumor response during treatment; the method can be used as a clinical image analysis tool for doctors or radiologists.
Thomas, Marianna S; Newman, David; Leinhard, Olof Dahlqvist; Kasmai, Bahman; Greenwood, Richard; Malcolm, Paul N; Karlsson, Anette; Rosander, Johannes; Borga, Magnus; Toms, Andoni P
2014-09-01
To measure the test-retest reproducibility of an automated system for quantifying whole body and compartmental muscle volumes using wide bore 3 T MRI. Thirty volunteers stratified by body mass index underwent whole body 3 T MRI, two-point Dixon sequences, on two separate occasions. Water-fat separation was performed, with automated segmentation of whole body, torso, upper and lower leg volumes, and manually segmented lower leg muscle volumes. Mean automated total body muscle volume was 19.32 L (SD 9.1) and 19.28 L (SD 9.12) for the first and second acquisitions (intraclass correlation coefficient (ICC) = 1.0, 95% limits of agreement -0.32 to 0.2 L). ICCs for all automated test-retest muscle volumes were almost perfect (0.99-1.0), with 95% limits of agreement within 1.8-6.6% of mean volume. Automated muscle volume measurements correlate closely with manual quantification (right lower leg: manual 1.68 L (2SD 0.6) compared to automated 1.64 L (2SD 0.6); left lower leg: manual 1.69 L (2SD 0.64) compared to automated 1.63 L (SD 0.61); correlation coefficients for automated and manual segmentation were 0.94-0.96). Fully automated whole body and compartmental muscle volume quantification can be achieved rapidly on a 3 T wide bore system with very low margins of error, excellent test-retest reliability and excellent correlation to manual segmentation in the lower leg. Sarcopaenia is an important reversible complication of a number of diseases. Manual quantification of muscle volume is time-consuming and expensive. Muscles can be imaged using in and out of phase MRI. Automated atlas-based segmentation can identify muscle groups. Automated muscle volume segmentation is reproducible and can replace manual measurements.
Fuzzy object models for newborn brain MR image segmentation
NASA Astrophysics Data System (ADS)
Kobashi, Syoji; Udupa, Jayaram K.
2013-03-01
Newborn brain MR image segmentation is a challenging problem because of the variety of sizes, shapes and MR signals, even though it is fundamental for quantitative radiology in brain MR images. Because of the large difference between the adult brain and the newborn brain, it is difficult to directly apply conventional methods to the newborn brain. Inspired by the original fuzzy object model introduced by Udupa et al. at SPIE Medical Imaging 2011, called the fuzzy shape object model (FSOM) here, this paper introduces the fuzzy intensity object model (FIOM) and proposes a new image segmentation method which combines FSOM and FIOM into fuzzy connected (FC) image segmentation. The fuzzy object models are built from training datasets in which the cerebral parenchyma is delineated by experts. After registering the FSOM with the image under evaluation, the proposed method roughly recognizes the cerebral parenchyma region based on prior knowledge of location, shape, and MR signal given by the registered FSOM and FIOM. Then, FC image segmentation delineates the cerebral parenchyma using the fuzzy object models. The proposed method has been evaluated on 9 newborn brain MR images using a leave-one-out strategy. The revised age was between -1 and 2 months. Quantitative evaluation using the false positive volume fraction (FPVF) and false negative volume fraction (FNVF) has been conducted. On the evaluation data, an FPVF of 0.75% and an FNVF of 3.75% were achieved. More data collection and testing are underway.
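The false positive and false negative volume fractions reported above can be computed from a candidate mask and an expert delineation. The short sketch below assumes the common convention of normalizing both counts by the reference volume; the function name is hypothetical.

```python
import numpy as np

def fpvf_fnvf(segmentation, reference):
    """False positive / false negative volume fractions (percent) w.r.t. a reference mask."""
    seg, ref = segmentation.astype(bool), reference.astype(bool)
    ref_volume = ref.sum()
    fpvf = np.logical_and(seg, ~ref).sum() / ref_volume   # extra voxels labelled by mistake
    fnvf = np.logical_and(~seg, ref).sum() / ref_volume   # reference voxels that were missed
    return 100.0 * fpvf, 100.0 * fnvf
```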
Comparison of in vivo 3D cone-beam computed tomography tooth volume measurement protocols.
Forst, Darren; Nijjar, Simrit; Flores-Mir, Carlos; Carey, Jason; Secanell, Marc; Lagravere, Manuel
2014-12-23
The objective of this study was to analyze a set of previously developed and proposed image segmentation protocols for precision, in terms of both intra- and inter-rater reliability, for in vivo tooth volume measurements using cone-beam computed tomography (CBCT) images. Six 3D volume segmentation procedures were proposed and tested for intra- and inter-rater reliability to quantify maxillary first molar volumes. Ten randomly selected maxillary first molars were measured in vivo in random order three times, with 10 days separation between measurements. Intra- and inter-rater agreement for all segmentation procedures was assessed using the intra-class correlation coefficient (ICC). The highest precision was obtained for automated thresholding with manual refinements. A tooth volume measurement protocol for CBCT images employing automated segmentation with manual human refinement on a 2D slice-by-slice basis in all three planes of space possessed excellent intra- and inter-rater reliability. Three-dimensional volume measurements of the entire tooth structure are more precise than 3D volume measurements of only the dental roots apical to the cemento-enamel junction (CEJ).
Three-Dimensional Eyeball and Orbit Volume Modification After LeFort III Midface Distraction.
Smektala, Tomasz; Nysjö, Johan; Thor, Andreas; Homik, Aleksandra; Sporniak-Tutak, Katarzyna; Safranow, Krzysztof; Dowgierd, Krzysztof; Olszewski, Raphael
2015-07-01
The aim of our study was to evaluate orbital volume modification with LeFort III midface distraction in patients with craniosynostosis and its influence on eyeball volume and axial diameter modification. Orbital volume was assessed by a semiautomatic segmentation method based on deformable surface models and on 3-dimensional (3D) interaction with haptics. The eyeball volumes and diameters were automatically calculated after manual segmentation of computed tomographic scans with 3D Slicer software. The mean, minimal, and maximal differences as well as the standard deviation and intraclass correlation coefficient (ICC) for intraobserver and interobserver measurement reliability were calculated. The Wilcoxon signed rank test was used to compare measured values before and after surgery. P < 0.05 was considered statistically significant. Intraobserver and interobserver ICC for haptic-aided semiautomatic orbital volume measurements were 0.98 and 0.99, respectively. The intraobserver and interobserver ICC values for manual segmentation of the eyeball volume were 0.87 and 0.86, respectively. The orbital volume increased significantly after surgery: 30.32% (mean, 5.96 mL) for the left orbit and 31.04% (mean, 6.31 mL) for the right orbit. The mean increase in eyeball volume was 12.3%. The mean increases in the eyeball axial dimensions were 7.3%, 9.3%, and 4.4% for the X-, Y-, and Z-axes, respectively. The Wilcoxon signed rank test showed that the differences between preoperative and postoperative eyeball volumes, as well as in the diameters along the X- and Y-axes, were statistically significant. Midface distraction in patients with syndromic craniostenosis results in a significant increase (P < 0.05) in the orbit and eyeball volumes. The two methods (haptic-aided semiautomatic segmentation and manual 3D Slicer segmentation) are reproducible techniques for orbit and eyeball volume measurements.
Near roadway air pollution across a spatially extensive road and cycling network.
Farrell, William; Weichenthal, Scott; Goldberg, Mark; Valois, Marie-France; Shekarrizfard, Maryam; Hatzopoulou, Marianne
2016-05-01
This study investigates the variability in near-road concentrations of ultra-fine particles (UFP). Our results are based on a mobile data collection campaign conducted in 2012 in Montreal, Canada using instrumented bicycles and covering approximately 475 km of unique roadways. The spatial extent of the data collected included a diverse array of roads and land use patterns. Average concentrations of UFP per roadway segment varied greatly across the study area (1411-192,340 particles/cm³) as well as across the different visits to the same segment. Mixed effects linear regression models were estimated for UFP (R² = 43.80%), incorporating a wide range of predictors including land-use, built environment, road characteristics, and meteorology. Temperature and wind speed had a large negative effect on near-road concentrations of UFP. Both the day of the week and time of day had a significant effect, with Tuesdays and afternoon periods positively associated with UFP. Since UFP are largely associated with traffic emissions and considering the wide spatial extent of our data collection campaign, it was impossible to collect traffic volume data. For this purpose, we used simulated data for traffic volumes and speeds across the region and observed a positive effect for volumes and negative effect for speed. Finally, proximity to truck routes was also associated with higher UFP concentrations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Tang, X; Liu, H; Chen, L; Wang, Q; Luo, B; Xiang, N; He, Y; Zhu, W; Zhang, J
2018-05-24
To investigate the accuracy of two semi-automatic segmentation measurements based on a magnetic resonance imaging (MRI) three-dimensional (3D) Cube fast spin echo (FSE)-flex sequence in phantoms, and to evaluate the feasibility of determining the volumetric alterations of orbital fat (OF) and total extraocular muscles (TEM) in patients with thyroid-associated ophthalmopathy (TAO) by semi-automatic segmentation. Forty-four fatty (n=22) and lean (n=22) phantoms were scanned using a Cube FSE-flex sequence with a 3 T MRI system. Their volumes were measured by manual segmentation (MS) and two semi-automatic segmentation algorithms (region growing [RG] and multi-dimensional threshold [MDT]). Pearson correlation and Bland-Altman analysis were used to evaluate the measuring accuracy of MS, RG, and MDT in phantoms as compared with the true volume. Then, OF and TEM volumes of 15 TAO patients and 15 normal controls were measured using MDT. Paired-sample t-tests were used to compare the volumes and volume ratios of different orbital tissues between TAO patients and controls. Each segmentation method (MS, RG, MDT) had a significant correlation (p<0.01) with the true volume. There was minimal bias for MS, and stronger agreement between MDT and the true volume than between RG and the true volume, in both fatty and lean phantoms. The reproducibility of Cube FSE-flex-determined MDT was adequate. The volumetric ratios of OF/globe (p<0.01), TEM/globe (p<0.01), whole orbit/globe (p<0.01) and bone orbit/globe (p<0.01) were significantly greater in TAO patients than in healthy controls. MRI Cube FSE-flex-determined MDT is a relatively accurate semi-automatic segmentation method that can be used to evaluate OF and TEM volumes in the clinic. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
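The Bland-Altman comparison against the true phantom volumes reduces to a bias and 95% limits of agreement over the paired differences. A minimal sketch follows; the example volumes are made up for illustration only.

```python
import numpy as np

def bland_altman(measured, truth):
    """Bias and 95% limits of agreement between measured and reference volumes."""
    diff = np.asarray(measured, dtype=float) - np.asarray(truth, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical phantom volumes (mL): semi-automatic measurements vs. known truth.
bias, (lo, hi) = bland_altman([10.2, 14.9, 20.3, 24.8], [10.0, 15.0, 20.0, 25.0])
```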
SU-F-J-95: Impact of Shape Complexity On the Accuracy of Gradient-Based PET Volume Delineation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dance, M; Wu, G; Gao, Y
2016-06-15
Purpose: Explore the correlation of tumor shape complexity with PET target volume accuracy when delineated with a gradient-based segmentation tool. Methods: A total of 24 clinically realistic digital PET Monte Carlo (MC) phantoms of NSCLC were used in the study. The phantoms simulated 29 thoracic lesions (lung primary and mediastinal lymph nodes) of varying size, shape, location, and ¹⁸F-FDG activity. A program was developed to calculate a curvature vector along the outline, and the standard deviation of this vector was used as a metric to quantify a shape's "complexity score". This complexity score was calculated for standard geometric shapes and MC-generated target volumes in PET phantom images. All lesions were contoured using a commercially available gradient-based segmentation tool, and the differences in volume from the MC-generated volumes were calculated as the measure of segmentation accuracy. Results: The average absolute percent difference in volumes between the MC volumes and gradient-based volumes was 11% (0.4%-48.4%). The complexity score showed strong correlation with standard geometric shapes. However, no relationship was found between the complexity score and the accuracy of segmentation by the gradient-based tool on MC-simulated tumors (R² = 0.156). When the lesions were grouped into primary lung lesions and mediastinal/mediastinal-adjacent lesions, the average absolute percent differences in volumes were 6% and 29%, respectively. The former group is more isolated and the latter is more surrounded by tissues with relatively high SUV background. Conclusion: The shape complexity of NSCLC lesions has little effect on the accuracy of the gradient-based segmentation method and thus is not a good predictor of uncertainty in target volume delineation. Location of a lesion within a relatively high SUV background may play a more significant role in the accuracy of gradient-based segmentation.
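The complexity score above is the standard deviation of curvature sampled along the lesion outline. A small sketch of such a metric for a closed 2D contour is given below; it is a generic reconstruction, not the authors' program, and the helper names are hypothetical.

```python
import numpy as np

def _periodic_gradient(v):
    """Central differences on a closed (periodic) sequence of coordinates."""
    return (np.roll(v, -1) - np.roll(v, 1)) / 2.0

def complexity_score(contour):
    """Standard deviation of signed curvature along a closed contour (N x 2 points)."""
    x, y = contour[:, 0], contour[:, 1]
    dx, dy = _periodic_gradient(x), _periodic_gradient(y)
    ddx, ddy = _periodic_gradient(dx), _periodic_gradient(dy)
    eps = 1e-12                                   # guard against zero-length steps
    curvature = (dx * ddy - dy * ddx) / np.power(dx ** 2 + dy ** 2 + eps, 1.5)
    return float(np.std(curvature))
```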
Free space-planning solutions in the architecture of multi-storey buildings
NASA Astrophysics Data System (ADS)
Ibragimov, Alexander; Danilov, Alexander
2018-03-01
Here some aspects of the development of steel frame structure design are considered from the standpoint of the geometry and morphogenesis of load-bearing steel structures in civil engineering. An alternative approach to forming structural schemes may be the application of curved steel elements in the main load-bearing system, for example circular and parabolic arches or segments of varying outline and orientation. The considered approach implies creating large internal volumes without loss of load-bearing capacity in the frame. The basic concept makes possible a wide variety of layout and design solutions. The presence of free internal spaces of large volume in buildings of the "skyscraper" type contributes to resolving a great number of problems, including those of a communicative nature.
Sugihara, Fumie; Murata, Satoru; Ueda, Tatsuo; Yasui, Daisuke; Yamaguchi, Hidenori; Miki, Izumi; Kawamoto, Chiaki; Uchida, Eiji; Kumita, Shin-Ichiro
2017-06-01
To investigate haemodynamic changes in hepatocellular carcinoma (HCC) and liver under hepatic artery occlusion. Thirty-eight HCC nodules in 25 patients were included. Computed tomography (CT) during hepatic arteriography (CTHA) with and without balloon occlusion of the hepatic artery was performed. CT attenuation and enhancement volume of HCC and liver with and without balloon occlusion were measured on CTHA. Influence of balloon position (segmental or subsegmental branch) was evaluated based on differences in HCC-to-liver attenuation ratio (H/L ratio) and enhancement volume of HCC and liver. In the segmental group (n = 20), H/L ratio and enhancement volume of HCC and liver were significantly lower with balloon occlusion than without balloon occlusion. However, in the subsegmental group (n = 18), H/L ratio was significantly higher and liver enhancement volume was significantly lower with balloon occlusion; HCC enhancement volume was similar with and without balloon occlusion. Rate of change in H/L ratio and enhancement volume of HCC and liver were lower in the segmental group than in the subsegmental group. There were significantly more perfusion defects in HCC in the segmental group. Hepatic artery occlusion causes haemodynamic changes in HCC and liver, especially with segmental occlusion. • Hepatic artery occlusion causes haemodynamic changes in hepatocellular carcinoma and liver. • Segmental occlusion decreased rate of change in hepatocellular carcinoma-to-liver attenuation ratio. • Subsegmental occlusion increased rate of change in hepatocellular carcinoma-to-liver attenuation ratio. • Hepatic artery occlusion decreased enhancement volume of hepatocellular carcinoma and liver. • Hepatic artery occlusion causes perfusion defects in hepatocellular carcinoma.
Compatible taper equation for loblolly pine
J. P. McClure; R. L. Czaplewski
1986-01-01
Cao's compatible, segmented polynomial taper equation (Q. V. Cao, H. E. Burkhart, and T. A. Max. For. Sci. 26: 71-80. 1980) is fitted to a large loblolly pine data set from the southeastern United States. Equations are presented that predict diameter at a given height, height to a given top diameter, and volume below a given position on the main stem. All...
DOT National Transportation Integrated Search
2014-08-01
This report describes the instrumentation and data acquisition for the center hung segment in the largest : truss bridge in Connecticut, located on the interstate system. The monitoring system was developed as a : joint effort between researchers at ...
Horká, Marie; Karásek, Pavel; Roth, Michal; Šlais, Karel
2017-05-01
In this work, single-piece fused silica capillaries with two different internal diameter segments featuring different inner surface roughness were prepared by a new etching technology with supercritical water and used for volume coupling electrophoresis. The concept of separation and online pre-concentration of analytes in a high conductivity matrix is based on online large-volume sample pre-concentration by the combination of transient isotachophoretic stacking and sweeping of charged proteins in micellar electrokinetic chromatography using a non-ionogenic surfactant. The modified surface roughness step helped to significantly narrow the zones of the examined analytes. The sweeping and separating steps were accomplished simultaneously by the use of phosphate buffer (pH 7) containing ethanol, the non-ionogenic surfactant Brij 35, and polyethylene glycol (PEG 10000) after sample injection. A sample solution of large volume (maximum 3.7 μL), dissolved in physiological saline solution, was injected into the wider end of the capillary, with an inlet inner diameter of 150, 185 or 218 μm. The calibration plots were linear (R² ∼ 0.9993) over a 0.060-1 μg/mL range for the proteins used, albumin and cytochrome c. The peak area RSDs from at least 20 independent measurements were below 3.2%. This online pre-concentration technique produced a more than 196-fold increase in sensitivity, and it can be applied for detection of, e.g. the presence of albumin in urine (0.060 μg/mL). © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Gloger, Oliver; Tönnies, Klaus; Bülow, Robin; Völzke, Henry
2017-07-01
To develop the first fully automated 3D spleen segmentation framework derived from T1-weighted magnetic resonance (MR) imaging data and to verify its performance for spleen delineation and volumetry. This approach considers the issue of low contrast between the spleen and adjacent tissue in non-contrast-enhanced MR images. Native T1-weighted MR volume data were acquired on a 1.5 T MR system in an epidemiological study. We analyzed random subsamples of MR examinations without pathologies to develop and verify the spleen segmentation framework. The framework is modularized to include different kinds of prior knowledge into the segmentation pipeline. Classification by support vector machines differentiates between five different shape types in computed foreground probability maps and recognizes characteristic spleen regions in axial slices of MR volume data. A spleen-shape space generated by training produces subject-specific prior shape knowledge that is then incorporated into a final 3D level set segmentation method. Individually adapted shape-driven forces as well as image-driven forces resulting from refined foreground probability maps steer the level set successfully to segment the spleen. The framework achieves promising segmentation results with mean Dice coefficients of nearly 0.91 and low volumetric mean errors of 6.3%. The presented spleen segmentation approach can delineate spleen tissue in native MR volume data. Several kinds of prior shape knowledge, including subject-specific 3D prior shape knowledge, can be used to guide segmentation processes, achieving promising results.
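The Dice coefficient and volumetric error quoted above are simple voxel-count comparisons between the automated and reference spleen masks; a generic sketch (not the authors' code) is shown below.

```python
import numpy as np

def dice_and_volume_error(segmentation, reference):
    """Dice coefficient and percent volumetric error of a binary segmentation."""
    seg, ref = segmentation.astype(bool), reference.astype(bool)
    intersection = np.logical_and(seg, ref).sum()
    dice = 2.0 * intersection / (seg.sum() + ref.sum())
    vol_error = 100.0 * abs(int(seg.sum()) - int(ref.sum())) / ref.sum()
    return float(dice), float(vol_error)
```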
Bayesian automated cortical segmentation for neonatal MRI
NASA Astrophysics Data System (ADS)
Chou, Zane; Paquette, Natacha; Ganesh, Bhavana; Wang, Yalin; Ceschin, Rafael; Nelson, Marvin D.; Macyszyn, Luke; Gaonkar, Bilwaj; Panigrahy, Ashok; Lepore, Natasha
2017-11-01
Several attempts have been made in the past few years to develop and implement an automated segmentation of neonatal brain structural MRI. However, accurate automated MRI segmentation remains challenging in this population because of the low signal-to-noise ratio, large partial volume effects and inter-individual anatomical variability of the neonatal brain. In this paper, we propose a learning method for segmenting the whole brain cortical grey matter on neonatal T2-weighted images. We trained our algorithm using a neonatal dataset composed of 3 full-term and 4 preterm infants scanned at term equivalent age. Our segmentation pipeline combines the FAST algorithm from the FSL library software and a Bayesian segmentation approach to create a threshold matrix that minimizes the error of mislabeling brain tissue types. Our method shows promising results with our pilot training set. In both preterm and full-term neonates, automated Bayesian segmentation generates a smoother and more consistent parcellation compared to FAST, while successfully removing the subcortical structure and cleaning the edges of the cortical grey matter. This method shows promising refinement of the FAST segmentation by considerably reducing the manual input and editing required from the user, further improving the reliability and processing time of neonatal MR image analysis. Further improvements will include a larger dataset of training images acquired from different manufacturers.
3D segmentation of lung CT data with graph-cuts: analysis of parameter sensitivities
NASA Astrophysics Data System (ADS)
Cha, Jung won; Dunlap, Neal; Wang, Brian; Amini, Amir
2016-03-01
Lung boundary image segmentation is important for many tasks, including, for example, the development of radiation treatment plans for subjects with thoracic malignancies. In this paper, we describe a method and parameter settings for accurate 3D lung boundary segmentation based on graph-cuts from X-ray CT data. Even though several researchers have previously used graph-cuts for image segmentation, to date no systematic studies have been performed regarding the range of parameters that gives accurate results. The energy function in the graph-cuts algorithm requires 3 suitable parameter settings: K, a large constant for assigning seed points; c, the similarity coefficient for n-links; and λ, the terminal coefficient for t-links. We analyzed the parameter sensitivity with four lung data sets from subjects with lung cancer using error metrics. Large values of K created artifacts on segmented images, and a value of c much larger than λ influenced the balance between the boundary term and the data term in the energy function, leading to unacceptable segmentation results. For a range of parameter settings, we performed 3D image segmentation, and in each case compared the results with the expert-delineated lung boundaries. We used simple 6-neighborhood systems for the n-links in 3D. The 3D image segmentation took 10 minutes for a 512x512x118 to 512x512x190 lung CT image volume. Our results indicate that the graph-cuts algorithm was more sensitive to the K and λ parameter settings than to the c parameter and, furthermore, that amongst the range of parameters tested, K=5 and λ=0.5 yielded good results.
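To make the roles of the three parameters concrete, the sketch below builds the kind of capacities a graph-cut formulation of this problem would use: c scales the n-link similarity term between 6-connected neighbours, λ scales the t-link data term, and K ties seed voxels hard to their label. It only constructs the weights; a max-flow solver (for example PyMaxflow) would then compute the minimum cut. The simple squared-difference data term, the boolean seed masks, and all defaults are assumptions, not the paper's exact energy.

```python
import numpy as np

def graphcut_weights(volume, fg_seeds, bg_seeds, fg_mean, bg_mean,
                     K=5.0, c=1.0, lam=0.5, sigma=0.1):
    """n-link and t-link capacities for a binary graph-cut energy on a 3D volume."""
    v = volume.astype(np.float64)
    # n-links: similarity between 6-connected neighbours along each axis, scaled by c.
    n_links = [c * np.exp(-np.diff(v, axis=ax) ** 2 / (2 * sigma ** 2))
               for ax in range(v.ndim)]
    # t-links: data term scaled by lambda, from crude foreground/background models.
    t_fg = lam * (v - bg_mean) ** 2        # high when a voxel looks unlike background
    t_bg = lam * (v - fg_mean) ** 2        # high when a voxel looks unlike foreground
    # Seed voxels (boolean masks) are tied to their label with the large constant K.
    t_fg[fg_seeds], t_bg[fg_seeds] = K, 0.0
    t_fg[bg_seeds], t_bg[bg_seeds] = 0.0, K
    return n_links, t_fg, t_bg
```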
The Segmental Morphometric Properties of the Horse Cervical Spinal Cord: A Study of Cadaver
Bahar, Sadullah; Bolat, Durmus; Selcuk, Muhammet Lutfi
2013-01-01
Although the cervical spinal cord (CSC) of the horse has particular importance in diseases of the CNS, there is very little information about its segmental morphometry. The objective of the present study was to determine the morphometric features of the CSC segments in the horse and possible relationships among the morphometric features. The segmented CSC from five mature animals was used. Length, weight, diameter, and volume measurements of the segments were performed macroscopically. Lengths and diameters of segments were measured histologically, and area and volume measurements were performed using stereological methods. The length, weight, and volume of the CSC were 61.6 ± 3.2 cm, 107.2 ± 10.4 g, and 95.5 ± 8.3 cm³, respectively. The length of the segments increased from C1 to C3, while it decreased from C3 to C8. The gross section (GS), white matter (WM), grey matter (GM), dorsal horn (DH), and ventral horn (VH) had the largest cross-section areas at C8. The highest volume was found for the total segment and WM at C4, for GM, DH, and VH at C7, and for the central canal (CC) at C3. The data obtained not only contribute to the knowledge of the normal anatomy of the CSC but may also provide reference data for veterinary pathologists and clinicians. PMID:23476145
Bidirectional segmentation of prostate capsule from ultrasound volumes: an improved strategy
NASA Astrophysics Data System (ADS)
Wei, Liyang; Narayanan, Ramkrishnan; Kumar, Dinesh; Fenster, Aaron; Barqawi, Albaha; Werahera, Priya; Crawford, E. David; Suri, Jasjit S.
2008-03-01
Prostate volume is an indirect indicator for several prostate diseases. Volume estimation is a desired requirement during prostate biopsy, therapy and clinical follow-up, and image segmentation is thus necessary. Previously, the discrete dynamic contour (DDC) was implemented unidirectionally in the orthogonal direction on a slice-by-slice basis for prostate boundary estimation. This suffered from the drawback that it needed stopping criteria during propagation of the segmentation procedure from slice to slice. To overcome this, axial DDC was implemented, but this suffered from the fact that the central axis never remains fixed and wobbles during propagation of the segmentation from slice to slice. The effect of this was a multi-fold reconstructed surface. This paper presents a bidirectional DDC approach, thereby removing both drawbacks. Our bidirectional DDC protocol was tested on a clinical dataset of 28 3-D ultrasound image volumes acquired using a side-fire Philips transrectal ultrasound probe. We demonstrate that the orthogonal bidirectional DDC strategy achieved the most accurate volume estimation compared with the previously published orthogonal unidirectional DDC and axial DDC methods. Compared to the ground truth, the mean volume estimation errors were 18.48%, 9.21% and 7.82% for the unidirectional, axial and bidirectional DDC methods, respectively. The segmentation architecture is implemented in Visual C++ in a Windows environment.
Segmentation propagation for the automated quantification of ventricle volume from serial MRI
NASA Astrophysics Data System (ADS)
Linguraru, Marius George; Butman, John A.
2009-02-01
Accurate ventricle volume estimates could potentially improve the understanding and diagnosis of communicating hydrocephalus. Postoperative communicating hydrocephalus has been recognized in patients with brain tumors where the changes in ventricle volume can be difficult to identify, particularly over short time intervals. Because of the complex alterations of brain morphology in these patients, the segmentation of brain ventricles is challenging. Our method evaluates ventricle size from serial brain MRI examinations; we (i) combined serial images to increase SNR, (ii) automatically segmented this image to generate a ventricle template using fast marching methods and geodesic active contours, and (iii) propagated the segmentation using deformable registration of the original MRI datasets. By applying this deformation to the ventricle template, serial volume estimates were obtained in a robust manner from routine clinical images (0.93 overlap) and their variation analyzed.
Fully automatic segmentation of white matter hyperintensities in MR images of the elderly.
Admiraal-Behloul, F; van den Heuvel, D M J; Olofsen, H; van Osch, M J P; van der Grond, J; van Buchem, M A; Reiber, J H C
2005-11-15
The role of quantitative image analysis in large clinical trials is continuously increasing. Several methods are available for performing white matter hyperintensity (WMH) volume quantification. They vary in the amount of human interaction involved. In this paper, we describe a fully automatic segmentation that was used to quantify WMHs in a large clinical trial on elderly subjects. Our segmentation method combines information from 3 different MR images: proton density (PD), T2-weighted and fluid-attenuated inversion recovery (FLAIR) images; our method uses an established artificial intelligence technique (a fuzzy inference system) and does not require extensive computations. The reproducibility of the segmentation was evaluated in 9 patients who underwent scan-rescan with repositioning; an intraclass correlation coefficient (ICC) of 0.91 was obtained. The effect of differences in image resolution was tested in 44 patients, scanned with 6- and 3-mm slice thickness FLAIR images; we obtained an ICC value of 0.99. The accuracy of the segmentation was evaluated on 100 patients for whom manual delineation of WMHs was available; the obtained ICC was 0.98 and the similarity index was 0.75. Besides the fact that the approach demonstrated very high volumetric and spatial agreement with expert delineation, the software did not require more than 2 min per patient (from loading the images to saving the results) on a Pentium-4 processor (512 MB RAM).
Dreizin, David; Bodanapally, Uttam K; Neerchal, Nagaraj; Tirada, Nikki; Patlas, Michael; Herskovits, Edward
2016-11-01
Manually segmented traumatic pelvic hematoma volumes are strongly predictive of active bleeding at conventional angiography, but the method is time intensive, limiting its clinical applicability. We compared volumetric analysis using semi-automated region growing segmentation to manual segmentation and diameter-based size estimates in patients with pelvic hematomas after blunt pelvic trauma. A 14-patient cohort was selected in an anonymous randomized fashion from a dataset of patients with pelvic binders at MDCT, collected retrospectively as part of a HIPAA-compliant IRB-approved study from January 2008 to December 2013. To evaluate intermethod differences, one reader (R1) performed three volume measurements using the manual technique and three volume measurements using the semi-automated technique. To evaluate interobserver differences for semi-automated segmentation, a second reader (R2) performed three semi-automated measurements. One-way analysis of variance was used to compare differences in mean volumes. Time effort was also compared. Correlation between the two methods as well as two shorthand appraisals (greatest diameter, and the ABC/2 method for estimating ellipsoid volumes) was assessed with Spearman's rho (r). Intraobserver variability was lower for semi-automated compared to manual segmentation, with standard deviations ranging between ±5-32 mL and ±17-84 mL, respectively (p = 0.0003). There was no significant difference in mean volumes between the two readers' semi-automated measurements (p = 0.83); however, means were lower for the semi-automated compared with the manual technique (manual: mean and SD 309.6 ± 139 mL; R1 semi-auto: 229.6 ± 88.2 mL, p = 0.004; R2 semi-auto: 243.79 ± 99.7 mL, p = 0.021). Despite differences in means, the correlation between the two methods was very strong and highly significant (r = 0.91, p < 0.001). Correlations with diameter-based methods were only moderate and nonsignificant. Mean semi-automated segmentation time effort was 2 min and 6 s and 2 min and 35 s for R1 and R2, respectively, vs. 22 min and 8 s for manual segmentation. Semi-automated pelvic hematoma volumes correlate strongly with manually segmented volumes. Since semi-automated segmentation can be performed reliably and efficiently, volumetric analysis of traumatic pelvic hematomas is potentially valuable at the point-of-care.
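Semi-automated region growing of the kind described above can be sketched as a seeded flood fill that accepts 6-connected voxels whose intensity stays close to the running region mean; the hematoma volume then follows from the voxel count. This is a generic illustration with hypothetical names and tolerances, not the vendor tool used in the study.

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, tolerance):
    """Seeded 6-connected region growing on a 3D volume (z, y, x indexing)."""
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    total, count = float(volume[seed]), 1
    queue = deque([seed])
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in neighbours:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and abs(volume[nz, ny, nx] - total / count) <= tolerance):
                mask[nz, ny, nx] = True
                total += float(volume[nz, ny, nx])
                count += 1
                queue.append((nz, ny, nx))
    return mask

# Hematoma volume in mL, assuming a known voxel volume in mm^3 (illustrative values):
# volume_ml = region_grow(ct, seed=(40, 210, 180), tolerance=30).sum() * voxel_mm3 / 1000.0
```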
NASA Astrophysics Data System (ADS)
Tang, Xiaoying; Kutten, Kwame; Ceritoglu, Can; Mori, Susumu; Miller, Michael I.
2015-03-01
In this paper, we propose and validate a fully automated pipeline for simultaneous skull-stripping and lateral ventricle segmentation using T1-weighted images. The pipeline is built upon a segmentation algorithm entitled fast multi-atlas likelihood-fusion (MALF) which utilizes multiple T1 atlases that have been pre-segmented into six whole-brain labels - the gray matter, the white matter, the cerebrospinal fluid, the lateral ventricles, the skull, and the background of the entire image. This algorithm, MALF, was designed for estimating brain anatomical structures in the framework of coordinate changes via large diffeomorphisms. In the proposed pipeline, we use a variant of MALF to estimate those six whole-brain labels in the test T1-weighted image. The three tissue labels (gray matter, white matter, and cerebrospinal fluid) and the lateral ventricles are then grouped together to form a binary brain mask to which we apply morphological smoothing so as to create the final mask for brain extraction. For computational purposes, all input images to MALF are down-sampled by a factor of two. In addition, small deformations are used for the changes of coordinates. This substantially reduces the computational complexity, hence we use the term "fast MALF". The skull-stripping performance is qualitatively evaluated on a total of 486 brain scans from a longitudinal study on Alzheimer dementia. Quantitative error analysis is carried out on 36 scans for evaluating the accuracy of the pipeline in segmenting the lateral ventricle. The volumes of the automated lateral ventricle segmentations, obtained from the proposed pipeline, are compared across three different clinical groups. The ventricle volumes from our pipeline are found to be sensitive to the diagnosis.
A fully automated system for quantification of background parenchymal enhancement in breast DCE-MRI
NASA Astrophysics Data System (ADS)
Ufuk Dalmiş, Mehmet; Gubern-Mérida, Albert; Borelli, Cristina; Vreemann, Suzan; Mann, Ritse M.; Karssemeijer, Nico
2016-03-01
Background parenchymal enhancement (BPE) observed in breast dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) has been identified as an important biomarker associated with risk for developing breast cancer. In this study, we present a fully automated framework for quantification of BPE. We initially segmented fibroglandular tissue (FGT) of the breasts using an improved version of an existing method. Subsequently, we computed BPEabs (volume of the enhancing tissue), BPErf (BPEabs divided by FGT volume) and BPErb (BPEabs divided by breast volume), using different relative enhancement threshold values between 1% and 100%. To evaluate and compare the previous and improved FGT segmentation methods, we used 20 breast DCE-MRI scans and we computed Dice similarity coefficient (DSC) values with respect to manual segmentations. For evaluation of the BPE quantification, we used a dataset of 95 breast DCE-MRI scans. Two radiologists, in individual reading sessions, visually analyzed the dataset and categorized each breast into minimal, mild, moderate and marked BPE. To measure the correlation between automated BPE values and the radiologists' assessments, we converted these values into ordinal categories and we used Spearman's rho as a measure of correlation. According to our results, the new segmentation method obtained an average DSC of 0.81 ± 0.09, which was significantly higher (p<0.001) compared to the previous method (0.76 ± 0.10). The highest correlation values between automated BPE categories and radiologists' assessments were obtained with the BPErf measurement (r=0.55, r=0.49, p<0.001 for both), while the correlation between the scores given by the two radiologists was 0.82 (p<0.001). The presented framework can be used to systematically investigate the correlation between BPE and risk in large screening cohorts.
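The three BPE measures can be expressed directly in terms of a relative-enhancement map and the FGT and breast masks. The sketch below assumes pre- and post-contrast volumes aligned voxel-to-voxel and an illustrative 10% enhancement threshold (the study sweeps thresholds from 1% to 100%); the function name and voxel size are hypothetical.

```python
import numpy as np

def bpe_metrics(pre, post, fgt_mask, breast_mask, threshold=0.10, voxel_ml=0.001):
    """BPEabs (mL), BPErf and BPErb at a given relative-enhancement threshold."""
    rel_enh = (post - pre) / np.maximum(pre, 1e-6)              # relative enhancement map
    enhancing = (rel_enh > threshold) & fgt_mask.astype(bool)
    bpe_abs = enhancing.sum() * voxel_ml                        # enhancing tissue volume
    bpe_rf = enhancing.sum() / max(int(fgt_mask.sum()), 1)      # fraction of FGT volume
    bpe_rb = enhancing.sum() / max(int(breast_mask.sum()), 1)   # fraction of breast volume
    return bpe_abs, bpe_rf, bpe_rb
```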
Multiclassifier fusion in human brain MR segmentation: modelling convergence.
Heckemann, Rolf A; Hajnal, Joseph V; Aljabar, Paul; Rueckert, Daniel; Hammers, Alexander
2006-01-01
Segmentations of MR images of the human brain can be generated by propagating an existing atlas label volume to the target image. By fusing multiple propagated label volumes, the segmentation can be improved. We developed a model that predicts the improvement of labelling accuracy and precision based on the number of segmentations used as input. Using a cross-validation study on brain image data as well as numerical simulations, we verified the model. Fit parameters of this model are potential indicators of the quality of a given label propagation method or the consistency of the input segmentations used.
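The simplest fusion rule in this setting is a per-voxel majority vote over the propagated label volumes; accuracy typically rises with the number of inputs before converging, which is the behaviour the model above describes. A minimal sketch of such a vote (an illustration, not the authors' fusion code) follows.

```python
import numpy as np

def majority_vote(label_volumes):
    """Fuse several propagated label volumes by per-voxel majority vote."""
    stack = np.stack(label_volumes, axis=0)          # shape: (n_inputs, z, y, x)
    labels = np.unique(stack)
    votes = np.stack([(stack == lab).sum(axis=0) for lab in labels], axis=0)
    return labels[np.argmax(votes, axis=0)]          # most frequent label per voxel
```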
Aptel, Florent; Beccat, Sylvain; Fortoul, Vincent; Denis, Philippe
2011-08-01
To compare anterior chamber volume (ACV), iris volume, and iridolenticular contact (ILC) area before and after laser peripheral iridotomy (LPI) in eyes with pigment dispersion syndrome (PDS) using anterior segment optical coherence tomography (AS OCT) and image processing software. Cross-sectional study. Eighteen eyes of 18 patients with PDS; 30 eyes of 30 controls matched for age, gender, and refraction. Anterior segment OCT imaging was performed in all eyes before LPI and 1, 4, and 12 weeks after LPI. At each visit, 12 cross-sectional images of the AS were taken: 4 in bright conditions with accommodation (accommodation), 4 in bright conditions without accommodation (physiological miosis), and 4 under dark conditions (physiologic mydriasis). Biometric parameters were estimated using AS OCT radial sections and customized image-processing software. Anterior chamber volume, iris volume-to-length ratio, ILC area, AS OCT anterior chamber depth, and A-scan ultrasonography axial length. Before LPI, PDS eyes had a significantly greater ACV and ILC area than control eyes (P<0.01) and a significantly smaller iris volume-to-length ratio than the controls (P<0.05). After LPI, ACV and ILC area decreased significantly in PDS eyes, but iris volume-to-length ratio increased significantly (P<0.02) and was not significantly different from that of controls. These biometric changes were stable over time. Iris volume-to-length ratio decreased significantly from accommodation to mydriasis and from miosis to mydriasis, both in PDS and control eyes (P<0.01). In PDS eyes, ILC area decreased significantly from accommodation to mydriasis, both before and after LPI (P<0.01). On multivariate analysis, greater anterior chamber (AC) volume (P<0.02) and larger AC depth (P<0.05) before LPI were significant predictors of a larger ILC area. Pigment dispersion syndrome eyes do not have an iris that is abnormally large, relative to the AS size, but have a weakly resistant iris that is stretched and pushed against the lens when there is a pressure difference across the iris. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2011 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Boundary fitting based segmentation of fluorescence microscopy images
NASA Astrophysics Data System (ADS)
Lee, Soonam; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
2015-03-01
Segmentation is a fundamental step in quantifying characteristics, such as volume, shape, and orientation of cells and/or tissue. However, quantification of these characteristics still poses a challenge due to the unique properties of microscopy volumes. This paper proposes a 2D segmentation method that utilizes a combination of adaptive and global thresholding, potentials, z direction refinement, branch pruning, end point matching, and boundary fitting methods to delineate tubular objects in microscopy volumes. Experimental results demonstrate that the proposed method achieves better performance than an active contours based scheme.
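The combination of global and adaptive thresholding mentioned above can be approximated, for a single 2D slice, by intersecting an Otsu threshold with a local mean threshold; this is a loose sketch using scikit-image, not the authors' pipeline, and the block size is an assumed value.

```python
import numpy as np
from skimage.filters import threshold_otsu, threshold_local

def combined_threshold(slice_2d, block_size=51):
    """Foreground mask from a global Otsu threshold intersected with a local threshold."""
    global_mask = slice_2d > threshold_otsu(slice_2d)
    local_mask = slice_2d > threshold_local(slice_2d, block_size)
    return np.logical_and(global_mask, local_mask)
```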
On the unsupervised analysis of domain-specific Chinese texts
Deng, Ke; Bol, Peter K.; Li, Kate J.; Liu, Jun S.
2016-01-01
With the growing availability of digitized text data both publicly and privately, there is a great need for effective computational tools to automatically extract information from texts. Because the Chinese language differs most significantly from alphabet-based languages in not specifying word boundaries, most existing Chinese text-mining methods require a prespecified vocabulary and/or a large relevant training corpus, which may not be available in some applications. We introduce an unsupervised method, top-down word discovery and segmentation (TopWORDS), for simultaneously discovering and segmenting words and phrases from large volumes of unstructured Chinese texts, and propose ways to order discovered words and conduct higher-level context analyses. TopWORDS is particularly useful for mining online and domain-specific texts where the underlying vocabulary is unknown or the texts of interest differ significantly from available training corpora. When outputs from TopWORDS are fed into context analysis tools such as topic modeling, word embedding, and association pattern finding, the results are as good as or better than those from using outputs of a supervised segmentation method. PMID:27185919
NASA Astrophysics Data System (ADS)
Lemieux, Louis
2001-07-01
A new fully automatic algorithm for the segmentation of the brain and cerebro-spinal fluid (CSF) from T1-weighted volume MRI scans of the head was specifically developed in the context of serial intra-cranial volumetry. The method is an extension of a previously published brain extraction algorithm. The brain mask is used as a basis for CSF segmentation based on morphological operations, automatic histogram analysis and thresholding. Brain segmentation is then obtained by iterative tracking of the brain-CSF interface. Grey matter (GM), white matter (WM) and CSF volumes are calculated based on a model of intensity probability distribution that includes partial volume effects. Accuracy was assessed using a digital phantom scan. Reproducibility was assessed by segmenting pairs of scans from 20 normal subjects scanned 8 months apart and 11 patients with epilepsy scanned 3.5 years apart. Segmentation accuracy as measured by overlap was 98% for the brain and 96% for the intra-cranial tissues. The volume errors were: total brain (TBV): -1.0%, intra-cranial (ICV): 0.1%, CSF: +4.8%. For repeated scans, matching resulted in improved reproducibility. In the controls, the coefficient of reliability (CR) was 1.5% for the TBV and 1.0% for the ICV. In the patients, the CR for the ICV was 1.2%.
Brenner, L; Marhofer, P; Kettner, S C; Willschke, H; Machata, A-M; Al-Zoraigi, U; Lundblad, M; Lönnqvist, P A
2011-08-01
Despite the large amount of literature on caudal anaesthesia in children, the issue of the volume of local anaesthetic and its cranial spread is still not settled. Thus, the aim of the present prospective randomized study was to evaluate the cranial spread of caudally administered local anaesthetics in children by means of real-time ultrasound, with a special focus on the effects of using different volumes of local anaesthetic. Seventy-five children, aged 1 month to 6 yr, undergoing inguinal hernia repair or more distal surgery were randomized to receive a caudal block with 0.7, 1.0, or 1.3 ml kg⁻¹ ropivacaine. The cranial spread of the local anaesthetic within the spinal canal was assessed by real-time ultrasound scanning; the absolute cranial segmental level and the cranial level relative to the conus medullaris were determined. All the blocks were judged to be clinically successful. A significant correlation was found between the injected volume and the cranial level reached by the local anaesthetic, both with regard to the absolute cranial segmental level and the cranial level relative to the conus medullaris. The main finding of the present study was a positive, but numerically small, correlation between the injected volume of local anaesthetic and the cranial spread of caudally administered local anaesthetics. Therefore, prediction of the cranial spread of local anaesthetic from the injected volume was not possible. EudraCT Number: 2008-007627-40.
Kloth, C; Thaiss, W M; Hetzel, J; Ditt, H; Grosse, U; Nikolaou, K; Horger, M
2016-07-01
To assess the impact of endobronchial coiling on the segment bronchus cross-sectional area and volumes in patients with lung emphysema using quantitative chest-CT measurements. Thirty patients (female = 15; median age = 65.36 years) received chest-CT before and after endobronchial coiling for lung volume reduction (LVR) between January 2010 and December 2014. Thin-slice (0.6 mm) non-enhanced image data sets were acquired both at end-inspiration and end-expiration using helical technique and 120 kV/100-150 mAs. Clinical response was defined as an increase in the walking distance (Six-minute walk test; 6MWT) after LVR-therapy. Additionally, pulmonary function test (PFT) measurements were used for clinical correlation. In the treated segmental bronchia, the cross-sectional lumen area showed significant reduction (p < 0.05) in inspiration and tendency towards enlargement in expiration (p > 0.05). In the ipsilateral lobes, the lumina showed no significant changes. In the contralateral lung, we found tendency towards increased cross-sectional area in inspiration (p = 0.06). Volumes of the treated segments correlated with the treated segmental bronchial lumina in expiration (r = 0.80, p < 0.001). Clinical correlation with changes in 6MWT/PFT showed a significant decrease of the inspiratory volume of the treated lobe in responders only. Endobronchial coiling causes significant decrease in the cross-sectional area of treated segment bronchi in inspiration and a slight increase in expiration accompanied by a volume reduction. • Endobronchial coiling has indirect impact on cross-sectional area of treated segment bronchi • Volume changes of treated lobes correlate with changes in bronchial cross-sectional area • Coil-induced effects reflect their stabilizing and stiffening impact on lung parenchyma • Endobronchial coiling reduces bronchial collapsing compensating the loss of elasticity.
Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation
NASA Astrophysics Data System (ADS)
Tobon-Gomez, Catalina; Sukno, Federico M.; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F.
2012-07-01
Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18% LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.
Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation.
Tobon-Gomez, Catalina; Sukno, Federico M; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F
2012-07-07
Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.
A coarse-to-fine approach for pericardial effusion localization and segmentation in chest CT scans
NASA Astrophysics Data System (ADS)
Liu, Jiamin; Chellamuthu, Karthik; Lu, Le; Bagheri, Mohammadhadi; Summers, Ronald M.
2018-02-01
Pericardial effusion on CT scans demonstrates very high shape and volume variability and very low contrast to adjacent structures. This inhibits traditional automated segmentation methods from achieving high accuracies. Deep neural networks have been widely used for image segmentation in CT scans. In this work, we present a two-stage method for pericardial effusion localization and segmentation. For the first step, we localize the pericardial area from the entire CT volume, providing a reliable bounding box for the more refined segmentation step. A coarse-scaled holistically-nested convolutional networks (HNN) model is trained on entire CT volume. The resulting HNN per-pixel probability maps are then threshold to produce a bounding box covering the pericardial area. For the second step, a fine-scaled HNN model is trained only on the bounding box region for effusion segmentation to reduce the background distraction. Quantitative evaluation is performed on a dataset of 25 CT scans of patient (1206 images) with pericardial effusion. The segmentation accuracy of our two-stage method, measured by Dice Similarity Coefficient (DSC), is 75.59+/-12.04%, which is significantly better than the segmentation accuracy (62.74+/-15.20%) of only using the coarse-scaled HNN model.
Automatic delineation of functional lung volumes with 68Ga-ventilation/perfusion PET/CT.
Le Roux, Pierre-Yves; Siva, Shankar; Callahan, Jason; Claudic, Yannis; Bourhis, David; Steinfort, Daniel P; Hicks, Rodney J; Hofman, Michael S
2017-10-10
Functional volumes computed from 68 Ga-ventilation/perfusion (V/Q) PET/CT, which we have shown to correlate with pulmonary function test parameters (PFTs), have potential diagnostic utility in a variety of clinical applications, including radiotherapy planning. An automatic segmentation method would facilitate delineation of such volumes. The aim of this study was to develop an automated threshold-based approach to delineate functional volumes that best correlates with manual delineation. Thirty lung cancer patients undergoing both V/Q PET/CT and PFTs were analyzed. Images were acquired following inhalation of Galligas and, subsequently, intravenous administration of 68 Ga-macroaggreted-albumin (MAA). Using visually defined manual contours as the reference standard, various cutoff values, expressed as a percentage of the maximal pixel value, were applied. The average volume difference and Dice similarity coefficient (DSC) were calculated, measuring the similarity of the automatic segmentation and the reference standard. Pearson's correlation was also calculated to compare automated volumes with manual volumes, and automated volumes optimized to PFT indices. For ventilation volumes, mean volume difference was lowest (- 0.4%) using a 15%max threshold with Pearson's coefficient of 0.71. Applying this cutoff, median DSC was 0.93 (0.87-0.95). Nevertheless, limits of agreement in volume differences were large (- 31.0 and 30.2%) with differences ranging from - 40.4 to + 33.0%. For perfusion volumes, mean volume difference was lowest and Pearson's coefficient was highest using a 15%max threshold (3.3% and 0.81, respectively). Applying this cutoff, median DSC was 0.93 (0.88-0.93). Nevertheless, limits of agreement were again large (- 21.1 and 27.8%) with volume differences ranging from - 18.6 to + 35.5%. Using the 15%max threshold, moderate correlation was demonstrated with FEV1/FVC (r = 0.48 and r = 0.46 for ventilation and perfusion images, respectively). No correlation was found between other PFT indices. To automatically delineate functional volumes with 68 Ga-V/Q PET/CT, the most appropriate cutoff was 15%max for both ventilation and perfusion images. However, using this unique threshold systematically provided unacceptable variability compared to the reference volume and relatively poor correlation with PFT parameters. Accordingly, a visually adapted semi-automatic method is favored, enabling rapid and quantitative delineation of lung functional volumes with 68 Ga-V/Q PET/CT.
[Target volume segmentation of PET images by an iterative method based on threshold value].
Castro, P; Huerga, C; Glaría, L A; Plaza, R; Rodado, S; Marín, M D; Mañas, A; Serrada, A; Núñez, L
2014-01-01
An automatic segmentation method is presented for PET images based on an iterative approximation by threshold value that includes the influence of both lesion size and background present during the acquisition. Optimal threshold values that represent a correct segmentation of volumes were determined based on a PET phantom study that contained different sizes spheres and different known radiation environments. These optimal values were normalized to background and adjusted by regression techniques to a two-variable function: lesion volume and signal-to-background ratio (SBR). This adjustment function was used to build an iterative segmentation method and then, based in this mention, a procedure of automatic delineation was proposed. This procedure was validated on phantom images and its viability was confirmed by retrospectively applying it on two oncology patients. The resulting adjustment function obtained had a linear dependence with the SBR and was inversely proportional and negative with the volume. During the validation of the proposed method, it was found that the volume deviations respect to its real value and CT volume were below 10% and 9%, respectively, except for lesions with a volume below 0.6 ml. The automatic segmentation method proposed can be applied in clinical practice to tumor radiotherapy treatment planning in a simple and reliable way with a precision close to the resolution of PET images. Copyright © 2013 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
NASA Astrophysics Data System (ADS)
Anastasieva, E. A.; Voropaeva, A. A.; Sadovoy, M. A.; Kirilova, I. A.
2017-09-01
The problem of large bone defects replacement, formed after segmental bone resections, remains an actual issue of modern orthopedics. It is known that the autograft is the most acceptable material for the replacement of bone tissue; however, due to its small volume and physical properties, it has limited usage. Our goal is to analyze the results of the experiments and studies on replacement of large bone defects after resection of the bone tumor. The problem is justified by the complicated osteoconduction and osteointegration; because it is proved that the reconstruction of the microcirculatory bloodstream is difficult in the presence of damage more than 4 cm2. It was revealed that using of allograft in combination with additional components is comparable in effectiveness, including long-term period, with autograft usage. It is promising to combine plastic allogenous material, capable of reconstructing defects of various configuration intraoperatively, with the necessary chemotherapy with controlled desorption to maintain effective concentration of drug.
Fast and robust segmentation of the striatum using deep convolutional neural networks.
Choi, Hongyoon; Jin, Kyong Hwan
2016-12-01
Automated segmentation of brain structures is an important task in structural and functional image analysis. We developed a fast and accurate method for the striatum segmentation using deep convolutional neural networks (CNN). T1 magnetic resonance (MR) images were used for our CNN-based segmentation, which require neither image feature extraction nor nonlinear transformation. We employed two serial CNN, Global and Local CNN: The Global CNN determined approximate locations of the striatum. It performed a regression of input MR images fitted to smoothed segmentation maps of the striatum. From the output volume of Global CNN, cropped MR volumes which included the striatum were extracted. The cropped MR volumes and the output volumes of Global CNN were used for inputs of Local CNN. Local CNN predicted the accurate label of all voxels. Segmentation results were compared with a widely used segmentation method, FreeSurfer. Our method showed higher Dice Similarity Coefficient (DSC) (0.893±0.017 vs. 0.786±0.015) and precision score (0.905±0.018 vs. 0.690±0.022) than FreeSurfer-based striatum segmentation (p=0.06). Our approach was also tested using another independent dataset, which showed high DSC (0.826±0.038) comparable with that of FreeSurfer. Comparison with existing method Segmentation performance of our proposed method was comparable with that of FreeSurfer. The running time of our approach was approximately three seconds. We suggested a fast and accurate deep CNN-based segmentation for small brain structures which can be widely applied to brain image analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S
2018-02-01
Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences in several types of brain post-extraction such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state of the art algorithms for brain extraction work on weighted T1 and T2, being not adequate for non-whole brain images such as the case of T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly in T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention which is unacceptable. The second method performed well for all volumes, being entirely automatic. State of the art algorithms for brain extraction are mainly semi-automatic, requiring a correct initialization by the user and knowledge of the software. These methods can't deal with partial volumes and/or need information from atlas which is not available in T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling which deteriorates significantly voxel intensity structures making segmentation tasks difficult. The proposed method can overcome all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. The development of this work will lead to an improvement of automatic brain lesions segmentation in T2*FLASH@7T volumes, becoming more important when lesions such as cortical Multiple-Sclerosis need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.
Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin
2008-11-01
We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.
Connection method of separated luminal regions of intestine from CT volumes
NASA Astrophysics Data System (ADS)
Oda, Masahiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Watanabe, Osamu; Ando, Takafumi; Hirooka, Yoshiki; Goto, Hidemi; Mori, Kensaku
2015-03-01
This paper proposes a connection method of separated luminal regions of the intestine for Crohn's disease diagnosis. Crohn's disease is an inflammatory disease of the digestive tract. Capsule or conventional endoscopic diagnosis is performed for Crohn's disease diagnosis. However, parts of the intestines may not be observed in the endoscopic diagnosis if intestinal stenosis occurs. Endoscopes cannot pass through the stenosed parts. CT image-based diagnosis is developed as an alternative choice of the Crohn's disease. CT image-based diagnosis enables physicians to observe the entire intestines even if stenosed parts exist. CAD systems for Crohn's disease using CT volumes are recently developed. Such CAD systems need to reconstruct separated luminal regions of the intestines to analyze intestines. We propose a connection method of separated luminal regions of the intestines segmented from CT volumes. The luminal regions of the intestines are segmented from a CT volume. The centerlines of the luminal regions are calculated by using a thinning process. We enumerate all the possible sequences of the centerline segments. In this work, we newly introduce a condition using distance between connected ends points of the centerline segments. This condition eliminates unnatural connections of the centerline segments. Also, this condition reduces processing time. After generating a sequence list of the centerline segments, the correct sequence is obtained by using an evaluation function. We connect the luminal regions based on the correct sequence. Our experiments using four CT volumes showed that our method connected 6.5 out of 8.0 centerline segments per case. Processing times of the proposed method were reduced from the previous method.
Im, Hyung-Jun; Bradshaw, Tyler; Solaiyappan, Meiyappan; Cho, Steve Y
2018-02-01
Numerous methods to segment tumors using 18 F-fluorodeoxyglucose positron emission tomography (FDG PET) have been introduced. Metabolic tumor volume (MTV) refers to the metabolically active volume of the tumor segmented using FDG PET, and has been shown to be useful in predicting patient outcome and in assessing treatment response. Also, tumor segmentation using FDG PET has useful applications in radiotherapy treatment planning. Despite extensive research on MTV showing promising results, MTV is not used in standard clinical practice yet, mainly because there is no consensus on the optimal method to segment tumors in FDG PET images. In this review, we discuss currently available methods to measure MTV using FDG PET, and assess the advantages and disadvantages of the methods.
Family history density of alcoholism relates to left nucleus accumbens volume in adolescent girls.
Cservenka, Anita; Gillespie, Alicia J; Michael, Paul G; Nagel, Bonnie J
2015-01-01
A family history of alcoholism is a significant risk factor for the development of alcohol use disorders (AUDs). Because common structural abnormalities are present in reward and affective brain regions in alcoholics and those with familial alcoholism, the current study examined the relationship between familial loading of AUDs and volumes of the amygdala and nucleus accumbens (NAcc) in largely alcohol-naive adolescents, ages 12-16 years (N = 140). The amygdala and NAcc were delineated on each participant's T1-weighted anatomical scan, using FMRIB Software Library's FMRIB Integrated Registration & Segmentation Tool, and visually inspected for accuracy and volume outliers. In the 140 participants with accurate segmentation (75 male/65 female), subcortical volumes were represented as a ratio to intracranial volume (ICV). A family history density (FHD) score was calculated for each adolescent based on the presence of AUDs in first- and second-degree relatives (range: 0.03-1.50; higher scores represent a greater prevalence of familial AUDs). Multiple regressions, with age and sex controlled for, examined the association between FHD and left and right amygdala and NAcc volume/ICV. There was a significant positive relationship between FHD and left NAcc volume/ICV (ΔR² = .04, p = .02). Post hoc regressions indicated that this effect was only significant in females (ΔR² = .11, p = .006). This finding suggests that the degree of familial alcoholism, genetic or otherwise, is associated with alterations in reward-related brain structure. Further work will be necessary to examine whether FHD is related to future alcohol-related problems and reward-related behaviors.
NASA Astrophysics Data System (ADS)
Lee, Han Sang; Kim, Hyeun A.; Kim, Hyeonjin; Hong, Helen; Yoon, Young Cheol; Kim, Junmo
2016-03-01
In spite of its clinical importance in diagnosis of osteoarthritis, segmentation of cartilage in knee MRI remains a challenging task due to its shape variability and low contrast with surrounding soft tissues and synovial fluid. In this paper, we propose a multi-atlas segmentation of cartilage in knee MRI with sequential atlas registrations and locallyweighted voting (LWV). First, bone is segmented by sequential volume- and object-based registrations and LWV. Second, to overcome the shape variability of cartilage, cartilage is segmented by bone-mask-based registration and LWV. In experiments, the proposed method improved the bone segmentation by reducing misclassified bone region, and enhanced the cartilage segmentation by preventing cartilage leakage into surrounding similar intensity region, with the help of sequential registrations and LWV.
CT volumetry of the skeletal tissues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brindle, James M.; Alexandre Trindade, A.; Pichardo, Jose C.
2006-10-15
Computed tomography (CT) is an important and widely used modality in the diagnosis and treatment of various cancers. In the field of molecular radiotherapy, the use of spongiosa volume (combined tissues of the bone marrow and bone trabeculae) has been suggested as a means to improve the patient-specificity of bone marrow dose estimates. The noninvasive estimation of an organ volume comes with some degree of error or variation from the true organ volume. The present study explores the ability to obtain estimates of spongiosa volume or its surrogate via manual image segmentation. The variation among different segmentation raters was exploredmore » and found not to be statistically significant (p value >0.05). Accuracy was assessed by having several raters manually segment a polyvinyl chloride (PVC) pipe with known volumes. Segmentation of the outer region of the PVC pipe resulted in mean percent errors as great as 15% while segmentation of the pipe's inner region resulted in mean percent errors within {approx}5%. Differences between volumes estimated with the high-resolution CT data set (typical of ex vivo skeletal scans) and the low-resolution CT data set (typical of in vivo skeletal scans) were also explored using both patient CT images and a PVC pipe phantom. While a statistically significant difference (p value <0.002) between the high-resolution and low-resolution data sets was observed with excised femoral heads obtained following total hip arthroplasty, the mean difference between high-resolution and low-resolution data sets was found to be only 1.24 and 2.18 cm{sup 3} for spongiosa and cortical bone, respectively. With respect to differences observed with the PVC pipe, the variation between the high-resolution and low-resolution mean percent errors was a high as {approx}20% for the outer region volume estimates and only as high as {approx}6% for the inner region volume estimates. The findings from this study suggest that manual segmentation is a reasonably accurate and reliable means for the in vivo estimation of spongiosa volume. This work also provides a foundation for future studies where spongiosa volumes are estimated by various raters in more comprehensive CT data sets.« less
Xie, Long; Wisse, Laura E M; Das, Sandhitsu R; Wang, Hongzhi; Wolk, David A; Manjón, Jose V; Yushkevich, Paul A
2016-10-01
Quantification of medial temporal lobe (MTL) cortices, including entorhinal cortex (ERC) and perirhinal cortex (PRC), from in vivo MRI is desirable for studying the human memory system as well as in early diagnosis and monitoring of Alzheimer's disease. However, ERC and PRC are commonly over-segmented in T1-weighted (T1w) MRI because of the adjacent meninges that have similar intensity to gray matter in T1 contrast. This introduces errors in the quantification and could potentially confound imaging studies of ERC/PRC. In this paper, we propose to segment MTL cortices along with the adjacent meninges in T1w MRI using an established multi-atlas segmentation framework together with super-resolution technique. Experimental results comparing the proposed pipeline with existing pipelines support the notion that a large portion of meninges is segmented as gray matter by existing algorithms but not by our algorithm. Cross-validation experiments demonstrate promising segmentation accuracy. Further, agreement between the volume and thickness measures from the proposed pipeline and those from the manual segmentations increase dramatically as a result of accounting for the confound of meninges. Evaluated in the context of group discrimination between patients with amnestic mild cognitive impairment and normal controls, the proposed pipeline generates more biologically plausible results and improves the statistical power in discriminating groups in absolute terms comparing to other techniques using T1w MRI. Although the performance of the proposed pipeline is inferior to that using T2-weighted MRI, which is optimized to image MTL sub-structures, the proposed pipeline could still provide important utilities in analyzing many existing large datasets that only have T1w MRI available.
Burton, Rebecca A.B.; Lee, Peter; Casero, Ramón; Garny, Alan; Siedlecka, Urszula; Schneider, Jürgen E.; Kohl, Peter; Grau, Vicente
2014-01-01
Aims Cardiac histo-anatomical organization is a major determinant of function. Changes in tissue structure are a relevant factor in normal and disease development, and form targets of therapeutic interventions. The purpose of this study was to test tools aimed to allow quantitative assessment of cell-type distribution from large histology and magnetic resonance imaging- (MRI) based datasets. Methods and results Rabbit heart fixation during cardioplegic arrest and MRI were followed by serial sectioning of the whole heart and light-microscopic imaging of trichrome-stained tissue. Segmentation techniques developed specifically for this project were applied to segment myocardial tissue in the MRI and histology datasets. In addition, histology slices were segmented into myocytes, connective tissue, and undefined. A bounding surface, containing the whole heart, was established for both MRI and histology. Volumes contained in the bounding surface (called ‘anatomical volume’), as well as that identified as containing any of the above tissue categories (called ‘morphological volume’), were calculated. The anatomical volume was 7.8 cm3 in MRI, and this reduced to 4.9 cm3 after histological processing, representing an ‘anatomical’ shrinkage by 37.2%. The morphological volume decreased by 48% between MRI and histology, highlighting the presence of additional tissue-level shrinkage (e.g. an increase in interstitial cleft space). The ratio of pixels classified as containing myocytes to pixels identified as non-myocytes was roughly 6:1 (61.6 vs. 9.8%; the remaining fraction of 28.6% was ‘undefined’). Conclusion Qualitative and quantitative differentiation between myocytes and connective tissue, using state-of-the-art high-resolution serial histology techniques, allows identification of cell-type distribution in whole-heart datasets. Comparison with MRI illustrates a pronounced reduction in anatomical and morphological volumes during histology processing. PMID:25362175
Fananapazir, Ghaneh; Bashir, Mustafa R; Marin, Daniele; Boll, Daniel T
2015-06-01
To evaluate the performance of a prototype, fully-automated post-processing solution for whole-liver and lobar segmentation based on MDCT datasets. A polymer liver phantom was used to assess accuracy of post-processing applications comparing phantom volumes determined via Archimedes' principle with MDCT segmented datasets. For the IRB-approved, HIPAA-compliant study, 25 patients were enrolled. Volumetry performance compared the manual approach with the automated prototype, assessing intraobserver variability, and interclass correlation for whole-organ and lobar segmentation using ANOVA comparison. Fidelity of segmentation was evaluated qualitatively. Phantom volume was 1581.0 ± 44.7 mL, manually segmented datasets estimated 1628.0 ± 47.8 mL, representing a mean overestimation of 3.0%, automatically segmented datasets estimated 1601.9 ± 0 mL, representing a mean overestimation of 1.3%. Whole-liver and segmental volumetry demonstrated no significant intraobserver variability for neither manual nor automated measurements. For whole-liver volumetry, automated measurement repetitions resulted in identical values; reproducible whole-organ volumetry was also achieved with manual segmentation, p(ANOVA) 0.98. For lobar volumetry, automated segmentation improved reproducibility over manual approach, without significant measurement differences for either methodology, p(ANOVA) 0.95-0.99. Whole-organ and lobar segmentation results from manual and automated segmentation showed no significant differences, p(ANOVA) 0.96-1.00. Assessment of segmentation fidelity found that segments I-IV/VI showed greater segmentation inaccuracies compared to the remaining right hepatic lobe segments. Automated whole-liver segmentation showed non-inferiority of fully-automated whole-liver segmentation compared to manual approaches with improved reproducibility and post-processing duration; automated dual-seed lobar segmentation showed slight tendencies for underestimating the right hepatic lobe volume and greater variability in edge detection for the left hepatic lobe compared to manual segmentation.
Mining volume measurement system
NASA Technical Reports Server (NTRS)
Heyman, Joseph Saul (Inventor)
1988-01-01
In a shaft with a curved or straight primary segment and smaller off-shooting segments, at least one standing wave is generated in the primary segment. The shaft has either an open end or a closed end and approximates a cylindrical waveguide. A frequency of a standing wave that represents the fundamental mode characteristic of the primary segment can be measured. Alternatively, a frequency differential between two successive harmonic modes that are characteristic of the primary segment can be measured. In either event, the measured frequency or frequency differential is characteristic of the length and thus the volume of the shaft based on length times the bore area.
Madder, Ryan D; VanOosterhout, Stacie; Klungle, David; Mulder, Abbey; Elmore, Matthew; Decker, Jeffrey M; Langholz, David; Boyden, Thomas F; Parker, Jessica; Muller, James E
2017-10-01
This study sought to determine the frequency of large lipid-rich plaques (LRP) in the coronary arteries of individuals with high coronary artery calcium scores (CACS) and to determine whether the CACS correlates with coronary lipid burden. Combined near-infrared spectroscopy and intravascular ultrasound was performed in 57 vessels in 20 asymptomatic individuals (90% on statins) with no prior history of coronary artery disease who had a screening CACS ≥300 Agatston units. Among 268 10-mm coronary segments, near-infrared spectroscopy images were analyzed for LRP, defined as a bright yellow block on the near-infrared spectroscopy block chemogram. Lipid burden was assessed as the lipid core burden index (LCBI), and large LRP were defined as a maximum LCBI in 4 mm ≥400. Vessel plaque volume was measured by quantitative intravascular ultrasound. Vessel-level CACS significantly correlated with plaque volume by intravascular ultrasound ( r =0.69; P <0.0001) but not with LCBI by near-infrared spectroscopy ( r =0.24; P =0.07). Despite a high CACS, no LRP was detected in 8 (40.0%) subjects. Large LRP having a maximum LCBI in 4 mm ≥400 were infrequent, found in only 5 (25.0%) of 20 subjects and in only 5 (1.9%) of 268 10-mm coronary segments analyzed. Among individuals with a CACS ≥300 Agatston units mostly on statins, CACS correlated with total plaque volume but not LCBI. This observation may have implications on coronary risk among individuals with a high CACS considering that it is coronary LRP, rather than calcification, that underlies the majority of acute coronary events. © 2017 American Heart Association, Inc.
Ahlgren, André; Wirestam, Ronnie; Petersen, Esben Thade; Ståhlberg, Freddy; Knutsson, Linda
2014-09-01
Quantitative perfusion MRI based on arterial spin labeling (ASL) is hampered by partial volume effects (PVEs), arising due to voxel signal cross-contamination between different compartments. To address this issue, several partial volume correction (PVC) methods have been presented. Most previous methods rely on segmentation of a high-resolution T1 -weighted morphological image volume that is coregistered to the low-resolution ASL data, making the result sensitive to errors in the segmentation and coregistration. In this work, we present a methodology for partial volume estimation and correction, using only low-resolution ASL data acquired with the QUASAR sequence. The methodology consists of a T1 -based segmentation method, with no spatial priors, and a modified PVC method based on linear regression. The presented approach thus avoids prior assumptions about the spatial distribution of brain compartments, while also avoiding coregistration between different image volumes. Simulations based on a digital phantom as well as in vivo measurements in 10 volunteers were used to assess the performance of the proposed segmentation approach. The simulation results indicated that QUASAR data can be used for robust partial volume estimation, and this was confirmed by the in vivo experiments. The proposed PVC method yielded probable perfusion maps, comparable to a reference method based on segmentation of a high-resolution morphological scan. Corrected gray matter (GM) perfusion was 47% higher than uncorrected values, suggesting a significant amount of PVEs in the data. Whereas the reference method failed to completely eliminate the dependence of perfusion estimates on the volume fraction, the novel approach produced GM perfusion values independent of GM volume fraction. The intra-subject coefficient of variation of corrected perfusion values was lowest for the proposed PVC method. As shown in this work, low-resolution partial volume estimation in connection with ASL perfusion estimation is feasible, and provides a promising tool for decoupling perfusion and tissue volume. Copyright © 2014 John Wiley & Sons, Ltd.
Le Troter, Arnaud; Fouré, Alexandre; Guye, Maxime; Confort-Gouny, Sylviane; Mattei, Jean-Pierre; Gondin, Julien; Salort-Campana, Emmanuelle; Bendahan, David
2016-04-01
Atlas-based segmentation is a powerful method for automatic structural segmentation of several sub-structures in many organs. However, such an approach has been very scarcely used in the context of muscle segmentation, and so far no study has assessed such a method for the automatic delineation of individual muscles of the quadriceps femoris (QF). In the present study, we have evaluated a fully automated multi-atlas method and a semi-automated single-atlas method for the segmentation and volume quantification of the four muscles of the QF and for the QF as a whole. The study was conducted in 32 young healthy males, using high-resolution magnetic resonance images (MRI) of the thigh. The multi-atlas-based segmentation method was conducted in 25 subjects. Different non-linear registration approaches based on free-form deformable (FFD) and symmetric diffeomorphic normalization algorithms (SyN) were assessed. Optimal parameters of two fusion methods, i.e., STAPLE and STEPS, were determined on the basis of the highest Dice similarity index (DSI) considering manual segmentation (MSeg) as the ground truth. Validation and reproducibility of this pipeline were determined using another MRI dataset recorded in seven healthy male subjects on the basis of additional metrics such as the muscle volume similarity values, intraclass coefficient, and coefficient of variation. Both non-linear registration methods (FFD and SyN) were also evaluated as part of a single-atlas strategy in order to assess longitudinal muscle volume measurements. The multi- and the single-atlas approaches were compared for the segmentation and the volume quantification of the four muscles of the QF and for the QF as a whole. Considering each muscle of the QF, the DSI of the multi-atlas-based approach was high 0.87 ± 0.11 and the best results were obtained with the combination of two deformation fields resulting from the SyN registration method and the STEPS fusion algorithm. The optimal variables for FFD and SyN registration methods were four templates and a kernel standard deviation ranging between 5 and 8. The segmentation process using a single-atlas-based method was more robust with DSI values higher than 0.9. From the vantage of muscle volume measurements, the multi-atlas-based strategy provided acceptable results regarding the QF muscle as a whole but highly variable results regarding individual muscle. On the contrary, the performance of the single-atlas-based pipeline for individual muscles was highly comparable to the MSeg, thereby indicating that this method would be adequate for longitudinal tracking of muscle volume changes in healthy subjects. In the present study, we demonstrated that both multi-atlas and single-atlas approaches were relevant for the segmentation of individual muscles of the QF in healthy subjects. Considering muscle volume measurements, the single-atlas method provided promising perspectives regarding longitudinal quantification of individual muscle volumes.
Golbaz, Isabelle; Ahlers, Christian; Goesseringer, Nina; Stock, Geraldine; Geitzenauer, Wolfgang; Prünte, Christian; Schmidt-Erfurth, Ursula Margarethe
2011-03-01
This study compared automatic- and manual segmentation modalities in the retina of healthy eyes using high-definition optical coherence tomography (HD-OCT). Twenty retinas in 20 healthy individuals were examined using an HD-OCT system (Carl Zeiss Meditec, Inc.). Three-dimensional imaging was performed with an axial resolution of 6 μm at a maximum scanning speed of 25,000 A-scans/second. Volumes of 6 × 6 × 2 mm were scanned. Scans were analysed using a matlab-based algorithm and a manual segmentation software system (3D-Doctor). The volume values calculated by the two methods were compared. Statistical analysis revealed a high correlation between automatic and manual modes of segmentation. The automatic mode of measuring retinal volume and the corresponding three-dimensional images provided similar results to the manual segmentation procedure. Both methods were able to visualize retinal and subretinal features accurately. This study compared two methods of assessing retinal volume using HD-OCT scans in healthy retinas. Both methods were able to provide realistic volumetric data when applied to raster scan sets. Manual segmentation methods represent an adequate tool with which to control automated processes and to identify clinically relevant structures, whereas automatic procedures will be needed to obtain data in larger patient populations. © 2009 The Authors. Journal compilation © 2009 Acta Ophthalmol.
Fetal brain volumetry through MRI volumetric reconstruction and segmentation
Estroff, Judy A.; Barnewolt, Carol E.; Connolly, Susan A.; Warfield, Simon K.
2013-01-01
Purpose Fetal MRI volumetry is a useful technique but it is limited by a dependency upon motion-free scans, tedious manual segmentation, and spatial inaccuracy due to thick-slice scans. An image processing pipeline that addresses these limitations was developed and tested. Materials and methods The principal sequences acquired in fetal MRI clinical practice are multiple orthogonal single-shot fast spin echo scans. State-of-the-art image processing techniques were used for inter-slice motion correction and super-resolution reconstruction of high-resolution volumetric images from these scans. The reconstructed volume images were processed with intensity non-uniformity correction and the fetal brain extracted by using supervised automated segmentation. Results Reconstruction, segmentation and volumetry of the fetal brains for a cohort of twenty-five clinically acquired fetal MRI scans was done. Performance metrics for volume reconstruction, segmentation and volumetry were determined by comparing to manual tracings in five randomly chosen cases. Finally, analysis of the fetal brain and parenchymal volumes was performed based on the gestational age of the fetuses. Conclusion The image processing pipeline developed in this study enables volume rendering and accurate fetal brain volumetry by addressing the limitations of current volumetry techniques, which include dependency on motion-free scans, manual segmentation, and inaccurate thick-slice interpolation. PMID:20625848
NASA Technical Reports Server (NTRS)
Ho, Evelyn L.; Schweiss, Robert J.
2008-01-01
The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP) Science Data Segment (SDS) will make daily data requests for approximately six terabytes of NPP science products for each of its six environmental assessment elements from the operational data providers. As a result, issues associated with duplicate data requests, data transfers of large volumes of diverse products, and data transfer failures raised concerns with respect to the network traffic and bandwidth consumption. The NPP SDS Data Depository and Distribution Element (SD3E) was developed to provide a mechanism for efficient data exchange, alleviate duplicate network traffic, and reduce operational costs.
Preliminary design approach for large high precision segmented reflectors
NASA Technical Reports Server (NTRS)
Mikulas, Martin M., Jr.; Collins, Timothy J.; Hedgepeth, John M.
1990-01-01
A simplified preliminary design capability for erectable precision segmented reflectors is presented. This design capability permits a rapid assessment of a wide range of reflector parameters as well as new structural concepts and materials. The preliminary design approach was applied to a range of precision reflectors from 10 meters to 100 meters in diameter while considering standard design drivers. The design drivers considered were: weight, fundamental frequency, launch packaging volume, part count, and on-orbit assembly time. For the range of parameters considered, on-orbit assembly time was identified as the major design driver. A family of modular panels is introduced which can significantly reduce the number of reflector parts and the on-orbit assembly time.
GPU-based relative fuzzy connectedness image segmentation.
Zhuge, Ying; Ciesielski, Krzysztof C; Udupa, Jayaram K; Miller, Robert W
2013-01-01
Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. The most common FC segmentations, optimizing an [script-l](∞)-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.
GPU-based relative fuzzy connectedness image segmentation
Zhuge, Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.
2013-01-01
Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology. PMID:23298094
Antony, Bhavna Josephine; Kim, Byung-Jin; Lang, Andrew; Carass, Aaron; Prince, Jerry L; Zack, Donald J
2017-01-01
The use of spectral-domain optical coherence tomography (SD-OCT) is becoming commonplace for the in vivo longitudinal study of murine models of ophthalmic disease. Longitudinal studies, however, generate large quantities of data, the manual analysis of which is very challenging due to the time-consuming nature of generating delineations. Thus, it is of importance that automated algorithms be developed to facilitate accurate and timely analysis of these large datasets. Furthermore, as the models target a variety of diseases, the associated structural changes can also be extremely disparate. For instance, in the light damage (LD) model, which is frequently used to study photoreceptor degeneration, the outer retina appears dramatically different from the normal retina. To address these concerns, we have developed a flexible graph-based algorithm for the automated segmentation of mouse OCT volumes (ASiMOV). This approach incorporates a machine-learning component that can be easily trained for different disease models. To validate ASiMOV, the automated results were compared to manual delineations obtained from three raters on healthy and BALB/cJ mice post LD. It was also used to study a longitudinal LD model, where five control and five LD mice were imaged at four timepoints post LD. The total retinal thickness and the outer retina (comprising the outer nuclear layer, and inner and outer segments of the photoreceptors) were unchanged the day after the LD, but subsequently thinned significantly (p < 0.01). The retinal nerve fiber-ganglion cell complex and the inner plexiform layers, however, remained unchanged for the duration of the study.
Lang, Andrew; Carass, Aaron; Prince, Jerry L.; Zack, Donald J.
2017-01-01
The use of spectral-domain optical coherence tomography (SD-OCT) is becoming commonplace for the in vivo longitudinal study of murine models of ophthalmic disease. Longitudinal studies, however, generate large quantities of data, the manual analysis of which is very challenging due to the time-consuming nature of generating delineations. Thus, it is of importance that automated algorithms be developed to facilitate accurate and timely analysis of these large datasets. Furthermore, as the models target a variety of diseases, the associated structural changes can also be extremely disparate. For instance, in the light damage (LD) model, which is frequently used to study photoreceptor degeneration, the outer retina appears dramatically different from the normal retina. To address these concerns, we have developed a flexible graph-based algorithm for the automated segmentation of mouse OCT volumes (ASiMOV). This approach incorporates a machine-learning component that can be easily trained for different disease models. To validate ASiMOV, the automated results were compared to manual delineations obtained from three raters on healthy and BALB/cJ mice post LD. It was also used to study a longitudinal LD model, where five control and five LD mice were imaged at four timepoints post LD. The total retinal thickness and the outer retina (comprising the outer nuclear layer, and inner and outer segments of the photoreceptors) were unchanged the day after the LD, but subsequently thinned significantly (p < 0.01). The retinal nerve fiber-ganglion cell complex and the inner plexiform layers, however, remained unchanged for the duration of the study. PMID:28817571
Beaton, L.; Mazzaferri, J.; Lalonde, F.; Hidalgo-Aguirre, M.; Descovich, D.; Lesk, M. R.; Costantino, S.
2015-01-01
We have developed a novel optical approach to determine pulsatile ocular volume changes using automated segmentation of the choroid, which, together with Dynamic Contour Tonometry (DCT) measurements of intraocular pressure (IOP), allows estimation of the ocular rigidity (OR) coefficient. Spectral Domain Optical Coherence Tomography (OCT) videos were acquired with Enhanced Depth Imaging (EDI) at 7Hz during ~50 seconds at the fundus. A novel segmentation algorithm based on graph search with an edge-probability weighting scheme was developed to measure choroidal thickness (CT) at each frame. Global ocular volume fluctuations were derived from frame-to-frame CT variations using an approximate eye model. Immediately after imaging, IOP and ocular pulse amplitude (OPA) were measured using DCT. OR was calculated from these peak pressure and volume changes. Our automated segmentation algorithm provides the first non-invasive method for determining ocular volume change due to pulsatile choroidal filling, and the estimation of the OR constant. Future applications of this method offer an important avenue to understanding the biomechanical basis of ocular pathophysiology. PMID:26137373
Cha, Kenny H.; Hadjiiski, Lubomir; Samala, Ravi K.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.
2016-01-01
Purpose: The authors are developing a computerized system for bladder segmentation in CT urography (CTU) as a critical component for computer-aided detection of bladder cancer. Methods: A deep-learning convolutional neural network (DL-CNN) was trained to distinguish between the inside and the outside of the bladder using 160 000 regions of interest (ROI) from CTU images. The trained DL-CNN was used to estimate the likelihood of an ROI being inside the bladder for ROIs centered at each voxel in a CTU case, resulting in a likelihood map. Thresholding and hole-filling were applied to the map to generate the initial contour for the bladder, which was then refined by 3D and 2D level sets. The segmentation performance was evaluated using 173 cases: 81 cases in the training set (42 lesions, 21 wall thickenings, and 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, and 13 normal bladders). The computerized segmentation accuracy using the DL likelihood map was compared to that using a likelihood map generated by Haar features and a random forest classifier, and that using our previous conjoint level set analysis and segmentation system (CLASS) without using a likelihood map. All methods were evaluated relative to the 3D hand-segmented reference contours. Results: With DL-CNN-based likelihood map and level sets, the average volume intersection ratio, average percent volume error, average absolute volume error, average minimum distance, and the Jaccard index for the test set were 81.9% ± 12.1%, 10.2% ± 16.2%, 14.0% ± 13.0%, 3.6 ± 2.0 mm, and 76.2% ± 11.8%, respectively. With the Haar-feature-based likelihood map and level sets, the corresponding values were 74.3% ± 12.7%, 13.0% ± 22.3%, 20.5% ± 15.7%, 5.7 ± 2.6 mm, and 66.7% ± 12.6%, respectively. With our previous CLASS with local contour refinement (LCR) method, the corresponding values were 78.0% ± 14.7%, 16.5% ± 16.8%, 18.2% ± 15.0%, 3.8 ± 2.3 mm, and 73.9% ± 13.5%, respectively. Conclusions: The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have large difference in gray levels and provides a seamless mask to guide level set segmentation, which has been a problem for many gradient-based segmentation methods. Compared to our previous CLASS with LCR method, which required two user inputs to initialize the segmentation, DL-CNN with level sets achieved better segmentation performance while using a single user input. Compared to the Haar-feature-based likelihood map, the DL-CNN-based likelihood map could guide the level sets to achieve better segmentation. The results demonstrate the feasibility of our new approach of using DL-CNN in combination with level sets for segmentation of the bladder. PMID:27036584
Quantification of osteolytic bone lesions in a preclinical rat trial
NASA Astrophysics Data System (ADS)
Fränzle, Andrea; Bretschi, Maren; Bäuerle, Tobias; Giske, Kristina; Hillengass, Jens; Bendl, Rolf
2013-10-01
In breast cancer, most patients who die have developed bone metastases as the disease progresses. Bone metastases in breast cancer are mainly bone-destructive (osteolytic). To understand pathogenesis and to analyse response to different treatments, animal models, in our case rats, are examined. For assessment of treatment response to bone-remodelling therapies, exact segmentations of osteolytic lesions are needed. Manual segmentations are not only time-consuming but also lack reproducibility. Computerized segmentation tools are essential. In this paper we present an approach for the computerized quantification of osteolytic lesion volumes using a comparison to a healthy reference model. The presented qualitative and quantitative evaluation of the reconstructed bone volumes shows that the automatically segmented lesion volumes complete the missing bone in a reasonable way.
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Kuriakose, Jean W.; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Guo, Yanhui; Patel, Smita; Kazerooni, Ella A.
2012-03-01
Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or lung diseases, and crossing with other vessels. In this study, we developed a new vessel refinement method utilizing the curved planar reformation (CPR) technique combined with an optimal path finding method (MHES-CROP). The MHES-segmented vessels, straightened in the CPR volume, were refined using adaptive gray-level thresholding, where the local threshold was obtained from a least-squares estimation of a spline curve fitted to the gray levels of the vessel along the straightened volume. An optimal path finding method based on Dijkstra's algorithm was finally used to trace the correct path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in CTPA interpretation and used as the reference standard. The results show that, for the 32 test VOIs, the average percentage volume error relative to the reference standard was improved from 32.9 ± 10.2% using the MHES method to 9.9 ± 7.9% using the MHES-CROP method. The accuracy of vessel segmentation was improved significantly (p<0.05). The intraclass correlation coefficient (ICC) of the segmented vessel volume between the automated segmentation and the reference standard was improved from 0.919 to 0.988. Quantitative comparison of the MHES method and the MHES-CROP method with the reference standard was also evaluated by the Bland-Altman plot. This preliminary study indicates that the MHES-CROP method has the potential to improve PE detection.
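As a rough illustration of the Hessian-eigenvalue vessel enhancement underlying MHES, the sketch below computes a single-scale, Frangi-style vesselness map for a VOI; the published method is multiscale and hierarchical, and the scale, constants, and function name used here are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_single_scale(voi, sigma=1.0):
    """Single-scale tubularness from Hessian eigenvalues (bright vessels on dark background)."""
    voi = voi.astype(float)
    # second-order Gaussian derivatives form the Hessian at every voxel
    deriv = {}
    for i, j in [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]:
        d_order = [0, 0, 0]
        d_order[i] += 1
        d_order[j] += 1
        deriv[(i, j)] = gaussian_filter(voi, sigma, order=d_order)
    hess = np.zeros(voi.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            hess[..., i, j] = deriv[(min(i, j), max(i, j))]
    # eigenvalues sorted by magnitude: |l1| <= |l2| <= |l3|
    lam = np.linalg.eigvalsh(hess)
    idx = np.argsort(np.abs(lam), axis=-1)
    lam = np.take_along_axis(lam, idx, axis=-1)
    l1, l2, l3 = lam[..., 0], lam[..., 1], lam[..., 2]
    eps = 1e-10
    ra = np.abs(l2) / (np.abs(l3) + eps)                # distinguishes plates from lines
    rb = np.abs(l1) / (np.sqrt(np.abs(l2 * l3)) + eps)  # distinguishes blobs from lines
    s = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)            # overall second-order structure
    c = 0.5 * s.max() + eps
    v = (1 - np.exp(-ra ** 2 / 0.5)) * np.exp(-rb ** 2 / 0.5) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[(l2 > 0) | (l3 > 0)] = 0.0  # keep bright tubular structures only
    return v
```

The MHES-CROP refinement (CPR straightening, spline-based adaptive thresholding, and Dijkstra path tracing) builds on such a map and is not shown.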
Moeskops, Pim; de Bresser, Jeroen; Kuijf, Hugo J; Mendrik, Adriënne M; Biessels, Geert Jan; Pluim, Josien P W; Išgum, Ivana
2018-01-01
Automatic segmentation of brain tissues and white matter hyperintensities of presumed vascular origin (WMH) in MRI of older patients is widely described in the literature. Although brain abnormalities and motion artefacts are common in this age group, most segmentation methods are not evaluated in a setting that includes these items. In the present study, our tissue segmentation method for brain MRI was extended and evaluated for additional WMH segmentation. Furthermore, our method was evaluated in two large cohorts with a realistic variation in brain abnormalities and motion artefacts. The method uses a multi-scale convolutional neural network with a T1-weighted image, a T2-weighted fluid attenuated inversion recovery (FLAIR) image and a T1-weighted inversion recovery (IR) image as input. The method automatically segments white matter (WM), cortical grey matter (cGM), basal ganglia and thalami (BGT), cerebellum (CB), brain stem (BS), lateral ventricular cerebrospinal fluid (lvCSF), peripheral cerebrospinal fluid (pCSF), and WMH. Our method was evaluated quantitatively with images publicly available from the MRBrainS13 challenge (n = 20), quantitatively and qualitatively in relatively healthy older subjects (n = 96), and qualitatively in patients from a memory clinic (n = 110). The method can accurately segment WMH (Overall Dice coefficient in the MRBrainS13 data of 0.67) without compromising performance for tissue segmentations (Overall Dice coefficients in the MRBrainS13 data of 0.87 for WM, 0.85 for cGM, 0.82 for BGT, 0.93 for CB, 0.92 for BS, 0.93 for lvCSF, 0.76 for pCSF). Furthermore, the automatic WMH volumes showed a high correlation with manual WMH volumes (Spearman's ρ = 0.83 for relatively healthy older subjects). In both cohorts, our method produced reliable segmentations (as determined by a human observer) in most images (relatively healthy/memory clinic: tissues 88%/77% reliable, WMH 85%/84% reliable) despite various degrees of brain abnormalities and motion artefacts. In conclusion, this study shows that a convolutional neural network-based segmentation method can accurately segment brain tissues and WMH in MR images of older patients with varying degrees of brain abnormalities and motion artefacts.
Assessing the Effects of Software Platforms on Volumetric Segmentation of Glioblastoma
Dunn, William D.; Aerts, Hugo J.W.L.; Cooper, Lee A.; Holder, Chad A.; Hwang, Scott N.; Jaffe, Carle C.; Brat, Daniel J.; Jain, Rajan; Flanders, Adam E.; Zinn, Pascal O.; Colen, Rivka R.; Gutman, David A.
2017-01-01
Background Radiological assessments of biologically relevant regions in glioblastoma have been associated with genotypic characteristics, implying a potential role in personalized medicine. Here, we assess the reproducibility and association with survival of two volumetric segmentation platforms and explore how methodology could impact subsequent interpretation and analysis. Methods Post-contrast T1- and T2-weighted FLAIR MR images of 67 TCGA patients were segmented into five distinct compartments (necrosis, contrast-enhancement, FLAIR, post contrast abnormal, and total abnormal tumor volumes) by two quantitative image segmentation platforms - 3D Slicer and a method based on Velocity AI and FSL. We investigated the internal consistency of each platform by correlation statistics, association with survival, and concordance with consensus neuroradiologist ratings using ordinal logistic regression. Results We found high correlations between the two platforms for FLAIR, post contrast abnormal, and total abnormal tumor volumes (Spearman's r(67) = 0.952, 0.959, and 0.969, respectively). Only modest agreement was observed for necrosis and contrast-enhancement volumes (r(67) = 0.693 and 0.773, respectively), likely arising from differences in manual and automated segmentation methods of these regions by 3D Slicer and Velocity AI/FSL, respectively. Survival analysis based on AUC revealed significant predictive power of both platforms for the following volumes: contrast-enhancement, post contrast abnormal, and total abnormal tumor volumes. Finally, ordinal logistic regression demonstrated correspondence to manual ratings for several features. Conclusion Tumor volume measurements from both volumetric platforms produced highly concordant and reproducible estimates across platforms for general features. As automated or semi-automated volumetric measurements replace manual linear or area measurements, it will become increasingly important to keep in mind that measurement differences between segmentation platforms for more detailed features could influence downstream survival or radiogenomic analyses. PMID:29600296
Lundblad, Märit; Lönnqvist, Per-Arne; Eksborg, Staffan; Marhofer, Peter
2011-02-01
The aim of this prospective, age-stratified, observational study was to determine the cranial extent of spread of a large-volume (1.5 ml·kg⁻¹, ropivacaine 0.2%), single-shot caudal epidural injection using real-time ultrasonography. Fifty ASA I-III children were included in the study, stratified into three age groups: neonates, infants (1-12 months), and toddlers (1-4 years). The caudal blocks were performed during ultrasonographic observation of the spread of local anesthetic (LA) in the epidural space. A significant inverse relationship was found between age, weight, and height, and the maximal cranial level reached by 1.5 ml·kg⁻¹ of LA. In neonates, 93% of the blocks reached a cranial level of ≥Th12 vs 73% and 25% in infants and toddlers, respectively. Based on our data, a predictive equation of segmental spread was generated: Dose (ml/spinal segment) = 0.1539·(BW in kg) - 0.0937. This study found an inverse relationship between age, weight, and height and the number of segments covered by a caudal injection of 1.5 ml·kg⁻¹ of ropivacaine 0.2% in children 0-4 years of age. However, the cranial spread of local anesthetics within the spinal canal as assessed by immediate ultrasound visualization was found to be in poor agreement with previously published predictive equations that are based on actual cutaneous dermatomal testing. © 2010 Blackwell Publishing Ltd.
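Purely as an illustration of the quoted regression (not for clinical use), the snippet below turns the published dose-per-segment equation into an expected number of segments covered by a 1.5 ml·kg⁻¹ injection; the helper name and example weight are arbitrary:

```python
def expected_segmental_spread(body_weight_kg, injected_ml_per_kg=1.5):
    # Dose (ml per spinal segment) = 0.1539 * BW(kg) - 0.0937  (equation from the study)
    ml_per_segment = 0.1539 * body_weight_kg - 0.0937
    total_volume_ml = injected_ml_per_kg * body_weight_kg
    return total_volume_ml / ml_per_segment

# e.g. a 10 kg infant: 15 ml injected, about 1.45 ml per segment -> roughly 10 segments
print(round(expected_segmental_spread(10.0), 1))
```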
Large-Scale Propagation of Ultrasound in a 3-D Breast Model Based on High-Resolution MRI Data
Tillett, Jason C.; Metlay, Leon A.; Waag, Robert C.
2010-01-01
A 40 × 35 × 25-mm3 specimen of human breast consisting mostly of fat and connective tissue was imaged using a 3-T magnetic resonance scanner. The resolutions in the image plane and in the orthogonal direction were 130 μm and 150 μm, respectively. Initial processing to prepare the data for segmentation consisted of contrast inversion, interpolation, and noise reduction. Noise reduction used a multilevel bidirectional median filter to preserve edges. The volume of data was segmented into regions of fat and connective tissue by using a combination of local and global thresholding. Local thresholding was performed to preserve fine detail, while global thresholding was performed to minimize the interclass variance between voxels classified as background and voxels classified as object. After smoothing the data to avoid aliasing artifacts, the segmented data volume was visualized using iso-surfaces. The isosurfaces were enhanced using transparency, lighting, shading, reflectance, and animation. Computations of pulse propagation through the model illustrate its utility for the study of ultrasound aberration. The results show the feasibility of using the described combination of methods to demonstrate tissue morphology in a form that provides insight about the way ultrasound beams are aberrated in three dimensions by tissue. PMID:20172794
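The combined local/global thresholding step described in this record can be sketched as follows; implementing the global threshold with Otsu's class-variance criterion and the local threshold as a neighbourhood mean are assumptions about the exact approach:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def otsu_threshold(img, nbins=256):
    """Global threshold by Otsu's criterion (maximal between-class variance)."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)              # voxels at or below each candidate threshold
    w1 = w0[-1] - w0                  # voxels above
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.maximum(w0, 1)
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)
    between_class_var = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between_class_var)]

def segment_tissue(volume, window=31):
    """Keep voxels that exceed both the global threshold and a local mean threshold."""
    t_global = otsu_threshold(volume)
    local_mean = uniform_filter(volume.astype(float), size=window)
    return (volume > t_global) & (volume > local_mean)
```

The conjunction of the two masks is only one plausible way to combine the local and global decisions; the published pipeline may blend them differently.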
Large-scale propagation of ultrasound in a 3-D breast model based on high-resolution MRI data.
Salahura, Gheorghe; Tillett, Jason C; Metlay, Leon A; Waag, Robert C
2010-06-01
A 40 × 35 × 25 mm³ specimen of human breast consisting mostly of fat and connective tissue was imaged using a 3-T magnetic resonance scanner. The resolutions in the image plane and in the orthogonal direction were 130 μm and 150 μm, respectively. Initial processing to prepare the data for segmentation consisted of contrast inversion, interpolation, and noise reduction. Noise reduction used a multilevel bidirectional median filter to preserve edges. The volume of data was segmented into regions of fat and connective tissue by using a combination of local and global thresholding. Local thresholding was performed to preserve fine detail, while global thresholding was performed to minimize the interclass variance between voxels classified as background and voxels classified as object. After smoothing the data to avoid aliasing artifacts, the segmented data volume was visualized using isosurfaces. The isosurfaces were enhanced using transparency, lighting, shading, reflectance, and animation. Computations of pulse propagation through the model illustrate its utility for the study of ultrasound aberration. The results show the feasibility of using the described combination of methods to demonstrate tissue morphology in a form that provides insight about the way ultrasound beams are aberrated in three dimensions by tissue.
Automatic atlas-based three-label cartilage segmentation from MR knee images
Shan, Liang; Zach, Christopher; Charles, Cecil; Niethammer, Marc
2016-01-01
Osteoarthritis (OA) is the most common form of joint disease and often characterized by cartilage changes. Accurate quantitative methods are needed to rapidly screen large image databases to assess changes in cartilage morphology. We therefore propose a new automatic atlas-based cartilage segmentation method for future automatic OA studies. Atlas-based segmentation methods have been demonstrated to be robust and accurate in brain imaging and therefore also hold high promise to allow for reliable and high-quality segmentations of cartilage. Nevertheless, atlas-based methods have not been well explored for cartilage segmentation. A particular challenge is the thinness of cartilage, its relatively small volume in comparison to surrounding tissue and the difficulty to locate cartilage interfaces – for example the interface between femoral and tibial cartilage. This paper focuses on the segmentation of femoral and tibial cartilage, proposing a multi-atlas segmentation strategy with non-local patch-based label fusion which can robustly identify candidate regions of cartilage. This method is combined with a novel three-label segmentation method which guarantees the spatial separation of femoral and tibial cartilage, and ensures spatial regularity while preserving the thin cartilage shape through anisotropic regularization. Our segmentation energy is convex and therefore guarantees globally optimal solutions. We perform an extensive validation of the proposed method on 706 images of the Pfizer Longitudinal Study. Our validation includes comparisons of different atlas segmentation strategies, different local classifiers, and different types of regularizers. To compare to other cartilage segmentation approaches we validate based on the 50 images of the SKI10 dataset. PMID:25128683
NASA Astrophysics Data System (ADS)
Agn, Mikael; Law, Ian; Munck af Rosenschöld, Per; Van Leemput, Koen
2016-03-01
We present a fully automated generative method for simultaneous brain tumor and organs-at-risk segmentation in multi-modal magnetic resonance images. The method combines an existing whole-brain segmentation technique with a spatial tumor prior, which uses convolutional restricted Boltzmann machines to model tumor shape. The method is not tuned to any specific imaging protocol and can simultaneously segment the gross tumor volume, peritumoral edema and healthy tissue structures relevant for radiotherapy planning. We validate the method on a manually delineated clinical data set of glioblastoma patients by comparing segmentations of gross tumor volume, brainstem and hippocampus. The preliminary results demonstrate the feasibility of the method.
SU-E-J-224: Multimodality Segmentation of Head and Neck Tumors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aristophanous, M; Yang, J; Beadle, B
2014-06-01
Purpose: Develop an algorithm that is able to automatically segment tumor volume in Head and Neck cancer by integrating information from CT, PET and MR imaging simultaneously. Methods: Twenty-three patients who were recruited under an adaptive radiotherapy protocol had MR, CT and PET/CT scans within 2 months prior to the start of radiotherapy. The patients had unresectable disease and were treated either with chemoradiotherapy or radiation therapy alone. Using the Velocity software, the PET/CT and MR (T1 weighted+contrast) scans were registered to the planning CT using deformable and rigid registration, respectively. The PET and MR images were then resampled according to the registration to match the planning CT. The resampled images, together with the planning CT, were fed into a multi-channel segmentation algorithm, which is based on Gaussian mixture models and solved with the expectation-maximization algorithm and Markov random fields. A rectangular region of interest (ROI) was manually placed to identify the tumor area and facilitate the segmentation process. The auto-segmented tumor contours were compared with the gross tumor volume (GTV) manually defined by the physician. The volume difference and Dice similarity coefficient (DSC) between the manual and auto-segmented GTV contours were calculated as the quantitative evaluation metrics. Results: The multimodality segmentation algorithm was applied to all 23 patients. The volumes of the auto-segmented GTV ranged from 18.4 cc to 32.8 cc. The average (range) volume difference between the manual and auto-segmented GTV was −42% (−32.8% to 63.8%). The average DSC value was 0.62, ranging from 0.39 to 0.78. Conclusion: An algorithm for the automated definition of tumor volume using multiple imaging modalities simultaneously was successfully developed and implemented for Head and Neck cancer. This development along with more accurate registration algorithms can aid physicians in the efforts to interpret the multitude of imaging information available in radiotherapy today. This project was supported by a grant from Varian Medical Systems.
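A minimal sketch of the multi-channel Gaussian-mixture step, without the Markov random field regularization used in the work, is given below; the use of scikit-learn and the rule of labelling the component with the highest mean PET uptake as tumour are assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment_roi(ct, pet, mr, roi, n_classes=2):
    """Cluster voxels inside a rectangular ROI using CT, PET and MR intensities."""
    feats = np.stack([ct[roi], pet[roi], mr[roi]], axis=1).astype(float)
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                          random_state=0).fit(feats)   # EM under the hood
    labels = gmm.predict(feats)
    tumour_class = int(np.argmax(gmm.means_[:, 1]))    # highest mean PET uptake (assumption)
    seg = np.zeros(ct.shape, dtype=bool)
    seg[roi] = labels == tumour_class
    return seg
```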
Anterior segment sparing to reduce charged particle radiotherapy complications in uveal melanoma
NASA Technical Reports Server (NTRS)
Daftari, I. K.; Char, D. H.; Verhey, L. J.; Castro, J. R.; Petti, P. L.; Meecham, W. J.; Kroll, S.; Blakely, E. A.; Chatterjee, A. (Principal Investigator)
1997-01-01
PURPOSE: The purpose of this investigation is to delineate the risk factors in the development of neovascular glaucoma (NVG) after helium-ion irradiation of uveal melanoma patients and to propose a treatment technique that may reduce this risk. METHODS AND MATERIALS: 347 uveal melanoma patients were treated with helium ions using a single-port treatment technique. Using univariate and multivariate statistics, the NVG complication rate was analyzed according to the percent of anterior chamber in the radiation field, tumor size, tumor location, sex, age, dose, and other risk factors. Several University of California San Francisco-Lawrence Berkeley National Laboratory (LBNL) patients in each size category (medium, large, and extralarge) were retrospectively replanned using two ports instead of a single port. By using appropriate polar and azimuthal gaze angles or by treating patients with two ports, the maximum dose to the anterior segment of the eye can often be reduced, although a larger volume of the anterior chamber may receive a lower dose with two ports than with single-port treatment. We hypothesize that this could reduce the level of complications that result from irradiation of the anterior chamber of the eye. Dose-volume histograms were calculated for the lens and compared between the single- and two-port techniques. RESULTS: NVG developed in 121 (35%) patients. The risk of NVG peaked between 1 and 2.5 years posttreatment. By univariate and multivariate analysis, the percent of lens in the field was strongly correlated with the development of NVG. Other contributing factors were tumor height, history of diabetes, and vitreous hemorrhage. Dose-volume histogram analysis of single-port vs. two-port techniques demonstrates that for some patients in the medium and large category tumor groups, a significant decrease in dose to the structures in the anterior segment of the eye could have been achieved with the use of two ports. CONCLUSION: The development of NVG after helium-ion irradiation is correlated with the amount of lens and anterior chamber in the treatment field, tumor height, proximity to the fovea, history of diabetes, and the development of vitreous hemorrhage. Although the influence of the higher LET deposition of helium ions is unclear, this study suggests that reducing the dose to the anterior segment of the eye may reduce NVG complications. Based on this retrospective analysis of LBNL patients, we have implemented techniques to reduce the amount of the anterior segment receiving a high dose in our new series of patients treated with protons using the cyclotron at the UC Davis Crocker Nuclear Laboratory (CNL).
Automatic initialization and quality control of large-scale cardiac MRI segmentations.
Albà, Xènia; Lekadir, Karim; Pereañez, Marco; Medrano-Gracia, Pau; Young, Alistair A; Frangi, Alejandro F
2018-01-01
Continuous advances in imaging technologies enable ever more comprehensive phenotyping of human anatomy and physiology. Concomitant reduction of imaging costs has resulted in widespread use of imaging in large clinical trials and population imaging studies. Magnetic Resonance Imaging (MRI), in particular, offers one-stop-shop multidimensional biomarkers of cardiovascular physiology and pathology. A wide range of analysis methods offer sophisticated cardiac image assessment and quantification for clinical and research studies. However, most methods have only been evaluated on relatively small databases often not accessible for open and fair benchmarking. Consequently, published performance indices are not directly comparable across studies and their translation and scalability to large clinical trials or population imaging cohorts is uncertain. Most existing techniques still rely on considerable manual intervention for the initialization and quality control of the segmentation process, becoming prohibitive when dealing with thousands of images. The contributions of this paper are three-fold. First, we propose a fully automatic method for initializing cardiac MRI segmentation, by using image features and random forests regression to predict an initial position of the heart and key anatomical landmarks in an MRI volume. In processing a full imaging database, the technique predicts the optimal corrective displacements and positions in relation to the initial rough intersections of the long and short axis images. Second, we introduce for the first time a quality control measure capable of identifying incorrect cardiac segmentations with no visual assessment. The method uses statistical, pattern and fractal descriptors in a random forest classifier to detect failures to be corrected or removed from subsequent statistical analysis. Finally, we validate these new techniques within a full pipeline for cardiac segmentation applicable to large-scale cardiac MRI databases. The results obtained based on over 1200 cases from the Cardiac Atlas Project show the promise of fully automatic initialization and quality control for population studies. Copyright © 2017 Elsevier B.V. All rights reserved.
Combining multi-atlas segmentation with brain surface estimation
NASA Astrophysics Data System (ADS)
Huo, Yuankai; Carass, Aaron; Resnick, Susan M.; Pham, Dzung L.; Prince, Jerry L.; Landman, Bennett A.
2016-03-01
Whole brain segmentation (with comprehensive cortical and subcortical labels) and cortical surface reconstruction are two essential techniques for investigating the human brain. The two tasks are typically conducted independently, however, which leads to spatial inconsistencies and hinders further integrated cortical analyses. To obtain self-consistent whole brain segmentations and surfaces, FreeSurfer segregates the subcortical and cortical segmentations before and after the cortical surface reconstruction. However, this "segmentation to surface to parcellation" strategy has shown limitation in various situations. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. To our knowledge, this is the first work that achieves the reliability of state-of-the-art multi-atlas segmentation and labeling methods together with accurate and consistent cortical surface reconstruction. Compared with previous methods, MaCRUISE has three features: (1) MaCRUISE obtains 132 cortical/subcortical labels simultaneously from a single multi-atlas segmentation before reconstructing volume consistent surfaces; (2) Fuzzy tissue memberships are combined with multi-atlas segmentations to address partial volume effects; (3) MaCRUISE reconstructs topologically consistent cortical surfaces by using the sulci locations from multi-atlas segmentation. Two data sets, one consisting of five subjects with expertly traced landmarks and the other consisting of 100 volumes from elderly subjects are used for validation. Compared with CRUISE, MaCRUISE achieves self-consistent whole brain segmentation and cortical reconstruction without compromising on surface accuracy. MaCRUISE is comparably accurate to FreeSurfer while achieving greater robustness across an elderly population.
NASA Astrophysics Data System (ADS)
Garg, Ishita; Karwoski, Ronald A.; Camp, Jon J.; Bartholmai, Brian J.; Robb, Richard A.
2005-04-01
Chronic obstructive pulmonary diseases (COPD) are debilitating conditions of the lung and are the fourth leading cause of death in the United States. Early diagnosis is critical for timely intervention and effective treatment. The ability to quantify particular imaging features of specific pathology and accurately assess progression or response to treatment with current imaging tools is relatively poor. The goal of this project was to develop automated segmentation techniques that would be clinically useful as computer-assisted diagnostic tools for COPD. The lungs were segmented using an optimized segmentation threshold and the trachea was segmented using a fixed threshold characteristic of air. The segmented images were smoothed by a morphological close operation using spherical elements of different sizes. The results were compared to other segmentation approaches using an optimized threshold to segment the trachea. Comparison of the segmentation results from 10 datasets showed that the method of trachea segmentation using a fixed air threshold followed by morphological closing with a spherical element of size 23x23x5 yielded the best results. Inclusion of a greater number of pulmonary vessels in the lung volume is important for the development of computer-assisted diagnostic tools because the physiological changes of COPD can result in quantifiable anatomic changes in pulmonary vessels. Using a fixed threshold to segment the trachea removed airways from the lungs to a better extent than using an optimized threshold. Preliminary measurements gathered from patient's CT scans suggest that segmented images can be used for accurate analysis of total lung volume and volumes of regional lung parenchyma. Additionally, reproducible segmentation allows for quantification of specific pathologic features, such as lower intensity pixels, which are characteristic of abnormal air spaces in diseases like emphysema.
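The fixed-threshold-plus-closing step for the trachea can be sketched as below; the -950 HU air threshold is an assumed value, the box-shaped 23x23x5 element stands in for the spherical element used in the study, and isolation of the trachea from other air-filled regions (e.g. region growing from a seed point) is omitted:

```python
import numpy as np
from scipy.ndimage import binary_closing

def close_airway_mask(ct_hu, air_threshold=-950.0):
    """Threshold air voxels and smooth the mask with a 23 x 23 x 5 morphological close."""
    air = ct_hu < air_threshold
    structure = np.ones((23, 23, 5), dtype=bool)
    return binary_closing(air, structure=structure)
```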
Combining Multi-atlas Segmentation with Brain Surface Estimation.
Huo, Yuankai; Carass, Aaron; Resnick, Susan M; Pham, Dzung L; Prince, Jerry L; Landman, Bennett A
2016-02-27
Whole brain segmentation (with comprehensive cortical and subcortical labels) and cortical surface reconstruction are two essential techniques for investigating the human brain. The two tasks are typically conducted independently, however, which leads to spatial inconsistencies and hinders further integrated cortical analyses. To obtain self-consistent whole brain segmentations and surfaces, FreeSurfer segregates the subcortical and cortical segmentations before and after the cortical surface reconstruction. However, this "segmentation to surface to parcellation" strategy has shown limitations in various situations. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. To our knowledge, this is the first work that achieves the reliability of state-of-the-art multi-atlas segmentation and labeling methods together with accurate and consistent cortical surface reconstruction. Compared with previous methods, MaCRUISE has three features: (1) MaCRUISE obtains 132 cortical/subcortical labels simultaneously from a single multi-atlas segmentation before reconstructing volume consistent surfaces; (2) Fuzzy tissue memberships are combined with multi-atlas segmentations to address partial volume effects; (3) MaCRUISE reconstructs topologically consistent cortical surfaces by using the sulci locations from multi-atlas segmentation. Two data sets, one consisting of five subjects with expertly traced landmarks and the other consisting of 100 volumes from elderly subjects are used for validation. Compared with CRUISE, MaCRUISE achieves self-consistent whole brain segmentation and cortical reconstruction without compromising on surface accuracy. MaCRUISE is comparably accurate to FreeSurfer while achieving greater robustness across an elderly population.
NASA Astrophysics Data System (ADS)
Patra Yosandha, Fiet; Adi, Kusworo; Edi Widodo, Catur
2017-06-01
In this research, the lung cancer target volume was calculated from computed tomography (CT) thorax images. The target volume calculation was performed for the treatment planning system in radiotherapy. The calculation of the target volume consists of gross tumor volume (GTV), clinical target volume (CTV), planning target volume (PTV) and organs at risk (OAR). The target volume was obtained by summing the target area on each slice and multiplying the result by the slice thickness. The areas were calculated using digital image processing techniques with an active contour segmentation method; this segmentation provides the contours from which the target volume is obtained. The calculated volumes are 577.2 cm3 for the GTV, 769.9 cm3 for the CTV, 877.8 cm3 for the PTV, 618.7 cm3 for OAR 1, 1,162 cm3 for OAR 2 right, and 1,597 cm3 for OAR 2 left. These values indicate that the image processing techniques developed can be implemented to calculate the lung cancer target volume based on CT thorax images. This research is expected to help doctors and medical physicists determine and contour the target volume quickly and precisely.
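The slice-wise volume computation described above amounts to the following sketch, where the mask arrays and spacings are assumed inputs taken from the DICOM headers:

```python
def target_volume_cm3(slice_masks, pixel_spacing_mm, slice_thickness_mm):
    """Sum the segmented area on each axial slice (NumPy boolean masks) and
    multiply by the slice thickness."""
    pixel_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    total_mm3 = sum(m.sum() * pixel_area_mm2 * slice_thickness_mm for m in slice_masks)
    return total_mm3 / 1000.0  # mm^3 -> cm^3
```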
NASA Astrophysics Data System (ADS)
Meng, Qier; Kitasaka, Takayuki; Oda, Masahiro; Mori, Kensaku
2017-03-01
Airway segmentation is an important step in analyzing chest CT volumes for computerized lung cancer detection, emphysema diagnosis, asthma diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining an integrated 3-D airway tree structure from a CT volume is quite a challenging task. This paper presents a novel airway segmentation method based on intensity structure analysis and bronchi shape structure analysis in volumes of interest (VOI). This method segments the bronchial regions by applying the cavity enhancement filter (CEF) to trace the bronchial tree structure from the trachea. It uses the CEF in each VOI to segment each branch and to predict the positions of the VOIs which envelop the bronchial regions at the next level. At the same time, leakage detection is performed to avoid leakage by analysing the pixel information and the shape information of airway candidate regions extracted in the VOI. Bronchial regions are finally obtained by unifying the extracted airway regions. The experimental results showed that the proposed method can extract most of the bronchial regions in each VOI and leads to good airway segmentation results.
Hanaoka, Shouhei; Masutani, Yoshitaka; Nemoto, Mitsutaka; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu; Hayashi, Naoto; Ohtomo, Kuni; Shimizu, Akinobu
2017-03-01
A fully automatic multiatlas-based method for segmentation of the spine and pelvis in a torso CT volume is proposed. A novel landmark-guided diffeomorphic demons algorithm is used to register a given CT image to multiple atlas volumes. This algorithm can utilize both grayscale image information and given landmark coordinate information optimally. The segmentation has four steps. Firstly, 170 bony landmarks are detected in the given volume. Using these landmark positions, an atlas selection procedure is performed to reduce the computational cost of the following registration. Then the chosen atlas volumes are registered to the given CT image. Finally, voxelwise label voting is performed to determine the final segmentation result. The proposed method was evaluated using 50 torso CT datasets as well as the public SpineWeb dataset. As a result, a mean distance error of [Formula: see text] and a mean Dice coefficient of [Formula: see text] were achieved for the whole spine and the pelvic bones, which are competitive with other state-of-the-art methods. From the experimental results, the usefulness of the proposed segmentation method was validated.
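The final voxelwise label-voting step can be sketched as a simple majority vote over the registered atlas label maps; any atlas weighting used in practice is omitted here:

```python
import numpy as np

def majority_vote(atlas_label_maps, n_labels):
    """atlas_label_maps: list of integer label volumes already registered to the target."""
    votes = np.zeros(atlas_label_maps[0].shape + (n_labels,), dtype=np.int32)
    for label_map in atlas_label_maps:
        for l in range(n_labels):
            votes[..., l] += (label_map == l)   # one vote per atlas at each voxel
    return votes.argmax(axis=-1)
```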
The pinwheel pupil discovery: exoplanet science & improved processing with segmented telescopes
NASA Astrophysics Data System (ADS)
Breckinridge, James Bernard
2018-01-01
In this paper, we show that by using a “pinwheel” architecture for the segmented primary mirror and curved supports for the secondary mirror, we can achieve a near uniform diffraction background in ground and space large telescope systems needed for high-SNR exoplanet science. Also, the point spread function will be nearly rotationally symmetric, enabling improved digital image reconstruction. Large (>4-m) aperture space telescopes are needed to characterize terrestrial exoplanets by direct imaging coronagraphy. Launch vehicle volume constraints mean that these apertures are segmented and deployed in space to form a large mirror aperture, which is masked by the gaps between the hexagonal segments and the shadows of the secondary support system. These gaps and shadows over the pupil result in an image plane point spread function that has bright spikes, which may mask or obscure exoplanets. These telescope artifacts mask faint exoplanets, making it necessary for the spacecraft to roll about the boresight and integrate again to make sure no planets are missed. This increases integration time and requires expensive spacecraft resources for the boresight roll. Currently, the LUVOIR and HabEx studies have several significant efforts to develop special-purpose A/O technology and to place complex absorbing apodizers over their hexagonal pupils to shape the unwanted diffracted light. These strong apodizers absorb light, decreasing system transmittance and reducing SNR. Implementing curved pupil obscurations will eliminate the need for the highly absorbing apodizers and thus result in higher SNR. Quantitative analyses of diffraction patterns that use the pinwheel architecture are compared to straight hex-segment edges with a straight-line secondary shadow mask, showing a gain of over a factor of 100 in background reduction. For the first time, astronomers are able to control and minimize image-plane diffraction background “noise”. This technology will enable 10-m segmented apertures to perform nearly the same as a 10-m monolithic filled aperture. The pinwheel pupil will enable a significant gain in exoplanet SNR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, P; Labby, Z; Bayliss, R A
Purpose: To develop a plan comparison tool that will ensure robustness and deliverability through analysis of baseline and online-adaptive radiotherapy plans using similarity metrics. Methods: The ViewRay MRIdian treatment planning system allows export of a plan file that contains plan and delivery information. A software tool was developed to read and compare two plans, providing information and metrics to assess their similarity. In addition to performing direct comparisons (e.g. demographics, ROI volumes, number of segments, total beam-on time), the tool computes and presents histograms of derived metrics (e.g. step-and-shoot segment field sizes, segment average leaf gaps). Such metrics were investigated for their ability to predict whether an online-adapted plan is reasonably similar to a baseline plan for which deliverability has already been established. Results: In the realm of online-adaptive planning, comparing ROI volumes offers a sanity check to verify observations found during contouring. Beyond ROI analysis, it has been found that simply editing contours and re-optimizing to adapt treatment can produce a delivery that is substantially different from the baseline plan (e.g. number of segments increased by 31%), with no changes in optimization parameters and only minor changes in anatomy. Currently the tool can quickly identify large omissions or deviations from baseline expectations. As our online-adaptive patient population increases, we will continue to develop and refine quantitative acceptance criteria for adapted plans and relate them to historical delivery QA measurements. Conclusion: The plan comparison tool is in clinical use and reports a wide range of comparison metrics, illustrating key differences between two plans. This independent check is accomplished in seconds and can be performed in parallel to other tasks in the online-adaptive workflow. Current use prevents large planning or delivery errors from occurring, and ongoing refinements will lead to increased assurance of plan quality.
NASA Astrophysics Data System (ADS)
Shahzad, Rahil; Bos, Daniel; Budde, Ricardo P. J.; Pellikaan, Karlijn; Niessen, Wiro J.; van der Lugt, Aad; van Walsum, Theo
2017-05-01
Early structural changes to the heart, including the chambers and the coronary arteries, provide important information on pre-clinical heart disease like cardiac failure. Currently, contrast-enhanced cardiac computed tomography angiography (CCTA) is the preferred modality for the visualization of the cardiac chambers and the coronaries. In clinical practice not every patient undergoes a CCTA scan; many patients receive only a non-contrast-enhanced calcium scoring CT scan (CTCS), which has a lower radiation dose and does not require the administration of a contrast agent. Quantifying cardiac structures in such images is challenging, as they lack the contrast present in CCTA scans. Such quantification would however be relevant, as it enables population-based studies with only a CTCS scan. The purpose of this work is therefore to investigate the feasibility of automatic segmentation and quantification of cardiac structures, viz. the whole heart, left atrium, left ventricle, right atrium, right ventricle and aortic root, from CTCS scans. A fully automatic multi-atlas-based segmentation approach is used to segment the cardiac structures. Results show that the overlap between the automatic segmentation and the reference standard has a Dice similarity coefficient of 0.91 on average for the cardiac chambers. The mean surface-to-surface distance error over all the cardiac structures is 1.4 ± 1.7 mm. The automatically obtained cardiac chamber volumes using the CTCS scans have an excellent correlation when compared to the volumes in corresponding CCTA scans; a Pearson correlation coefficient (R) of 0.95 is obtained. Our fully automatic method enables large-scale assessment of cardiac structures on non-contrast-enhanced CT scans.
NASA Technical Reports Server (NTRS)
Tseng, B. S.; Kasper, C. E.; Edgerton, V. R.
1994-01-01
The relationship between myonuclear number, cellular size, succinate dehydrogenase activity, and myosin type was examined in single fiber segments (n = 54; 9 ± 3 mm long) mechanically dissected from soleus and plantaris muscles of adult rats. One end of each fiber segment was stained for DNA before quantitative photometric analysis of succinate dehydrogenase activity; the other end was double immunolabeled with fast and slow myosin heavy chain monoclonal antibodies. Mean ± S.D. cytoplasmic volume/myonucleus ratio was higher in fast and slow plantaris fibers (112 ± 69 vs. 34 ± 21 × 10³ μm³) than fast and slow soleus fibers (40 ± 20 vs. 30 ± 14 × 10³ μm³), respectively. Slow fibers always had small volumes/myonucleus, regardless of fiber diameter, succinate dehydrogenase activity, or muscle of origin. In contrast, smaller diameter (< 70 μm) fast soleus and plantaris fibers with high succinate dehydrogenase activity appeared to have low volumes/myonucleus while larger diameter (> 70 μm) fast fibers with low succinate dehydrogenase activity always had large volumes/myonucleus. Slow soleus fibers had significantly greater numbers of myonuclei/mm than did either fast soleus or fast plantaris fibers (116 ± 51 vs. 55 ± 22 and 44 ± 23), respectively. These data suggest that the myonuclear domain is more limited in slow than fast fibers and in the fibers with a high, compared to a low, oxidative metabolic capability.
Superpixel guided active contour segmentation of retinal layers in OCT volumes
NASA Astrophysics Data System (ADS)
Bai, Fangliang; Gibson, Stuart J.; Marques, Manuel J.; Podoleanu, Adrian
2018-03-01
Retinal OCT image segmentation is a precursor to subsequent medical diagnosis by a clinician or machine learning algorithm. In the last decade, many algorithms have been proposed to detect retinal layer boundaries and simplify the image representation. Inspired by the recent success of superpixel methods for pre-processing natural images, we present a novel framework for segmentation of retinal layers in OCT volume data. In our framework, the region of interest (e.g. the fovea) is located using an adaptive-curve method. The cell layer boundaries are then robustly detected, first using 1D superpixels applied to A-scans and then by fitting active contours in B-scan images. Thereafter the 3D cell layer surfaces are efficiently segmented from the volume data. The framework was tested on healthy eye data and we show that it is capable of segmenting up to 12 layers. The experimental results demonstrate the effectiveness of the proposed method and indicate its robustness to low image resolution and intrinsic speckle noise.
Semisupervised learning using denoising autoencoders for brain lesion detection and segmentation.
Alex, Varghese; Vaidhya, Kiran; Thirunavukkarasu, Subramaniam; Kesavadas, Chandrasekharan; Krishnamurthi, Ganapathy
2017-10-01
The work explores the use of denoising autoencoders (DAEs) for brain lesion detection, segmentation, and false-positive reduction. Stacked denoising autoencoders (SDAEs) were pretrained using a large number of unlabeled patient volumes and fine-tuned with patches drawn from a limited number of patients ([Formula: see text], 40, 65). The results show negligible loss in performance even when SDAE was fine-tuned using 20 labeled patients. Low grade glioma (LGG) segmentation was achieved using a transfer learning approach in which a network pretrained with high grade glioma data was fine-tuned using LGG image patches. The networks were also shown to generalize well and provide good segmentation on unseen BraTS 2013 and BraTS 2015 test data. The manuscript also includes the use of a single layer DAE, referred to as novelty detector (ND). ND was trained to accurately reconstruct nonlesion patches. The reconstruction error maps of test data were used to localize lesions. The error maps were shown to assign unique error distributions to various constituents of the glioma, enabling localization. The ND learns the nonlesion brain accurately as it was also shown to provide good segmentation performance on ischemic brain lesions in images from a different database.
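A toy version of the single-layer denoising autoencoder / novelty-detector idea is sketched below in PyTorch; the patch size, hidden width and noise level are assumptions, not the published configuration:

```python
import torch
import torch.nn as nn

class PatchDAE(nn.Module):
    """Single hidden layer denoising autoencoder over flattened 2-D patches."""
    def __init__(self, patch_size=21, hidden=256):
        super().__init__()
        d = patch_size * patch_size
        self.encoder = nn.Sequential(nn.Linear(d, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, d)

    def forward(self, x, noise_std=0.1):
        noisy = x + noise_std * torch.randn_like(x)   # corrupt, then reconstruct
        return self.decoder(self.encoder(noisy))

def train_step(model, clean_patches, optimizer):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(clean_patches), clean_patches)
    loss.backward()
    optimizer.step()
    return loss.item()

def reconstruction_error(model, patches):
    """Per-patch error; high values flag patches the non-lesion model cannot explain."""
    with torch.no_grad():
        recon = model(patches, noise_std=0.0)
        return ((recon - patches) ** 2).mean(dim=1)
```

Trained only on non-lesion patches, the reconstruction-error map over a test volume acts as the lesion saliency signal described above.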
Automatic short axis orientation of the left ventricle in 3D ultrasound recordings
NASA Astrophysics Data System (ADS)
Pedrosa, João.; Heyde, Brecht; Heeren, Laurens; Engvall, Jan; Zamorano, Jose; Papachristidis, Alexandros; Edvardsen, Thor; Claus, Piet; D'hooge, Jan
2016-04-01
The recent advent of three-dimensional echocardiography has led to an increased interest from the scientific community in left ventricle segmentation frameworks for cardiac volume and function assessment. An automatic orientation of the segmented left ventricular mesh is an important step to obtain a point-to-point correspondence between the mesh and the cardiac anatomy. Furthermore, this would allow for an automatic division of the left ventricle into the standard 17 segments and, thus, fully automatic per-segment analysis, e.g. regional strain assessment. In this work, a method for fully automatic short axis orientation of the segmented left ventricle is presented. The proposed framework aims at detecting the inferior right ventricular insertion point. 211 three-dimensional echocardiographic images were used to validate this framework by comparison to manual annotation of the inferior right ventricular insertion point. A mean unsigned error of 8.05° ± 18.50° was found, whereas the mean signed error was 1.09°. Large deviations between the manual and automatic annotations (> 30°) only occurred in 3.79% of cases. The average computation time was 666 ms in a non-optimized MATLAB environment, which potentiates real-time application. In conclusion, a successful automatic real-time method for orientation of the segmented left ventricle is proposed.
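For reference, the signed and unsigned angular errors quoted above can be computed by wrapping the orientation differences to lie within ±180°, as in this short sketch:

```python
import numpy as np

def angular_errors(auto_deg, manual_deg):
    """Return the mean signed bias and mean unsigned error between two angle sets."""
    diff = (np.asarray(auto_deg) - np.asarray(manual_deg) + 180.0) % 360.0 - 180.0
    return diff.mean(), np.abs(diff).mean()
```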
Hamoud Al-Tamimi, Mohammed Sabbih; Sulong, Ghazali; Shuaib, Ibrahim Lutfi
2015-07-01
Resection of brain tumors is a tricky task in surgery due to its direct influence on the patients' survival rate. Determining the tumor resection extent and its complete information vis-à-vis volume and dimensions in pre- and post-operative Magnetic Resonance Images (MRI) requires accurate estimation and comparison. The active contour segmentation technique is used to segment brain tumors on pre-operative MR images using self-developed software. Tumor volume is acquired from its contours via alpha shape theory. A graphical user interface is developed for rendering, visualizing and estimating the volume of a brain tumor. The Internet Brain Segmentation Repository (IBSR) dataset is employed to analyze and determine the repeatability and reproducibility of tumor volume. Accuracy of the method is validated by comparing the estimated volume using the proposed method with that of the gold standard. Segmentation by the active contour technique is found to be capable of detecting the brain tumor boundaries. Furthermore, the volume description and visualization enable an interactive examination of tumor tissue and its surroundings. Our results demonstrate that alpha shape theory, in comparison with other existing standard methods, is superior for precise volumetric measurement of the tumor. Copyright © 2015 Elsevier Inc. All rights reserved.
Sanchis-Moysi, Joaquin; Idoate, Fernando; Izquierdo, Mikel; Calbet, Jose A; Dorado, Cecilia
2013-03-01
The aim was to determine the volume and degree of asymmetry of quadratus lumborum (QL), obliques, and transversus abdominis; the last two considered conjointly (OT), in tennis and soccer players. The volume of QL and OT was determined using magnetic resonance imaging in professional tennis and soccer players, and in non-active controls (n = 8, 14, and 6, respectively). In tennis players the hypertrophy of OT was limited to proximal segments (cephalic segments), while in soccer players it was similar along longitudinal axis. In tennis players the hypertrophy was asymmetric (18% greater volume in the non-dominant than in the dominant OT, p = 0.001), while in soccer players and controls both sides had similar volumes (p > 0.05). In controls, the non-dominant QL was 15% greater than that of the dominant (p = 0.049). Tennis and soccer players had similar volumes in both sides of QL. Tennis alters the dominant-to-non-dominant balance in the muscle volume of the lateral abdominal wall. In tennis the hypertrophy is limited to proximal segments and is greater in the non-dominant side. Soccer, however, is associated to a symmetric hypertrophy of the lateral abdominal wall. Tennis and soccer elicit an asymmetric hypertrophy of QL.
Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming
2018-02-19
Diffusion and perfusion magnetic resonance (MR) images can provide functional information about the tumour and enable more sensitive detection of the tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images including apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform the image volume into a fuzzy feature space. Based on the fuzzy fusion result of the three fuzzy feature spaces, regions with a high possibility of belonging to tumour were generated automatically. The auto-segmentations of tumour in structural MR images were added to the final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that the mean volume difference was 8.69% (±5.62%); the mean Dice's similarity coefficient (DSC) was 0.88 (±0.02); and the mean sensitivity and specificity of auto-segmentation were 0.87 (±0.04) and 0.98 (±0.01), respectively. High accuracy and efficiency can be achieved with the new method, which shows the potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
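The per-modality fuzzy membership and fusion idea can be sketched as below; the empirical-CDF membership model and the minimum-operator fusion are stand-ins (assumptions) for the paper's histogram-based fuzzy models and fusion rule:

```python
import numpy as np

def histogram_membership(volume, brain_mask, high_is_tumour=True):
    """Map voxel values to [0, 1] memberships via the empirical CDF over the brain."""
    vals = np.sort(volume[brain_mask])
    ranks = np.searchsorted(vals, volume, side="right") / vals.size
    return ranks if high_is_tumour else 1.0 - ranks

def fuse_memberships(adc_m, fa_m, rcbv_m, threshold=0.8):
    """Conservative (minimum) fusion of the three fuzzy feature spaces."""
    fused = np.minimum(np.minimum(adc_m, fa_m), rcbv_m)
    return fused, fused > threshold   # fused map and candidate tumour region
```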
Antila, Kari; Nieminen, Heikki J; Sequeiros, Roberto Blanco; Ehnholm, Gösta
2014-07-01
Up to 25% of women suffer from uterine fibroids (UF) that cause infertility, pain, and discomfort. MR-guided high intensity focused ultrasound (MR-HIFU) is an emerging technique for noninvasive, computer-guided thermal ablation of UFs. The volume of induced necrosis is a predictor of the success of the treatment. However, accurate volume assessment by hand can be time consuming, and quick tools produce biased results. Therefore, fast and reliable tools are required in order to estimate the technical treatment outcome during the therapy event so as to predict symptom relief. A novel technique has been developed for the segmentation and volume assessment of the treated region. Conventional algorithms typically require user interaction or a priori knowledge of the target. The developed algorithm exploits the treatment plan, the coordinates of the intended ablation, for fully automatic segmentation with no user input. A good similarity to an expert-segmented manual reference was achieved (Dice similarity coefficient = 0.880 ± 0.074). The average automatic segmentation time was 1.6 ± 0.7 min per patient, compared with on the order of tens of minutes when done manually. The results suggest that the segmentation algorithm developed, requiring no user input, provides a feasible and practical approach for the automatic evaluation of the boundary and volume of the HIFU-treated region.
Tooth segmentation system with intelligent editing for cephalometric analysis
NASA Astrophysics Data System (ADS)
Chen, Shoupu
2015-03-01
Cephalometric analysis is the study of the dental and skeletal relationship in the head, and it is used as an assessment and planning tool for improved orthodontic treatment of a patient. Conventional cephalometric analysis identifies bony and soft-tissue landmarks in 2D cephalometric radiographs, in order to diagnose facial features and abnormalities prior to treatment, or to evaluate the progress of treatment. Recent studies in orthodontics indicate that there are persistent inaccuracies and inconsistencies in the results provided using conventional 2D cephalometric analysis. Obviously, plane geometry is inappropriate for analyzing anatomical volumes and their growth; only a 3D analysis is able to analyze the three-dimensional, anatomical maxillofacial complex, which requires computing inertia systems for individual or groups of digitally segmented teeth from an image volume of a patient's head. For the study of 3D cephalometric analysis, the current paper proposes a system for semi-automatically segmenting teeth from a cone beam computed tomography (CBCT) volume with two distinct features, including an intelligent user-input interface for automatic background seed generation, and a graphics processing unit (GPU) acceleration mechanism for three-dimensional GrowCut volume segmentation. Results show a satisfying average DICE score of 0.92, with the use of the proposed tooth segmentation system, by 15 novice users who segmented a randomly sampled tooth set. The average GrowCut processing time is around one second per tooth, excluding user interaction time.
Nilsson, Henrik; Blomqvist, Lennart; Douglas, Lena; Nordell, Anders; Jacobsson, Hans; Hagen, Karin; Bergquist, Annika; Jonas, Eduard
2014-04-01
To evaluate dynamic hepatocyte-specific contrast-enhanced MRI (DHCE-MRI) for the assessment of global and segmental liver volume and function in patients with primary sclerosing cholangitis (PSC), and to explore the heterogeneous distribution of liver function in this patient group. Twelve patients with primary sclerosing cholangitis (PSC) and 20 healthy volunteers were examined using DHCE-MRI with Gd-EOB-DTPA. Segmental and total liver volume were calculated, and functional parameters (hepatic extraction fraction [HEF], input relative blood-flow [irBF], and mean transit time [MTT]) were calculated in each liver voxel using deconvolutional analysis. For each study subject, an incongruence score (IS) was constructed to describe the mismatch between segmental function and volume. Among patients, the liver function parameters were correlated to bile duct obstruction and to established scoring models for liver disease. Liver function was significantly more heterogeneously distributed in the patient group (IS 1.0 versus 0.4). There were significant correlations between biliary obstruction and segmental functional parameters (HEF rho -0.24; irBF rho -0.45), and the Mayo risk score correlated significantly with the total liver extraction capacity of Gd-EOB-DTPA (rho -0.85). The study demonstrates a new method to quantify total and segmental liver function using DHCE-MRI in patients with PSC. Copyright © 2013 Wiley Periodicals, Inc.
Yeom, Jae Min; Yum, Seong Soo; Liu, Yangang; ...
2017-04-20
Entrainment and mixing processes and their effects on cloud microphysics in the continental stratocumulus clouds observed in Oklahoma during the RACORO campaign are analyzed in the frame of homogeneous and inhomogeneous mixing concepts by combining the approaches of microphysical correlation, mixing diagram, and transition scale (number). A total of 110 horizontally penetrated cloud segments is analyzed in this paper. Mixing diagram and cloud microphysical relationship analyses show homogeneous mixing trait of positive relationship between liquid water content (L) and mean volume of droplets (V) (i.e., smaller droplets in more diluted parcel) in most cloud segments. Relatively small temperature and humidity differences between the entraining air from above the cloud top and cloudy air and relatively large turbulent dissipation rate are found to be responsible for this finding. The related scale parameters (i.e., transition length and transition scale number) are relatively large, which also indicates high likelihood of homogeneous mixing. Finally, clear positive relationship between L and vertical velocity (W) for some cloud segments is suggested to be evidence of vertical circulation mixing, which may further enhance the positive relationship between L and V created by homogeneous mixing.
NASA Astrophysics Data System (ADS)
Yeom, Jae Min; Yum, Seong Soo; Liu, Yangang; Lu, Chunsong
2017-09-01
Entrainment and mixing processes and their effects on cloud microphysics in the continental stratocumulus clouds observed in Oklahoma during the RACORO campaign are analyzed in the frame of homogeneous and inhomogeneous mixing concepts by combining the approaches of microphysical correlation, mixing diagram, and transition scale (number). A total of 110 horizontally penetrated cloud segments is analyzed. Mixing diagram and cloud microphysical relationship analyses show homogeneous mixing trait of positive relationship between liquid water content (L) and mean volume of droplets (V) (i.e., smaller droplets in more diluted parcel) in most cloud segments. Relatively small temperature and humidity differences between the entraining air from above the cloud top and cloudy air and relatively large turbulent dissipation rate are found to be responsible for this finding. The related scale parameters (i.e., transition length and transition scale number) are relatively large, which also indicates high likelihood of homogeneous mixing. Clear positive relationship between L and vertical velocity (W) for some cloud segments is suggested to be evidence of vertical circulation mixing, which may further enhance the positive relationship between L and V created by homogeneous mixing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeom, Jae Min; Yum, Seong Soo; Liu, Yangang
Entrainment and mixing processes and their effects on cloud microphysics in the continental stratocumulus clouds observed in Oklahoma during the RACORO campaign are analyzed in the frame of homogeneous and inhomogeneous mixing concepts by combining the approaches of microphysical correlation, mixing diagram, and transition scale (number). A total of 110 horizontally penetrated cloud segments is analyzed in this paper. Mixing diagram and cloud microphysical relationship analyses show homogeneous mixing trait of positive relationship between liquid water content (L) and mean volume of droplets (V) (i.e., smaller droplets in more diluted parcel) in most cloud segments. Relatively small temperature and humidity differences between the entraining air from above the cloud top and cloudy air and relatively large turbulent dissipation rate are found to be responsible for this finding. The related scale parameters (i.e., transition length and transition scale number) are relatively large, which also indicates high likelihood of homogeneous mixing. Finally, clear positive relationship between L and vertical velocity (W) for some cloud segments is suggested to be evidence of vertical circulation mixing, which may further enhance the positive relationship between L and V created by homogeneous mixing.
Rios Velazquez, Emmanuel; Meier, Raphael; Dunn, William D; Alexander, Brian; Wiest, Roland; Bauer, Stefan; Gutman, David A; Reyes, Mauricio; Aerts, Hugo J W L
2015-11-18
Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to sub-volumes manually defined by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from The Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (range (r): 0.4 - 0.86). Also, the auto and manual volumes showed similar correlation with VASARI features (auto r = 0.35, 0.43 and 0.36; manual r = 0.17, 0.67, 0.41, for contrast-enhancing, necrosis and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55-0.77 and 0.65, CI: 0.54-0.76), comparable to manually defined volumes (0.64, CI: 0.53-0.75 and 0.63, CI: 0.52-0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has potential in high-throughput medical imaging research.
Yang, Xiaofeng; Wu, Ning; Cheng, Guanghui; Zhou, Zhengyang; Yu, David S; Beitler, Jonathan J; Curran, Walter J; Liu, Tian
2014-12-01
To develop an automated magnetic resonance imaging (MRI) parotid segmentation method to monitor radiation-induced parotid gland changes in patients after head and neck radiation therapy (RT). The proposed method combines the atlas registration method, which captures the global variation of anatomy, with a machine learning technology, which captures the local statistical features, to automatically segment the parotid glands from the MRIs. The segmentation method consists of 3 major steps. First, an atlas (pre-RT MRI and manually contoured parotid gland mask) is built for each patient. A hybrid deformable image registration is used to map the pre-RT MRI to the post-RT MRI, and the transformation is applied to the pre-RT parotid volume. Second, the kernel support vector machine (SVM) is trained with the subject-specific atlas pair consisting of multiple features (intensity, gradient, and others) from the aligned pre-RT MRI and the transformed parotid volume. Third, the well-trained kernel SVM is used to differentiate the parotid from surrounding tissues in the post-RT MRIs by statistically matching multiple texture features. A longitudinal study of 15 patients undergoing head and neck RT was conducted: baseline MRI was acquired prior to RT, and the post-RT MRIs were acquired at 3-, 6-, and 12-month follow-up examinations. The resulting segmentations were compared with the physicians' manual contours. Successful parotid segmentation was achieved for all 15 patients (42 post-RT MRIs). The average percentage volume difference between the automated segmentations and the physicians' manual contours was 7.98% for the left parotid and 8.12% for the right parotid. The average volume overlap was 91.1% ± 1.6% for the left parotid and 90.5% ± 2.4% for the right parotid. The parotid gland volume reduction at follow-up was 25% at 3 months, 27% at 6 months, and 16% at 12 months. We have validated our automated parotid segmentation algorithm in a longitudinal study. This segmentation method may be useful in future studies to address radiation-induced xerostomia in head and neck radiation therapy. Copyright © 2014 Elsevier Inc. All rights reserved.
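As a rough, hedged illustration of the kernel SVM step described above (not the authors' implementation), the sketch below trains an RBF SVM on two toy per-voxel features, intensity and gradient magnitude, using a synthetic volume and an atlas-derived mask; all arrays, sizes and parameter values are made up.

```python
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

rng = np.random.default_rng(0)
image = rng.normal(size=(16, 16, 16))            # stand-in for the aligned pre-RT MRI
atlas_mask = np.zeros(image.shape, dtype=bool)   # stand-in for the transformed parotid mask
atlas_mask[5:11, 5:11, 5:11] = True
image[atlas_mask] += 2.0                         # give the "organ" a distinct intensity

gradient = ndimage.gaussian_gradient_magnitude(image, sigma=1.0)
features = np.stack([image.ravel(), gradient.ravel()], axis=1)   # per-voxel feature vectors
labels = atlas_mask.ravel()                                      # atlas-derived training labels

svm = SVC(kernel="rbf", gamma="scale").fit(features, labels)
predicted = svm.predict(features).reshape(image.shape)           # label every voxel
print("voxels labelled parotid:", int(predicted.sum()))
```

In the study the trained classifier is applied to the post-RT MRI rather than to the training image itself; the sketch reuses one synthetic volume only to stay self-contained.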
Rios Velazquez, Emmanuel; Aerts, Hugo J W L; Gu, Yuhua; Goldgof, Dmitry B; De Ruysscher, Dirk; Dekker, Andre; Korn, René; Gillies, Robert J; Lambin, Philippe
2012-11-01
To assess the clinical relevance of a semiautomatic CT-based ensemble segmentation method, by comparing it to pathology and to CT/PET manual delineations by five independent radiation oncologists in non-small cell lung cancer (NSCLC). For 20 NSCLC patients (stages Ib-IIIb) the primary tumor was delineated manually on CT/PET scans by five independent radiation oncologists and segmented using a CT based semi-automatic tool. Tumor volume and overlap fractions between manual and semiautomatic-segmented volumes were compared. All measurements were correlated with the maximal diameter on macroscopic examination of the surgical specimen. Imaging data are available on www.cancerdata.org. High overlap fractions were observed between the semi-automatically segmented volumes and the intersection (92.5±9.0, mean±SD) and union (94.2±6.8) of the manual delineations. No statistically significant differences in tumor volume were observed between the semiautomatic segmentation (71.4±83.2 cm(3), mean±SD) and manual delineations (81.9±94.1 cm(3); p=0.57). The maximal tumor diameter of the semiautomatic-segmented tumor correlated strongly with the macroscopic diameter of the primary tumor (r=0.96). Semiautomatic segmentation of the primary tumor on CT demonstrated high agreement with CT/PET manual delineations and strongly correlated with the macroscopic diameter considered as the "gold standard". This method may be used routinely in clinical practice and could be employed as a starting point for treatment planning, target definition in multi-center clinical trials or for high throughput data mining research. This method is particularly suitable for peripherally located tumors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Automatically measuring brain ventricular volume within PACS using artificial intelligence.
Yepes-Calderon, Fernando; Nelson, Marvin D; McComb, J Gordon
2018-01-01
The picture archiving and communications system (PACS) is currently the standard platform to manage medical images but lacks analytical capabilities. Staying within PACS, the authors have developed an automatic method to retrieve the medical data and access it at the voxel level, decrypted and uncompressed, which enables analytical capabilities without perturbing the system's daily operation. Additionally, the strategy is secure and vendor independent. Cerebral ventricular volume is important for the diagnosis and treatment of many neurological disorders. A significant change in ventricular volume is readily recognized, but subtle changes, especially over longer periods of time, may be difficult to discern. Clinical imaging protocols and parameters are often varied, making it difficult to use a general solution with standard segmentation techniques. Presented is a segmentation strategy based on an algorithm that uses four features extracted from the medical images to create a statistical estimator capable of determining ventricular volume. When compared with manual segmentations, the correlation was 94%, and accuracy is expected to improve further as additional data are incorporated. The machine learning strategy presented runs fully automatically within the PACS and can accurately determine the volume of any segmentable structure.
Partial volume segmentation in 3D of lesions and tissues in magnetic resonance images
NASA Astrophysics Data System (ADS)
Johnston, Brian; Atkins, M. Stella; Booth, Kellogg S.
1994-05-01
An important first step in diagnosis and treatment planning using tomographic imaging is differentiating and quantifying diseased as well as healthy tissue. One of the difficulties encountered in solving this problem to date has been distinguishing the partial volume constituents of each voxel in the image volume. Most proposed solutions to this problem involve analysis of planar images, in sequence, in two dimensions only. We have extended a model-based method of image segmentation which applies the technique of iterated conditional modes in three dimensions. A minimum of user intervention is required to train the algorithm. Partial volume estimates for each voxel in the image are obtained yielding fractional compositions of multiple tissue types for individual voxels. A multispectral approach is applied, where spatially registered data sets are available. The algorithm is simple and has been parallelized using a dataflow programming environment to reduce the computational burden. The algorithm has been used to segment dual echo MRI data sets of multiple sclerosis patients using lesions, gray matter, white matter, and cerebrospinal fluid as the partial volume constituents. The results of the application of the algorithm to these datasets are presented and compared to the manual lesion segmentation of the same data.
Singh, Ranjodh; Zhou, Zhiping; Tisnado, Jamie; Haque, Sofia; Peck, Kyung K; Young, Robert J; Tsiouris, Apostolos John; Thakur, Sunitha B; Souweidane, Mark M
2016-11-01
OBJECTIVE Accurately determining diffuse intrinsic pontine glioma (DIPG) tumor volume is clinically important. The aims of the current study were to 1) measure DIPG volumes using methods that require different degrees of subjective judgment; and 2) evaluate interobserver agreement of measurements made using these methods. METHODS Eight patients from a Phase I clinical trial testing convection-enhanced delivery (CED) of a therapeutic antibody were included in the study. Pre-CED, post-radiation therapy axial T2-weighted images were analyzed using 2 methods requiring high degrees of subjective judgment (picture archiving and communication system [PACS] polygon and Volume Viewer auto-contour methods) and 1 method requiring a low degree of subjective judgment (k-means clustering segmentation) to determine tumor volumes. Lin's concordance correlation coefficients (CCCs) were calculated to assess interobserver agreement. RESULTS The CCCs of measurements made by 2 observers with the PACS polygon and the Volume Viewer auto-contour methods were 0.9465 (lower 1-sided 95% confidence limit 0.8472) and 0.7514 (lower 1-sided 95% confidence limit 0.3143), respectively. Both were considered poor agreement. The CCC of measurements made using k-means clustering segmentation was 0.9938 (lower 1-sided 95% confidence limit 0.9772), which was considered substantial strength of agreement. CONCLUSIONS The poor interobserver agreement of PACS polygon and Volume Viewer auto-contour methods highlighted the difficulty in consistently measuring DIPG tumor volumes using methods requiring high degrees of subjective judgment. k-means clustering segmentation, which requires a low degree of subjective judgment, showed better interobserver agreement and produced tumor volumes with delineated borders.
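For reference, Lin's concordance correlation coefficient used above can be computed from paired volume measurements as in this minimal sketch; the formula is the standard one, but the observer values are invented and not taken from the study.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                      # population (biased) variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

obs1 = [10.2, 15.8, 7.4, 22.1, 18.0, 9.5, 30.3, 12.7]   # cm^3, observer 1 (illustrative)
obs2 = [11.0, 14.9, 8.1, 20.5, 19.2, 10.4, 28.8, 13.3]  # cm^3, observer 2 (illustrative)
print(f"CCC = {lin_ccc(obs1, obs2):.4f}")
```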
NASA Astrophysics Data System (ADS)
Xie, Shijie; Schweizer, Kenneth
Recently, Cheng, Sokolov and coworkers have discovered qualitatively new dynamic behavior (exceptionally large Tg and fragility increases, unusual thermal and viscoelastic responses) in polymer nanocomposites composed of nanoparticles comparable in size to a polymer segment which form physical bonds with both themselves and segments. We generalize the Elastically Collective Nonlinear Langevin Equation theory of deeply supercooled molecular and polymer liquids to study the cooperative activated hopping dynamics of this system based on the dynamic free energy surface concept. The theoretical calculations are consistent with segmental relaxation time measurements as a function of temperature and nanoparticle volume fraction, and also the nearly linear growth of Tg with NP loading; predictions are made for the influence of nonuniversal chemical effects. The theory suggests the alpha process involves strongly coupled activated motion of segments and nanoparticles, consistent with the observed negligible change of the heat capacity jump with filler loading. Based on cohesive energy calculations and transient network ideas, full structural relaxation is suggested to involve a second, slower bond dissociation process with distinctive features and implications.
A statistical method for lung tumor segmentation uncertainty in PET images based on user inference.
Zheng, Chaojie; Wang, Xiuying; Feng, Dagan
2015-01-01
PET has been widely accepted as an effective imaging modality for lung tumor diagnosis and treatment. However, standard criteria for delineating tumor boundary from PET have yet to be developed, largely owing to the relatively low quality of PET images, uncertain tumor boundary definition, and variety of tumor characteristics. In this paper, we propose a statistical solution to segmentation uncertainty on the basis of user inference. We first define the uncertainty segmentation band on the basis of a segmentation probability map constructed from the Random Walks (RW) algorithm; and then, based on the extracted features of the user inference, we use Principal Component Analysis (PCA) to formulate the statistical model for labeling the uncertainty band. We validated our method on 10 lung PET-CT phantom studies from the public RIDER collections [1] and 16 clinical PET studies where tumors were manually delineated by two experienced radiologists. The methods were validated using the Dice similarity coefficient (DSC) to measure the spatial volume overlap. Our method achieved an average DSC of 0.878 ± 0.078 on phantom studies and 0.835 ± 0.039 on clinical studies.
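For completeness, the Dice similarity coefficient used for validation is computed from two binary masks as in the short sketch below; the cubic masks are synthetic stand-ins, not study data.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

auto = np.zeros((50, 50, 50), bool);   auto[10:30, 10:30, 10:30] = True
manual = np.zeros((50, 50, 50), bool); manual[12:32, 12:32, 12:32] = True
print(f"DSC = {dice(auto, manual):.3f}")
```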
Automatic segmentation of left ventricle in cardiac cine MRI images based on deep learning
NASA Astrophysics Data System (ADS)
Zhou, Tian; Icke, Ilknur; Dogdas, Belma; Parimal, Sarayu; Sampath, Smita; Forbes, Joseph; Bagchi, Ansuman; Chin, Chih-Liang; Chen, Antong
2017-02-01
In developing treatment of cardiovascular diseases, short axis cine MRI has been used as a standard technique for understanding the global structural and functional characteristics of the heart, e.g. ventricle dimensions, stroke volume and ejection fraction. To conduct an accurate assessment, heart structures need to be segmented from the cine MRI images with high precision, which could be a laborious task when performed manually. Herein a fully automatic framework is proposed for the segmentation of the left ventricle from the slices of short axis cine MRI scans of porcine subjects using a deep learning approach. For training the deep learning models, which generally requires a large set of data, a public database of human cine MRI scans is used. Experiments on the 3150 cine slices of 7 porcine subjects have shown that when comparing the automatic and manual segmentations the mean slice-wise Dice coefficient is about 0.930, the point-to-curve error is 1.07 mm, and the mean slice-wise Hausdorff distance is around 3.70 mm, which demonstrates the accuracy and robustness of the proposed inter-species translational approach.
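A hedged sketch of one of the reported contour metrics, the symmetric Hausdorff distance, using SciPy's directed Hausdorff on two synthetic circular contours (radii and shift are arbitrary assumptions):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two (N, 2) contour point sets."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

theta = np.linspace(0, 2 * np.pi, 200)
auto_contour = np.c_[30 * np.cos(theta), 30 * np.sin(theta)]          # radius 30 px
manual_contour = np.c_[32 * np.cos(theta), 32 * np.sin(theta)] + 1.0  # radius 32 px, shifted 1 px
print(f"Hausdorff distance = {hausdorff(auto_contour, manual_contour):.2f} px")
```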
Optimal retinal cyst segmentation from OCT images
NASA Astrophysics Data System (ADS)
Oguz, Ipek; Zhang, Li; Abramoff, Michael D.; Sonka, Milan
2016-03-01
Accurate and reproducible segmentation of cysts and fluid-filled regions from retinal OCT images is an important step allowing quantification of the disease status, longitudinal disease progression, and response to therapy in wet-pathology retinal diseases. However, segmentation of fluid-filled regions from OCT images is a challenging task due to their inhomogeneous appearance, the unpredictability of their number, size and location, as well as the intensity profile similarity between such regions and certain healthy tissue types. While machine learning techniques can be beneficial for this task, they require large training datasets and are often over-fitted to the appearance models of specific scanner vendors. We propose a knowledge-based approach that leverages a carefully designed cost function and graph-based segmentation techniques to provide a vendor-independent solution to this problem. We illustrate the results of this approach on two publicly available datasets with a variety of scanner vendors and retinal disease status. Compared to a previous machine-learning based approach, the volume similarity error was dramatically reduced from 81.3+/-56.4% to 22.2+/-21.3% (paired t-test, p << 0.001).
NASA Astrophysics Data System (ADS)
Tsagaan, Baigalmaa; Abe, Keiichi; Goto, Masahiro; Yamamoto, Seiji; Terakawa, Susumu
2006-03-01
This paper presents a segmentation method of brain tissues from MR images, developed for our image-guided neurosurgery system under development. Our goal is to segment brain tissues for creating a biomechanical model. The proposed segmentation method is based on 3-D region growing and outperforms conventional approaches by stepwise usage of intensity similarities between voxels in conjunction with edge information. Since the intensity and the edge information are complementary to each other in the region-based segmentation, we use them twice by performing a coarse-to-fine extraction. First, the edge information in an appropriate neighborhood of the voxel being considered is examined to constrain the region growing. The expanded region of the first extraction result is then used as the domain for the next processing. The intensity and the edge information of the current voxel only are utilized in the final extraction. Before segmentation, the intensity parameters of the brain tissues as well as the partial volume effect are estimated using the expectation-maximization (EM) algorithm in order to provide an accurate data interpretation for the extraction. We tested the proposed method on T1-weighted MR images of the brain and evaluated segmentation effectiveness by comparing the results with ground truths. Meshes generated from the segmented brain volume using mesh-generation software are also shown in this paper.
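A minimal sketch of intensity-plus-edge constrained 3D region growing in the spirit of the method described above; the seed, tolerances and toy volume are assumptions, and the paper's coarse-to-fine and EM steps are omitted.

```python
from collections import deque
import numpy as np
from scipy import ndimage

def region_grow_3d(volume, seed, intensity_tol=0.3, edge_max=0.8):
    """Grow from a seed voxel while intensity stays near the seed and edges stay weak."""
    grad = ndimage.gaussian_gradient_magnitude(volume, sigma=1.0)
    seed_val = volume[seed]
    grown = np.zeros(volume.shape, bool)
    grown[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if any(c < 0 or c >= s for c, s in zip(n, volume.shape)) or grown[n]:
                continue
            if abs(volume[n] - seed_val) < intensity_tol and grad[n] < edge_max:
                grown[n] = True
                queue.append(n)
    return grown

vol = np.zeros((30, 30, 30)); vol[8:22, 8:22, 8:22] = 1.0   # toy "tissue" block
mask = region_grow_3d(vol, seed=(15, 15, 15))
print("grown voxels:", int(mask.sum()))
```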
Hori, Daisuke; Katsuragawa, Shigehiko; Murakami, Ryuuji; Hirai, Toshinori
2010-04-20
We propose a computerized method for semi-automated segmentation of the gross tumor volume (GTV) of a glioblastoma multiforme (GBM) on brain MR images for radiotherapy planning (RTP). Three-dimensional (3D) MR images of 28 cases with a GBM were used in this study. First, a sphere volume of interest (VOI) including the GBM was selected by clicking a part of the GBM region in the 3D image. Then, the sphere VOI was transformed to a two-dimensional (2D) image by use of a spiral-scanning technique. We employed active contour models (ACM) to delineate an optimal outline of the GBM in the transformed 2D image. After inverse transform of the optimal outline to the 3D space, a morphological filter was applied to smooth the shape of the 3D segmented region. For evaluation of our computerized method, we compared the computer output with manually segmented regions, which were obtained by a therapeutic radiologist using a manual tracking method. In evaluating our segmentation method, we employed the Jaccard similarity coefficient (JSC) and the true segmentation coefficient (TSC) in volumes between the computer output and the manually segmented region. The mean and standard deviation of JSC and TSC were 74.2+/-9.8% and 84.1+/-7.1%, respectively. Our segmentation method provided a relatively accurate outline for GBM and would be useful for radiotherapy planning.
NASA Astrophysics Data System (ADS)
Cheng, Guanghui; Yang, Xiaofeng; Wu, Ning; Xu, Zhijian; Zhao, Hongfu; Wang, Yuefeng; Liu, Tian
2013-02-01
Xerostomia (dry mouth), resulting from radiation damage to the parotid glands, is one of the most common and distressing side effects of head-and-neck cancer radiotherapy. Recent MRI studies have demonstrated that the volume reduction of parotid glands is an important indicator for radiation damage and xerostomia. In the clinic, parotid-volume evaluation is exclusively based on physicians' manual contours. However, manual contouring is time-consuming and prone to inter-observer and intra-observer variability. Here, we report a fully automated multi-atlas-based registration method for parotid-gland delineation in 3D head-and-neck MR images. The multi-atlas segmentation utilizes a hybrid deformable image registration to map the target subject to multiple patients' images, applies the transformation to the corresponding segmented parotid glands, and subsequently uses the multiple patient-specific pairs (head-and-neck MR image and transformed parotid-gland mask) to train support vector machine (SVM) to reach consensus to segment the parotid gland of the target subject. This segmentation algorithm was tested with head-and-neck MRIs of 5 patients following radiotherapy for the nasopharyngeal cancer. The average parotid-gland volume overlapped 85% between the automatic segmentations and the physicians' manual contours. In conclusion, we have demonstrated the feasibility of an automatic multi-atlas based segmentation algorithm to segment parotid glands in head-and-neck MR images.
Molar axis estimation from computed tomography images.
Dongxia Zhang; Yangzhou Gan; Zeyang Xia; Xinwen Zhou; Shoubin Liu; Jing Xiong; Guanglin Li
2016-08-01
Estimation of the tooth axis is needed for some clinical dental treatments. Existing methods require segmenting the tooth volume from Computed Tomography (CT) images and then estimating the axis from that volume. However, they may fail when estimating molar axes because tooth segmentation from CT images is challenging, and current segmentation methods may produce poor results, especially for angulated molars, which leads to failure of the axis estimation. To resolve this problem, this paper proposes a new method for molar axis estimation from CT images. The key innovation is that, instead of estimating the 3D axis of each molar from a segmented volume, the method estimates the 3D axis from two projection images. The method includes three steps. (1) The 3D image of each molar is projected onto two 2D image planes. (2) The molar contour is segmented and its 2D axis is extracted in each 2D projection image; Principal Component Analysis (PCA) and a modified symmetry axis detection algorithm are employed to extract the 2D axis from the segmented molar contour. (3) A 3D molar axis is obtained by combining the two 2D axes. Experimental results verified that the proposed method was effective in estimating molar axes from CT images.
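As a hedged illustration of steps (2)-(3), the sketch below extracts a 2D principal axis from each projected contour with PCA and merges the two axes into a single 3D direction; the merging rule and all contour data are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def principal_axis_2d(points):
    """Dominant direction (unit vector) of an (N, 2) contour point set via PCA."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

# Toy elongated molar contours in the XZ and YZ projection planes.
t = np.linspace(-1, 1, 100)
rng = np.random.default_rng(0)
contour_xz = np.c_[0.2 * t, 1.0 * t] + 0.01 * rng.normal(size=(100, 2))
contour_yz = np.c_[0.1 * t, 1.0 * t] + 0.01 * rng.normal(size=(100, 2))

ax_xz = principal_axis_2d(contour_xz)   # (dx, dz) components in the XZ plane
ax_yz = principal_axis_2d(contour_yz)   # (dy, dz) components in the YZ plane

# Assumed merging rule: scale each in-plane component by the shared z component.
axis_3d = np.array([ax_xz[0] / ax_xz[1], ax_yz[0] / ax_yz[1], 1.0])
axis_3d /= np.linalg.norm(axis_3d)
print("estimated 3D molar axis:", np.round(axis_3d, 3))
```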
Merging Surface Reconstructions of Terrestrial and Airborne LIDAR Range Data
2009-05-19
Mangan and R. Whitaker. Partitioning 3D surface meshes using watershed segmentation. IEEE Trans. on Visualization and Computer Graphics, 5(4), pp. ... Jain, and A. Zakhor. Data Processing Algorithms for Generating Textured 3D Building Facade Meshes from Laser Scans and Camera Images. International ... acquired set of overlapping range images into a single mesh [2,9,10]. However, due to the volume of data involved in large scale urban modeling, data
Transportable Maps Software. Volume I.
1982-07-01
being collected at the beginning or end of the routine. This allows the interaction to be followed sequentially through its steps by anyone reading the ... flow is either simple sequential, simple conditional (the equivalent of 'if-then-else'), simple iteration ('DO-loop'), or the non-linear recursion ... input raster images to be in the form of sequential binary files with a SEGMENTED record type. The advantage of this form is that large logical records
SU-F-J-113: Multi-Atlas Based Automatic Organ Segmentation for Lung Radiotherapy Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J; Han, J; Ailawadi, S
Purpose: Normal organ segmentation is one time-consuming and labor-intensive step for lung radiotherapy treatment planning. The aim of this study is to evaluate the performance of a multi-atlas based segmentation approach for automatic organs at risk (OAR) delineation. Methods: Fifteen lung stereotactic body radiation therapy patients were randomly selected. Planning CT images and OAR contours of the heart - HT, aorta - AO, vena cava - VC, pulmonary trunk - PT, and esophagus - ES were exported and used as reference and atlas sets. For automatic organ delineation for a given target CT, 1) all atlas sets were deformably warped to the target CT, 2) the deformed sets were accumulated and normalized to produce organ probability density (OPD) maps, and 3) the OPD maps were converted to contours via image thresholding. The optimal threshold for each organ was empirically determined by comparing the auto-segmented contours against their respective reference contours. The delineated results were evaluated by measuring contour similarity metrics: DICE, mean distance (MD), and true detection rate (TD), where DICE=(intersection volume/sum of two volumes) and TD = {1.0 - (false positive + false negative)/2.0}. The Diffeomorphic Demons algorithm was employed for CT-CT deformable image registrations. Results: Optimal thresholds were determined to be 0.53 for HT, 0.38 for AO, 0.28 for PT, 0.43 for VC, and 0.31 for ES. The mean similarity metrics (DICE[%], MD[mm], TD[%]) were (88, 3.2, 89) for HT, (79, 3.2, 82) for AO, (75, 2.7, 77) for PT, (68, 3.4, 73) for VC, and (51, 2.7, 60) for ES. Conclusion: The investigated multi-atlas based approach produced reliable segmentations for the organs with large and relatively clear boundaries (HT and AO). However, the detection of small and narrow organs with diffused boundaries (ES) was challenging. Sophisticated atlas selection and multi-atlas fusion algorithms may further improve the quality of segmentations.
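A hedged sketch of the organ-probability-density step and the quoted metrics: the deformed atlas masks are averaged into a probability map, thresholded at an organ-specific value, and scored with DICE and TD. The jittered masks are synthetic, and the normalisation of the false-positive and false-negative terms in TD is an assumption, since the abstract does not state it.

```python
import numpy as np

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def true_detection_rate(auto, ref):
    # Assumed normalisation: FP and FN expressed as fractions of the reference volume.
    fp = np.logical_and(auto, ~ref).sum() / ref.sum()
    fn = np.logical_and(~auto, ref).sum() / ref.sum()
    return 1.0 - (fp + fn) / 2.0

rng = np.random.default_rng(0)
reference = np.zeros((40, 40, 40), bool)
reference[10:30, 10:30, 10:30] = True

# Fourteen "deformed atlas" masks: the reference jittered by a few voxels.
atlases = []
for _ in range(14):
    shift = tuple(int(s) for s in rng.integers(-2, 3, size=3))
    atlases.append(np.roll(reference, shift, axis=(0, 1, 2)))

probability_map = np.mean(atlases, axis=0)   # organ probability density (OPD) map
auto = probability_map >= 0.53               # e.g. the reported heart threshold
print(f"DICE = {100 * dice(auto, reference):.1f}%  TD = {100 * true_detection_rate(auto, reference):.1f}%")
```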
Garson, Christopher D; Li, Bing; Acton, Scott T; Hossack, John A
2008-06-01
The active surface technique using gradient vector flow allows semi-automated segmentation of ventricular borders. The accuracy of the algorithm depends on the optimal selection of several key parameters. We investigated the use of conservation of myocardial volume for quantitative assessment of each of these parameters using synthetic and in vivo data. We predicted that for a given set of model parameters, strong conservation of volume would correlate with accurate segmentation. The metric was most useful when applied to the gradient vector field weighting and temporal step-size parameters, but less effective in guiding an optimal choice of the active surface tension and rigidity parameters.
NASA Astrophysics Data System (ADS)
Zhu, Weifang; Zhang, Li; Shi, Fei; Xiang, Dehui; Wang, Lirong; Guo, Jingyun; Yang, Xiaoling; Chen, Haoyu; Chen, Xinjian
2017-07-01
Cystoid macular edema (CME) and macular hole (MH) are the leading causes for visual loss in retinal diseases. The volume of the CMEs can be an accurate predictor for visual prognosis. This paper presents an automatic method to segment the CMEs from the abnormal retina with coexistence of MH in three-dimensional optical coherence tomography images. The proposed framework consists of preprocessing and CMEs segmentation. The preprocessing part includes denoising, intraretinal layers segmentation and flattening, and MH and vessel silhouettes exclusion. In the CMEs segmentation, a three-step strategy is applied. First, an AdaBoost classifier trained with 57 features is employed to generate the initialization results. Second, an automated shape-constrained graph cut algorithm is applied to obtain the refined results. Finally, cyst area information is used to remove false positives (FPs). The method was evaluated on 19 eyes with coexistence of CMEs and MH from 18 subjects. The true positive volume fraction, FP volume fraction, dice similarity coefficient, and accuracy rate for CMEs segmentation were 81.0%±7.8%, 0.80%±0.63%, 80.9%±5.7%, and 99.7%±0.1%, respectively.
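A hedged sketch of the initialization step only: an AdaBoost voxel classifier trained on a few toy features (the paper uses 57), producing a per-voxel cyst probability that could seed a subsequent graph-cut refinement. The features, labels and parameters are all synthetic.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
n = 5000
# Toy per-voxel features: intensity, local mean intensity, depth below the retinal surface.
intensity = rng.normal(size=n)
local_mean = intensity + 0.1 * rng.normal(size=n)
depth = rng.uniform(0, 1, size=n)
X = np.c_[intensity, local_mean, depth]
y = (intensity < -0.5) & (depth > 0.3)          # toy "cyst" labelling rule

clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)
cyst_probability = clf.predict_proba(X)[:, 1]   # initialization handed to the graph-cut step
print("training accuracy:", round(clf.score(X, y), 3))
```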
NASA Astrophysics Data System (ADS)
Pura, John A.; Hamilton, Allison M.; Vargish, Geoffrey A.; Butman, John A.; Linguraru, Marius George
2011-03-01
Accurate ventricle volume estimates could improve the understanding and diagnosis of postoperative communicating hydrocephalus. For this category of patients, associated changes in ventricle volume can be difficult to identify, particularly over short time intervals. We present an automated segmentation algorithm that evaluates ventricle size from serial brain MRI examinations. The technique combines serial T1-weighted images to increase SNR and segments the mean image to generate a ventricle template. After pre-processing, the segmentation is initiated by a fuzzy c-means clustering algorithm to find the seeds used in a combination of fast marching methods and geodesic active contours. Finally, the ventricle template is propagated onto the serial data via non-linear registration. Serial volume estimates were obtained in an automated, robust, and accurate manner from difficult data.
NASA Astrophysics Data System (ADS)
Alboabidallah, Ahmed; Martin, John; Lavender, Samantha; Abbott, Victor
2017-09-01
Terrestrial Laser Scanning (TLS) processing for biomass mapping involves large data volumes, and often includes relatively slow 3D object fitting steps that increase the processing time. This study aimed to test new features that can speed up the overall processing time. A new type of 3D voxel is used, where the horizontal layers are parallel to the Digital Terrain Model. This voxel type allows procedures to extract tree diameters using just one layer, but still gives direct tree-height estimations. Layer intersection is used to emphasize the trunks as upright standing objects, which are detected in the spatially segmented intersection of the breast-height voxels and then extended upwards and downwards. The diameters were calculated by fitting elliptical cylinders to the laser points in the detected trunk segments. Non-trunk segments, used in sub-tree structures, were found using the parent-child relationships between successive layers. The branches were reconstructed by skeletonizing each sub-tree branch, and the biomass was distributed statistically amongst the weighted skeletons. The procedure was applied to nine plots within the UK. The average correlation coefficients between reconstructed and directly measured tree diameters, heights and branches were R2 = 0.92, 0.97 and 0.59 compared to 0.91, 0.95, and 0.63 when cylindrical fitting was used. The average time to apply the method was reduced from 5 h 18 min per plot for the conventional methods to 2 h 24 min when the same hardware and software libraries were used with the 3D voxels. These results indicate that this 3D voxel method can produce results of similar accuracy much more quickly, which would improve efficiency if applied to projects with large-volume TLS datasets.
Automated Segmentability Index for Layer Segmentation of Macular SD-OCT Images.
Lee, Kyungmoo; Buitendijk, Gabriëlle H S; Bogunovic, Hrvoje; Springelkamp, Henriët; Hofman, Albert; Wahle, Andreas; Sonka, Milan; Vingerling, Johannes R; Klaver, Caroline C W; Abràmoff, Michael D
2016-03-01
To automatically identify which spectral-domain optical coherence tomography (SD-OCT) scans will provide reliable automated layer segmentations for more accurate layer thickness analyses in population studies. Six hundred ninety macular SD-OCT image volumes (6.0 × 6.0 × 2.3 mm³) were obtained from one eye of each of 690 subjects (74.6 ± 9.7 [mean ± SD] years, 37.8% male) randomly selected from the population-based Rotterdam Study. The dataset consisted of 420 OCT volumes with successful automated retinal nerve fiber layer (RNFL) segmentations obtained from our previously reported graph-based segmentation method and 270 volumes with failed segmentations. To evaluate the reliability of the layer segmentations, we have developed a new metric, segmentability index SI, which is obtained from a random forest regressor based on 12 features using OCT voxel intensities, edge-based costs, and on-surface costs. The SI was compared with well-known quality indices, the quality index (QI) and the maximum tissue contrast index (mTCI), using receiver operating characteristic (ROC) analysis. The 95% confidence interval (CI) and the area under the curve (AUC) for the QI are 0.621 to 0.805 with AUC 0.713, for the mTCI 0.673 to 0.838 with AUC 0.756, and for the SI 0.784 to 0.920 with AUC 0.852. The SI AUC is significantly larger than either the QI or mTCI AUC (P < 0.01). The segmentability index SI is well suited to identify SD-OCT scans for which successful automated intraretinal layer segmentations can be expected. Interpreting the quantification of SD-OCT images requires the underlying segmentation to be reliable, but standard SD-OCT quality metrics do not predict which segmentations are reliable and which are not. The segmentability index SI presented in this study does allow reliable segmentations to be identified, which is important for more accurate layer thickness analyses in research and population studies.
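A hedged sketch of the segmentability-index idea: a random-forest regressor over 12 per-scan features is used as a score and evaluated with ROC analysis against a binary segmentation-success label. The features, the quality target and the class split are synthetic, chosen only to mirror the 420/270 proportion in the cohort.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_scans, n_features = 690, 12
X = rng.normal(size=(n_scans, n_features))            # stand-ins for intensity/edge/on-surface costs
quality = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_scans)
success = quality > np.quantile(quality, 270 / 690)   # ~270 "failed" scans, as in the cohort

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, quality)
segmentability_index = model.predict(X)
print("AUC =", round(roc_auc_score(success, segmentability_index), 3))
```

In practice the regressor would be trained and evaluated on separate scans; the sketch scores the training set only to stay short.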
Extreme liver resections with preservation of segment 4 only
Balzan, Silvio Marcio Pegoraro; Gava, Vinícius Grando; Magalhães, Marcelo Arbo; Dotto, Marcelo Luiz
2017-01-01
AIM To evaluate safety and outcomes of a new technique for extreme hepatic resections with preservation of segment 4 only. METHODS The new method of extreme liver resection consists of a two-stage hepatectomy. The first stage involves a right hepatectomy with middle hepatic vein preservation and induction of left lobe congestion; the second stage involves a left lobectomy. Thus, the remnant liver is represented by the segment 4 only (with or without segment 1, ± S1). Five patients underwent the new two-stage hepatectomy (congestion group). Data from volumetric assessment made before the second stage was compared with that of 10 matched patients (comparison group) that underwent a single-stage right hepatectomy with middle hepatic vein preservation. RESULTS The two stages of the procedure were successfully carried out on all 5 patients. For the congestion group, the overall volume of the left hemiliver had increased 103% (mean increase from 438 mL to 890 mL) at 4 wk after the first stage of the procedure. Hypertrophy of the future liver remnant (i.e., segment 4 ± S1) was higher than that of segments 2 and 3 (144% vs 54%, respectively, P < 0.05). The median remnant liver volume-to-body weight ratio was 0.3 (range, 0.28-0.40) before the first stage and 0.8 (range, 0.45-0.97) before the second stage. For the comparison group, the rate of hypertrophy of the left liver after right hepatectomy with middle hepatic vein preservation was 116% ± 34%. Hypertrophy rates of segments 2 and 3 (123% ± 47%) and of segment 4 (108% ± 60%, P > 0.05) were proportional. The mean preoperative volume of segments 2 and 3 was 256 ± 64 cc and increased to 572 ± 257 cc after right hepatectomy. Mean preoperative volume of segment 4 increased from 211 ± 75 cc to 439 ± 180 cc after surgery. CONCLUSION The proposed method for extreme hepatectomy with preservation of segment 4 only represents a technique that could allow complete resection of multiple bilateral liver metastases. PMID:28765703
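The hypertrophy percentages quoted above follow directly from the reported mean volumes; a two-line sketch of the arithmetic, growth (%) = 100 x (V_after - V_before) / V_before, using the means from the abstract:

```python
def hypertrophy_percent(v_before, v_after):
    """Relative volume growth in percent."""
    return 100.0 * (v_after - v_before) / v_before

print(round(hypertrophy_percent(438, 890)))   # left hemiliver, congestion group: ~103%
print(round(hypertrophy_percent(211, 439)))   # segment 4, comparison group: ~108%
```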
Zhou, Jinghao; Yan, Zhennan; Lasio, Giovanni; Huang, Junzhou; Zhang, Baoshe; Sharma, Navesh; Prado, Karl; D'Souza, Warren
2015-12-01
To resolve challenges in image segmentation in oncologic patients with severely compromised lung, we propose an automated right lung segmentation framework that uses a robust, atlas-based active volume model with a sparse shape composition prior. The robust atlas is achieved by combining the atlas with the output of sparse shape composition. Thoracic computed tomography images (n=38) from patients with lung tumors were collected. The right lung in each scan was manually segmented to build a reference training dataset against which the performance of the automated segmentation method was assessed. The quantitative results of this proposed segmentation method with sparse shape composition achieved mean Dice similarity coefficient (DSC) of (0.72, 0.81) with 95% CI, mean accuracy (ACC) of (0.97, 0.98) with 95% CI, and mean relative error (RE) of (0.46, 0.74) with 95% CI. Both qualitative and quantitative comparisons suggest that this proposed method can achieve better segmentation accuracy with less variance than other atlas-based segmentation methods in the compromised lung segmentation. Published by Elsevier Ltd.
Magma-maintained rift segmentation at continental rupture in the 2005 Afar dyking episode.
Wright, Tim J; Ebinger, Cindy; Biggs, Juliet; Ayele, Atalay; Yirgu, Gezahegn; Keir, Derek; Stork, Anna
2006-07-20
Seafloor spreading centres show a regular along-axis segmentation thought to be produced by a segmented magma supply in the passively upwelling mantle. On the other hand, continental rifts are segmented by large offset normal faults, and many lack magmatism. It is unclear how, when and where the ubiquitous segmented melt zones are emplaced during the continental rupture process. Between 14 September and 4 October 2005, 163 earthquakes (magnitudes greater than 3.9) and a volcanic eruption occurred within the approximately 60-km-long Dabbahu magmatic segment of the Afar rift, a nascent seafloor spreading centre in stretched continental lithosphere. Here we present a three-dimensional deformation field for the Dabbahu rifting episode derived from satellite radar data, which shows that the entire segment ruptured, making it the largest to have occurred on land in the era of satellite geodesy. Simple elastic modelling shows that the magmatic segment opened by up to 8 m, yet seismic rupture can account for only 8 per cent of the observed deformation. Magma was injected along a dyke between depths of 2 and 9 km, corresponding to a total intrusion volume of approximately 2.5 km3. Much of the magma appears to have originated from shallow chambers beneath Dabbahu and Gabho volcanoes at the northern end of the segment, where an explosive fissural eruption occurred on 26 September 2005. Although comparable in magnitude to the ten year (1975-84) Krafla events in Iceland, seismic data suggest that most of the Dabbahu dyke intrusion occurred in less than a week. Thus, magma intrusion via dyking, rather than segmented normal faulting, maintains and probably initiated the along-axis segmentation along this sector of the Nubia-Arabia plate boundary.
Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz
2016-01-01
This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We only use partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in Cuda and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.
Touj, Sara; Houle, Sébastien; Ramla, Djamel; Jeffrey-Gauthier, Renaud; Hotta, Harumi; Bronchti, Gilles; Martinoli, Maria-Grazia; Piché, Mathieu
2017-06-03
Chronic pain is associated with autonomic disturbance. However, specific effects of chronic back pain on sympathetic regulation remain unknown. Chronic pain is also associated with structural changes in the anterior cingulate cortex (ACC), which may be linked to sympathetic dysregulation. The aim of this study was to determine whether sympathetic regulation and ACC surface and volume are affected in a rat model of chronic back pain, in which complete Freund Adjuvant (CFA) is injected in back muscles. Sympathetic regulation was assessed with renal blood flow (RBF) changes induced by electrical stimulation of a hind paw, while ACC structure was examined by measuring cortical surface and volume. RBF changes and ACC volume were compared between control rats and rats injected with CFA in back muscles segmental (T10) to renal sympathetic innervation or not (T2). In rats with CFA, chronic inflammation was observed in the affected muscles in addition to increased nuclear factor-kappa B (NF-kB) protein expression in corresponding spinal cord segments (p=0.01) as well as decreased ACC volume (p<0.05). In addition, intensity-dependent decreases in RBF during hind paw stimulation were attenuated by chronic pain at T2 (p's<0.05) and T10 (p's<0.05), but less so at T10 compared with T2 (p's<0.05). These results indicate that chronic back pain alters sympathetic functions through non-segmental mechanisms, possibly by altering descending regulatory pathways from ACC. Yet, segmental somato-sympathetic reflexes may compete with non-segmental processes depending on the back region affected by pain and according to the segmental organization of the sympathetic nervous system. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soultan, D; Murphy, J; James, C
2015-06-15
Purpose: To assess the accuracy of internal target volume (ITV) segmentation of lung tumors for treatment planning of simultaneous integrated boost (SIB) radiotherapy as seen in 4D PET/CT images, using a novel 3D-printed phantom. Methods: The insert mimics high PET tracer uptake in the core and 50% uptake in the periphery, by using a porous design at the periphery. A lung phantom with the insert was placed on a programmable moving platform. Seven breathing waveforms of ideal and patient-specific respiratory motion patterns were fed to the platform, and 4D PET/CT scans were acquired of each of them. CT images were binned into 10 phases, and PET images were binned into 5 phases following the clinical protocol. Two scenarios were investigated for segmentation: a gate 30–70 window, and no gating. The radiation oncologist contoured the outer ITV of the porous insert on CT images, while the internal void volume with 100% uptake was contoured on PET images because it was indistinguishable from the outer volume in CT images. Segmented ITVs were compared to the expected volumes based on known target size and motion. Results: 3 ideal breathing patterns, 2 regular-breathing patient waveforms, and 2 irregular-breathing patient waveforms were used for this study. 18F-FDG was used as the PET tracer. The segmented ITVs from CT closely matched the expected motion for both no gating and the gate 30–70 window, with disagreement of the contoured ITV with respect to the expected volume not exceeding 13%. PET contours were seen to overestimate volumes in all the cases, up to more than 40%. Conclusion: 4DPET images of a novel 3D printed phantom designed to mimic different uptake values were obtained. 4DPET contours overestimated ITV volumes in all cases, while 4DCT contours matched expected ITV volume values. Investigation of the cause and effects of the discrepancies is ongoing.
A combined learning algorithm for prostate segmentation on 3D CT images.
Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Schuster, David M; Fei, Baowei
2017-11-01
Segmentation of the prostate on CT images has many applications in the diagnosis and treatment of prostate cancer. Because of the low soft-tissue contrast on CT images, prostate segmentation is a challenging task. A learning-based segmentation method is proposed for the prostate on three-dimensional (3D) CT images. We combine population-based and patient-based learning methods for segmenting the prostate on CT images. Population data can provide useful information to guide the segmentation processing. Because of inter-patient variations, patient-specific information is particularly useful to improve the segmentation accuracy for an individual patient. In this study, we combine a population learning method and a patient-specific learning method to improve the robustness of prostate segmentation on CT images. We train a population model based on the data from a group of prostate patients. We also train a patient-specific model based on the data of the individual patient and incorporate the information as marked by the user interaction into the segmentation processing. We calculate the similarity between the two models to obtain applicable population and patient-specific knowledge to compute the likelihood of a pixel belonging to the prostate tissue. A new adaptive threshold method is developed to convert the likelihood image into a binary image of the prostate, and thus complete the segmentation of the gland on CT images. The proposed learning-based segmentation algorithm was validated using 3D CT volumes of 92 patients. All of the CT image volumes were manually segmented independently three times by two clinically experienced radiologists, and the manual segmentation results served as the gold standard for evaluation. The experimental results show that the segmentation method achieved a Dice similarity coefficient of 87.18 ± 2.99%, compared to the manual segmentation. By combining the population learning and patient-specific learning methods, the proposed method is effective for segmenting the prostate on 3D CT images. The prostate CT segmentation method can be used in various applications including volume measurement and treatment planning of the prostate. © 2017 American Association of Physicists in Medicine.
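The paper's adaptive threshold method is not specified in the abstract; as a stand-in illustration only, the sketch below picks a threshold for a likelihood image with an Otsu-style between-class-variance criterion and binarises it. The likelihood values are simulated.

```python
import numpy as np

def otsu_threshold(values, n_bins=128):
    """Threshold maximising between-class variance of a 1D sample (Otsu's criterion)."""
    hist, edges = np.histogram(values, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for i in range(1, n_bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0
        m1 = (hist[i:] * centers[i:]).sum() / w1
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best_var:
            best_var, best_t = between, centers[i]
    return best_t

rng = np.random.default_rng(0)
likelihood = np.concatenate([rng.beta(2, 8, 8000), rng.beta(8, 2, 2000)])  # background vs prostate
t = otsu_threshold(likelihood)
mask = likelihood >= t
print(f"threshold = {t:.2f}, voxels labelled prostate = {int(mask.sum())}")
```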
Bigger is better! Hippocampal volume and declarative memory performance in healthy young men.
Pohlack, Sebastian T; Meyer, Patric; Cacciaglia, Raffaele; Liebscher, Claudia; Ridder, Stephanie; Flor, Herta
2014-01-01
The importance of the hippocampus for declarative memory processes is firmly established. Nevertheless, the issue of a correlation between declarative memory performance and hippocampal volume in healthy subjects still remains controversial. The aim of the present study was to investigate this relationship in more detail. For this purpose, 50 healthy young male participants performed the California Verbal Learning Test. Hippocampal volume was assessed by manual segmentation of high-resolution 3D magnetic resonance images. We found a significant positive correlation between putatively hippocampus-dependent memory measures like short-delay retention, long-delay retention and discriminability and percent hippocampal volume. No significant correlation with measures related to executive processes was found. In addition, percent amygdala volume was not related to any of these measures. Our data advance previous findings reported in studies of brain-damaged individuals in a large and homogeneous young healthy sample and are important for theories on the neural basis of episodic memory.
Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea
2015-12-21
PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME(exp) = 0 ± 3 mm; ΔME(clin) 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume delineation, position tracking and its robustness on highly irregular target movements, make this algorithm a useful tool for 4D-PET based volume definition for radiotherapy planning of lung cancer and may help to improve the reproducibility in PET quantification for therapy response assessment and prognosis.
NASA Astrophysics Data System (ADS)
Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea
2015-12-01
PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME(exp) = 0 ± 3 mm; ΔME(clin) = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume delineation, position tracking and its robustness on highly irregular target movements, make this algorithm a useful tool for 4D-PET based volume definition for radiotherapy planning of lung cancer and may help to improve the reproducibility in PET quantification for therapy response assessment and prognosis.
Automated segmentation of serous pigment epithelium detachment in SD-OCT images
NASA Astrophysics Data System (ADS)
Sun, Zhuli; Shi, Fei; Xiang, Dehui; Chen, Haoyu; Chen, Xinjian
2015-03-01
Pigment epithelium detachment (PED) is an important clinical manifestation of multiple chorio-retinal disease processes, which can cause the loss of central vision. A 3-D method is proposed to automatically segment serous PED in SD-OCT images. The proposed method consists of five steps: first, a curvature anisotropic diffusion filter is applied to remove speckle noise. Second, the graph search method is applied for abnormal retinal layer segmentation associated with retinal pigment epithelium (RPE) deformation. During this process, Bruch's membrane, which is not visible in the SD-OCT images, is estimated with the convex hull algorithm. Third, the foreground and background seeds are automatically obtained from the retinal layer segmentation result. Fourth, the serous PED is segmented based on the graph cut method. Finally, a post-processing step is applied to remove false positive regions based on mathematical morphology. The proposed method was tested on 20 SD-OCT volumes from 20 patients diagnosed with serous PED. The average true positive volume fraction (TPVF), false positive volume fraction (FPVF), dice similarity coefficient (DSC) and positive predictive value (PPV) are 97.19%, 0.03%, 96.34% and 95.59%, respectively. Linear regression analysis shows a strong correlation (r = 0.975) comparing the segmented PED volumes with the ground truth labeled by an ophthalmology expert. The proposed method can provide clinicians with accurate quantitative information, including shape, size and position of the PED regions, which can assist diagnosis and treatment.
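A hedged sketch of the convex-hull idea for Bruch's membrane: the segmented RPE surface bulges upward over a serous PED, and the lower convex hull of the RPE points gives a smooth virtual membrane bridging beneath the elevation. The 1D surface, its units and the hull-based reconstruction are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def lower_convex_hull(x, y):
    """Lower convex hull (monotone chain) of a curve sampled at increasing x."""
    hull = []
    for p in zip(x, y):
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            # Pop while the last two hull points and p do not make a left turn.
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return np.array(hull)

x = np.arange(200, dtype=float)                               # A-scan index
rpe_height = np.zeros_like(x)                                 # flat RPE baseline
rpe_height[60:140] = 25 * np.sin(np.linspace(0, np.pi, 80))   # elevation over the serous PED

hull = lower_convex_hull(x, rpe_height)
bruch_estimate = np.interp(x, hull[:, 0], hull[:, 1])         # resample the hull at every A-scan
print(f"maximum PED elevation ~ {(rpe_height - bruch_estimate).max():.1f} pixels")
```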
An automatic brain tumor segmentation tool.
Diaz, Idanis; Boulanger, Pierre; Greiner, Russell; Hoehn, Bret; Rowe, Lindsay; Murtha, Albert
2013-01-01
This paper introduces an automatic brain tumor segmentation method (ABTS) for segmenting multiple components of brain tumor using four magnetic resonance image modalities. ABTS's four stages involve automatic histogram multi-thresholding and morphological operations including geodesic dilation. Our empirical results, on 16 real tumors, show that ABTS works very effectively, achieving a Dice accuracy compared to expert segmentation of 81% in segmenting edema and 85% in segmenting gross tumor volume (GTV).
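A hedged sketch of two ABTS-style building blocks on a synthetic 2D image: a simple two-threshold split of the intensity histogram (a stand-in for the paper's histogram multi-thresholding) followed by geodesic dilation, i.e. growing a high-confidence marker without leaving the candidate mask. The quantile values and image contents are invented.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.normal(0.2, 0.05, size=(64, 64))                  # "background"
image[20:45, 20:45] = rng.normal(0.6, 0.05, size=(25, 25))    # "edema"
image[28:38, 28:38] = rng.normal(0.9, 0.05, size=(10, 10))    # "gross tumor"

# Two thresholds split the histogram into background / candidate / confident core.
t1, t2 = np.quantile(image, [0.85, 0.976])
candidate = image > t1          # edema + tumor candidates
marker = image > t2             # confident tumor core

# Geodesic dilation to stability: dilate the marker, but never outside the candidate mask.
tumor_region = ndimage.binary_dilation(marker, mask=candidate, iterations=0)
print("candidate px:", int(candidate.sum()), "reconstructed px:", int(tumor_region.sum()))
```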
NASA Astrophysics Data System (ADS)
Li, Xiaobing; Qiu, Tianshuang; Lebonvallet, Stephane; Ruan, Su
2010-02-01
This paper presents a brain tumor segmentation method which automatically segments tumors from human brain MRI volumes. The presented model is based on the symmetry of the human brain and the level set method. First, the midsagittal plane of an MRI volume is found, the slices of the volume that potentially contain tumor are identified according to their symmetry, and an initial boundary of the tumor is determined in the slice in which the tumor appears largest using watershed and morphological algorithms. Second, the level set method is applied to the initial boundary to drive the curve to evolve and stop at the appropriate tumor boundary. Last, the tumor boundary is projected slice by slice onto its adjacent slices as initial boundaries throughout the volume to segment the whole tumor. The experimental results are compared with manual tracing by an expert and show relatively good agreement.
Automatic liver volume segmentation and fibrosis classification
NASA Astrophysics Data System (ADS)
Bal, Evgeny; Klang, Eyal; Amitai, Michal; Greenspan, Hayit
2018-02-01
In this work, we present an automatic method for liver segmentation and fibrosis classification in liver computed tomography (CT) portal phase scans. The input is a full abdomen CT scan with an unknown number of slices, and the output is a liver volume segmentation mask and a fibrosis grade. A multi-stage analysis scheme is applied to each scan, including volume segmentation, texture feature extraction and SVM-based classification. The data contain portal phase CT examinations from 80 patients, taken with different scanners. Each examination has a matching Fibroscan grade. The dataset was subdivided into two groups: the first group contains healthy cases and mild fibrosis; the second group contains moderate fibrosis, severe fibrosis and cirrhosis. Using our automated algorithm, we achieved an average Dice index of 0.93 ± 0.05 for segmentation and a sensitivity of 0.92 and specificity of 0.81 for classification. To the best of our knowledge, this is the first end-to-end automatic framework for liver fibrosis classification; an approach that, once validated, can have great potential value in the clinic.
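Illustrative note: a hedged sketch of the classification stage only, with standardized texture features from the segmented liver fed to an SVM via scikit-learn; the feature extraction itself, the function name and the hyperparameters are assumptions for illustration.

    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def train_fibrosis_classifier(feature_matrix, labels):
        # feature_matrix: (n_patients, n_texture_features) array;
        # labels: 0 = healthy/mild fibrosis, 1 = moderate fibrosis to cirrhosis.
        clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0))
        clf.fit(feature_matrix, labels)
        return clf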
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Jinghao; Kim, Sung; Jabbour, Salma
2010-03-15
Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidean distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied on six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to 6.54 mm for ASM. The volume overlap ratio ranged from 79% to 91% for ACRASM and from 44% to 80% for ASM. These data demonstrated that the segmentation results of ACRASM were in better agreement with the corresponding benchmarks than those of ASM. The developed registration algorithm was quantitatively evaluated by comparing the registered target volumes from the pCT to the benchmarks on the CBCT. The mean distance and the root mean square error ranged from 0.38 to 2.2 mm and from 0.45 to 2.36 mm, respectively, between the CBCT images and the registered pCT. The mean overlap ratio of the prostate volumes ranged from 85.2% to 95% after registration. The average time of the ACRASM-based segmentation was under 1 min. The average time of the global transformation was from 2 to 4 min on two 3D volumes and the average time of the local transformation was from 20 to 34 s on two deformable superquadrics mesh models. Conclusions: A novel and fast segmentation and deformable registration method was developed to capture the transformation between the planning and treatment images for external beam radiotherapy of prostate cancers. This method increases the computational efficiency and may provide a foundation to achieve real time adaptive radiotherapy.
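Illustrative note: a SimpleITK sketch of a mutual-information-based global (rigid) registration between a planning CT and a CBCT, for orientation only; the deformable mesh-model refinement and the ACRASM segmentation described above are not reproduced, and the function name and parameter values are assumptions.

    import SimpleITK as sitk

    def global_mi_registration(pct, cbct):
        # Cast to float for the Mattes mutual information metric.
        fixed = sitk.Cast(cbct, sitk.sitkFloat32)
        moving = sitk.Cast(pct, sitk.sitkFloat32)
        initial = sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(initial, inPlace=False)
        return reg.Execute(fixed, moving)   # rigid transform mapping pCT into CBCT space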
NASA Astrophysics Data System (ADS)
Hori, Yasuaki; Yasuno, Yoshiaki; Sakai, Shingo; Matsumoto, Masayuki; Sugawara, Tomoko; Madjarova, Violeta; Yamanari, Masahiro; Makita, Shuichi; Yasui, Takeshi; Araki, Tsutomu; Itoh, Masahide; Yatagai, Toyohiko
2006-03-01
A set of fully automated algorithms that is specialized for analyzing a three-dimensional optical coherence tomography (OCT) volume of human skin is reported. The algorithm set first determines the skin surface of the OCT volume, and a depth-oriented algorithm provides the mean epidermal thickness, distribution map of the epidermis, and a segmented volume of the epidermis. Subsequently, an en face shadowgram is produced by an algorithm to visualize the infundibula in the skin with high contrast. The population and occupation ratio of the infundibula are provided by a histogram-based thresholding algorithm and a distance mapping algorithm. En face OCT slices at constant depths from the sample surface are extracted, and the histogram-based thresholding algorithm is again applied to these slices, yielding a three-dimensional segmented volume of the infundibula. The dermal attenuation coefficient is also calculated from the OCT volume in order to evaluate the skin texture. The algorithm set examines swept-source OCT volumes of the skins of several volunteers, and the results show the high stability, portability and reproducibility of the algorithm.
Singh, Ranjodh; Zhou, Zhiping; Tisnado, Jamie; Haque, Sofia; Peck, Kyung K.; Young, Robert J.; Tsiouris, Apostolos John; Thakur, Sunitha B.; Souweidane, Mark M.
2017-01-01
OBJECTIVE Accurately determining diffuse intrinsic pontine glioma (DIPG) tumor volume is clinically important. The aims of the current study were to 1) measure DIPG volumes using methods that require different degrees of subjective judgment; and 2) evaluate interobserver agreement of measurements made using these methods. METHODS Eight patients from a Phase I clinical trial testing convection-enhanced delivery (CED) of a therapeutic antibody were included in the study. Pre-CED, post–radiation therapy axial T2-weighted images were analyzed using 2 methods requiring high degrees of subjective judgment (picture archiving and communication system [PACS] polygon and Volume Viewer auto-contour methods) and 1 method requiring a low degree of subjective judgment (k-means clustering segmentation) to determine tumor volumes. Lin's concordance correlation coefficients (CCCs) were calculated to assess interobserver agreement. RESULTS The CCCs of measurements made by 2 observers with the PACS polygon and the Volume Viewer auto-contour methods were 0.9465 (lower 1-sided 95% confidence limit 0.8472) and 0.7514 (lower 1-sided 95% confidence limit 0.3143), respectively. Both were considered poor agreement. The CCC of measurements made using k-means clustering segmentation was 0.9938 (lower 1-sided 95% confidence limit 0.9772), which was considered substantial strength of agreement. CONCLUSIONS The poor interobserver agreement of the PACS polygon and Volume Viewer auto-contour methods highlighted the difficulty in consistently measuring DIPG tumor volumes using methods requiring high degrees of subjective judgment. k-means clustering segmentation, which requires a low degree of subjective judgment, showed better interobserver agreement and produced tumor volumes with delineated borders. PMID:27391980
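Illustrative note: Lin's concordance correlation coefficient used above can be computed from paired volume measurements as in the short NumPy sketch below; this is the generic formula, not the study's analysis code, and the function name is assumed.

    import numpy as np

    def lins_ccc(x, y):
        # x, y: paired measurements (e.g., tumor volumes from two observers).
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()                     # population (biased) variances
        cov = ((x - mx) * (y - my)).mean()
        return 2.0 * cov / (vx + vy + (mx - my) ** 2)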
Parmar, Chintan; Blezek, Daniel; Estepar, Raul San Jose; Pieper, Steve; Kim, John; Aerts, Hugo J. W. L.
2017-01-01
Purpose Accurate segmentation of lung nodules is crucial in the development of imaging biomarkers for predicting malignancy of the nodules. Manual segmentation is time consuming and affected by inter-observer variability. We evaluated the robustness and accuracy of a publicly available semiautomatic segmentation algorithm that is implemented in the 3D Slicer Chest Imaging Platform (CIP) and compared it with the performance of manual segmentation. Methods CT images of 354 manually segmented nodules were downloaded from the LIDC database. Four radiologists performed the manual segmentation and assessed various nodule characteristics. The semiautomatic CIP segmentation was initialized using the centroid of the manual segmentations, thereby generating four contours for each nodule. The robustness of both segmentation methods was assessed using the region of uncertainty (δ) and Dice similarity index (DSI), and the two methods were compared using the Wilcoxon signed-rank test (p_Wilcoxon < 0.05). The Dice similarity index between the manual and CIP segmentations (DSI_Agree) was computed to estimate the accuracy of the semiautomatic contours. Results The median computational time of the CIP segmentation was 10 s. The median CIP and manually segmented volumes were 477 ml and 309 ml, respectively. CIP segmentations were significantly more robust than manual segmentations (median δ_CIP = 14 ml, median DSI_CIP = 99% vs. median δ_manual = 222 ml, median DSI_manual = 82%) with p_Wilcoxon ≈ 10^-16. The agreement between CIP and manual segmentations had a median DSI_Agree of 60%. While 13% (47/354) of the nodules did not require any manual adjustment, minor to substantial manual adjustments were needed for 87% (305/354) of the nodules. CIP segmentations were observed to perform poorly (median DSI_Agree ≈ 50%) for non-/sub-solid nodules with subtle appearances and poorly defined boundaries. Conclusion Semi-automatic CIP segmentation can potentially reduce the physician workload for 13% of nodules owing to its computational efficiency and superior stability compared to manual segmentation. Although manual adjustment is needed for many cases, CIP segmentation provides a preliminary contour for physicians as a starting point. PMID:28594880
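Illustrative note: a small sketch of the robustness comparison described above, with per-nodule regions of uncertainty from the two methods compared by SciPy's Wilcoxon signed-rank test; variable and function names are assumed.

    from scipy.stats import wilcoxon

    def compare_robustness(delta_cip, delta_manual):
        # delta_*: per-nodule regions of uncertainty (ml) for each method, paired by nodule.
        stat, p = wilcoxon(delta_cip, delta_manual)
        return stat, p   # p < 0.05 -> the two methods differ significantly in robustness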
Xu, Yupeng; Yan, Ke; Kim, Jinman; Wang, Xiuying; Li, Changyang; Su, Li; Yu, Suqin; Xu, Xun; Feng, Dagan David
2017-01-01
Worldwide, polypoidal choroidal vasculopathy (PCV) is a common vision-threatening exudative maculopathy, and pigment epithelium detachment (PED) is an important clinical characteristic. Thus, precise and efficient PED segmentation is necessary for PCV clinical diagnosis and treatment. We propose a dual-stage learning framework via deep neural networks (DNN) for automated PED segmentation in PCV patients to avoid issues associated with manual PED segmentation (subjectivity, manual segmentation errors, and high time consumption). The optical coherence tomography scans of fifty patients were quantitatively evaluated with different algorithms and clinicians. Dual-stage DNN outperformed existing PED segmentation methods for all segmentation accuracy parameters, including true positive volume fraction (85.74 ± 8.69%), dice similarity coefficient (85.69 ± 8.08%), positive predictive value (86.02 ± 8.99%) and false positive volume fraction (0.38 ± 0.18%). Dual-stage DNN achieves accurate PED quantitative information, works with multiple types of PEDs and agrees well with manual delineation, suggesting that it is a potential automated assistant for PCV management. PMID:28966847
Quantifying Mesoscale Neuroanatomy Using X-Ray Microtomography
Gray Roncal, William; Prasad, Judy A.; Fernandes, Hugo L.; Gürsoy, Doga; De Andrade, Vincent; Fezzaa, Kamel; Xiao, Xianghui; Vogelstein, Joshua T.; Jacobsen, Chris; Körding, Konrad P.
2017-01-01
Methods for resolving the three-dimensional (3D) microstructure of the brain typically start by thinly slicing and staining the brain, followed by imaging numerous individual sections with visible light photons or electrons. In contrast, X-rays can be used to image thick samples, providing a rapid approach for producing large 3D brain maps without sectioning. Here we demonstrate the use of synchrotron X-ray microtomography (µCT) for producing mesoscale (∼1 µm³ resolution) brain maps from millimeter-scale volumes of mouse brain. We introduce a pipeline for µCT-based brain mapping that develops and integrates methods for sample preparation, imaging, and automated segmentation of cells, blood vessels, and myelinated axons, in addition to statistical analyses of these brain structures. Our results demonstrate that X-ray tomography achieves rapid quantification of large brain volumes, complementing other brain mapping and connectomics efforts. PMID:29085899
Guo, Ting; Winterburn, Julie L; Pipitone, Jon; Duerden, Emma G; Park, Min Tae M; Chau, Vann; Poskitt, Kenneth J; Grunau, Ruth E; Synnes, Anne; Miller, Steven P; Mallar Chakravarty, M
2015-01-01
The hippocampus, a medial temporal lobe structure central to learning and memory, is particularly vulnerable in preterm-born neonates. To date, segmentation of the hippocampus for preterm-born neonates has not yet been performed early-in-life (shortly after birth when clinically stable). The present study focuses on the development and validation of an automatic segmentation protocol that is based on the MAGeT-Brain (Multiple Automatically Generated Templates) algorithm to delineate the hippocampi of preterm neonates on their brain MRIs acquired at not only term-equivalent age but also early-in-life. First, we present a three-step manual segmentation protocol to delineate the hippocampus for preterm neonates and apply this protocol on 22 early-in-life and 22 term images. These manual segmentations are considered the gold standard in assessing the automatic segmentations. MAGeT-Brain, an automatic hippocampal segmentation pipeline, requires only a small number of input atlases and reduces the registration and resampling errors by employing an intermediate template library. We assess the segmentation accuracy of MAGeT-Brain in three validation studies, evaluate the hippocampal growth from early-in-life to term-equivalent age, and study the effect of preterm birth on the hippocampal volume. The first experiment thoroughly validates MAGeT-Brain segmentation in three sets of 10-fold Monte Carlo cross-validation (MCCV) analyses with 187 different groups of input atlases and templates. The second experiment segments the neonatal hippocampi on 168 early-in-life and 154 term images and evaluates the hippocampal growth rate of 125 infants from early-in-life to term-equivalent age. The third experiment analyzes the effect of gestational age (GA) at birth on the average hippocampal volume at early-in-life and term-equivalent age using linear regression. The final segmentations demonstrate that MAGeT-Brain consistently provides accurate segmentations in comparison to manually derived gold standards (mean Dice's Kappa > 0.79 and Euclidean distance < 1.3 mm between centroids). Using this method, we demonstrate that the average volume of the hippocampus is significantly different (p < 0.0001) at early-in-life (621.8 mm³) and term-equivalent age (958.8 mm³). Using these differences, we generalize the hippocampal growth rate to 38.3 ± 11.7 mm³/week and 40.5 ± 12.9 mm³/week for the left and right hippocampi respectively. Not surprisingly, younger gestational age at birth is associated with smaller volumes of the hippocampi (p = 0.001). MAGeT-Brain is capable of segmenting hippocampi accurately in preterm neonates, even at early-in-life. Hippocampal asymmetry with a larger right side is demonstrated on early-in-life images, suggesting that this phenomenon has its onset in the 3rd trimester of gestation. Hippocampal volume assessed at early-in-life and term-equivalent age is linearly associated with GA at birth, whereby smaller volumes are associated with earlier birth.
Guo, Ting; Winterburn, Julie L.; Pipitone, Jon; Duerden, Emma G.; Park, Min Tae M.; Chau, Vann; Poskitt, Kenneth J.; Grunau, Ruth E.; Synnes, Anne; Miller, Steven P.; Mallar Chakravarty, M.
2015-01-01
Introduction The hippocampus, a medial temporal lobe structure central to learning and memory, is particularly vulnerable in preterm-born neonates. To date, segmentation of the hippocampus for preterm-born neonates has not yet been performed early-in-life (shortly after birth when clinically stable). The present study focuses on the development and validation of an automatic segmentation protocol that is based on the MAGeT-Brain (Multiple Automatically Generated Templates) algorithm to delineate the hippocampi of preterm neonates on their brain MRIs acquired at not only term-equivalent age but also early-in-life. Methods First, we present a three-step manual segmentation protocol to delineate the hippocampus for preterm neonates and apply this protocol on 22 early-in-life and 22 term images. These manual segmentations are considered the gold standard in assessing the automatic segmentations. MAGeT-Brain, an automatic hippocampal segmentation pipeline, requires only a small number of input atlases and reduces the registration and resampling errors by employing an intermediate template library. We assess the segmentation accuracy of MAGeT-Brain in three validation studies, evaluate the hippocampal growth from early-in-life to term-equivalent age, and study the effect of preterm birth on the hippocampal volume. The first experiment thoroughly validates MAGeT-Brain segmentation in three sets of 10-fold Monte Carlo cross-validation (MCCV) analyses with 187 different groups of input atlases and templates. The second experiment segments the neonatal hippocampi on 168 early-in-life and 154 term images and evaluates the hippocampal growth rate of 125 infants from early-in-life to term-equivalent age. The third experiment analyzes the effect of gestational age (GA) at birth on the average hippocampal volume at early-in-life and term-equivalent age using linear regression. Results The final segmentations demonstrate that MAGeT-Brain consistently provides accurate segmentations in comparison to manually derived gold standards (mean Dice's Kappa > 0.79 and Euclidean distance < 1.3 mm between centroids). Using this method, we demonstrate that the average volume of the hippocampus is significantly different (p < 0.0001) at early-in-life (621.8 mm³) and term-equivalent age (958.8 mm³). Using these differences, we generalize the hippocampal growth rate to 38.3 ± 11.7 mm³/week and 40.5 ± 12.9 mm³/week for the left and right hippocampi respectively. Not surprisingly, younger gestational age at birth is associated with smaller volumes of the hippocampi (p = 0.001). Conclusions MAGeT-Brain is capable of segmenting hippocampi accurately in preterm neonates, even at early-in-life. Hippocampal asymmetry with a larger right side is demonstrated on early-in-life images, suggesting that this phenomenon has its onset in the 3rd trimester of gestation. Hippocampal volume assessed at early-in-life and term-equivalent age is linearly associated with GA at birth, whereby smaller volumes are associated with earlier birth. PMID:26740912
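Illustrative note: a brief SciPy sketch of the volumetric analysis described above, namely the growth rate between two time points and the linear relation between gestational age at birth and hippocampal volume; the function and variable names are assumptions, not the study's code.

    from scipy.stats import linregress

    def growth_rate(early_vol_mm3, term_vol_mm3, weeks_between):
        # Average volume gained per week between early-in-life and term-equivalent scans.
        return (term_vol_mm3 - early_vol_mm3) / weeks_between

    def volume_vs_ga(ga_weeks, volumes_mm3):
        # Linear regression of hippocampal volume on gestational age at birth.
        fit = linregress(ga_weeks, volumes_mm3)
        return fit.slope, fit.pvalue   # mm^3 gained per additional GA week, and its p value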
Multi-atlas segmentation enables robust multi-contrast MRI spleen segmentation for splenomegaly
NASA Astrophysics Data System (ADS)
Huo, Yuankai; Liu, Jiaqi; Xu, Zhoubing; Harrigan, Robert L.; Assad, Albert; Abramson, Richard G.; Landman, Bennett A.
2017-02-01
Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach to handle heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach interactively selects a subset of atlases using selective and iterative method for performance level estimation (SIMPLE) approach. To further control the outliers, semi-automated craniocaudal length based SIMPLE atlas selection (L-SIMPLE) is proposed to introduce a spatial prior in a fashion to guide the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1 weighted and 27 T2 weighted) was used to evaluate different methods. Both automated and semi-automated methods achieved median DSC > 0.9. The outliers were alleviated by the L-SIMPLE (≍1 min manual efforts per scan), which achieved 0.9713 Pearson correlation compared with the manual segmentation. The results demonstrated that the multi-atlas segmentation is able to achieve accurate spleen segmentation from the multi-contrast splenomegaly MRI scans.
Multi-atlas Segmentation Enables Robust Multi-contrast MRI Spleen Segmentation for Splenomegaly.
Huo, Yuankai; Liu, Jiaqi; Xu, Zhoubing; Harrigan, Robert L; Assad, Albert; Abramson, Richard G; Landman, Bennett A
2017-02-11
Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach to handle heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach interactively selects a subset of atlases using selective and iterative method for performance level estimation (SIMPLE) approach. To further control the outliers, semi-automated craniocaudal length based SIMPLE atlas selection (L-SIMPLE) is proposed to introduce a spatial prior in a fashion to guide the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1 weighted and 27 T2 weighted) was used to evaluate different methods. Both automated and semi-automated methods achieved median DSC > 0.9. The outliers were alleviated by the L-SIMPLE (≈1 min manual efforts per scan), which achieved 0.9713 Pearson correlation compared with the manual segmentation. The results demonstrated that the multi-atlas segmentation is able to achieve accurate spleen segmentation from the multi-contrast splenomegaly MRI scans.
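Illustrative note: a simplified, hedged analogue of SIMPLE-style iterative atlas selection, in which registered atlas labels are fused, atlases whose Dice agreement with the fused estimate is poor are discarded, and the estimate is re-fused; the thresholding rule, parameters and names below are assumptions, not the paper's algorithm.

    import numpy as np

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def simple_atlas_selection(atlas_labels, n_iter=5, alpha=0.5):
        # atlas_labels: list of registered binary atlas segmentations (same shape).
        selected = list(atlas_labels)
        fused = np.mean([a.astype(float) for a in selected], axis=0) >= 0.5
        for _ in range(n_iter):
            scores = np.array([dice(a, fused) for a in selected])
            keep = scores >= scores.mean() - alpha * scores.std()
            if keep.all():
                break
            selected = [a for a, k in zip(selected, keep) if k]
            fused = np.mean([a.astype(float) for a in selected], axis=0) >= 0.5
        return selected, fused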
Hanken, Henning; Schablowsky, Clemens; Smeets, Ralf; Heiland, Max; Sehner, Susanne; Riecke, Björn; Nourwali, Ibrahim; Vorwig, Oliver; Gröbe, Alexander; Al-Dam, Ahmed
2015-04-01
The reconstruction of large facial bony defects using microvascular transplants requires extensive surgery to achieve full rehabilitation of form and function. The purpose of this study is to measure the agreement between virtual plans and the actual results of maxillofacial reconstruction. This retrospective cohort study included 30 subjects receiving maxillofacial reconstruction with preoperative virtual planning. Parameters including defect size, position, angle and volume of the transplanted segments were compared between the virtual plan and the real outcome using a paired t test. A total of 63 bone segments were transplanted. The mean differences between the virtual planning and the postoperative situation were 1.17 mm for the defect sizes (95% confidence interval (CI) (-0.21 to 2.56 mm); p = 0.094), 1.69 mm (95% CI (1.26-2.11); p = 0.033) and 10.16° (95% CI (8.36°-11.96°); p < 0.001) for the resection planes, and 10.81° (95% CI (9.44°-12.17°); p < 0.001) for the planes of the donor segments. The orientation of the segments differed by 6.68° (95% CI (5.7°-7.66°); p < 0.001) from the virtual plan and the length of the segments by -0.12 mm (95% CI (0.89-0.65 mm); not significant (n.s.)), while the volume differed by 73.3% (95% CI (69.4-77.6%); p < 0.001). The distance between the transplanted segments and the remaining bone was 1.49 mm (95% CI (1.24-1.74); p < 0.001) and between the segments 1.49 mm (95% CI (1.16-1.81); p < 0.001). Virtual plans for mandibular and maxillofacial reconstruction can be realised with an excellent match. These highly satisfactory postoperative results are the basis for an optimal functional and aesthetic reconstruction in a single surgical procedure. The technique should be further investigated in larger study populations and should be further improved.
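Illustrative note: the paired comparison used above can be carried out with SciPy's paired t test as in the short sketch below; the function and variable names are assumed.

    from scipy.stats import ttest_rel

    def compare_plan_vs_outcome(planned, postop):
        # planned / postop: paired per-segment measurements (e.g., angles in degrees).
        t, p = ttest_rel(planned, postop)
        mean_diff = sum(b - a for a, b in zip(planned, postop)) / len(planned)
        return mean_diff, t, p   # p < 0.05 -> plan and outcome differ significantly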
Pulmonary lobar volumetry using novel volumetric computer-aided diagnosis and computed tomography
Iwano, Shingo; Kitano, Mariko; Matsuo, Keiji; Kawakami, Kenichi; Koike, Wataru; Kishimoto, Mariko; Inoue, Tsutomu; Li, Yuanzhong; Naganawa, Shinji
2013-01-01
OBJECTIVES To compare the accuracy of pulmonary lobar volumetry using the conventional number of segments method and a novel volumetric computer-aided diagnosis system using 3D computed tomography images. METHODS We acquired 50 consecutive preoperative 3D computed tomography examinations for lung tumours reconstructed at a 1-mm slice thickness. We calculated the lobar volume and the emphysematous lobar volume < −950 HU of each lobe using (i) the slice-by-slice method (reference standard), (ii) the number of segments method, and (iii) semi-automatic and (iv) automatic computer-aided diagnosis. We determined Pearson correlation coefficients between the reference standard and the three other methods for lobar volumes and emphysematous lobar volumes. We also compared the relative errors among the three measurement methods. RESULTS Both semi-automatic and automatic computer-aided diagnosis results were more strongly correlated with the reference standard than the number of segments method. The correlation coefficients for automatic computer-aided diagnosis were slightly lower than those for semi-automatic computer-aided diagnosis because there was one outlier among 50 cases (2%) in the right upper lobe and two outliers among 50 cases (4%) in the other lobes. The relative error of the number of segments method was significantly greater than those of semi-automatic and automatic computer-aided diagnosis (P < 0.001). The computational time for automatic computer-aided diagnosis was 1/2 to 2/3 of that for semi-automatic computer-aided diagnosis. CONCLUSIONS A novel lobar volumetry computer-aided diagnosis system could measure lobar volumes more precisely than the conventional number of segments method. Because semi-automatic computer-aided diagnosis and automatic computer-aided diagnosis were complementary, in clinical use it would be more practical to first measure volumes by automatic computer-aided diagnosis, and then use semi-automatic measurements if automatic computer-aided diagnosis failed. PMID:23526418
NASA Astrophysics Data System (ADS)
Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.
2005-01-01
Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of 3D shape-from-silhouette reconstruction of body segments dependent on number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with less than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 and more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10% at a 640x480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).
Blood-threshold CMR volume analysis of functional univentricular heart.
Secchi, Francesco; Alì, Marco; Petrini, Marcello; Pluchinotta, Francesca Romana; Cozzi, Andrea; Carminati, Mario; Sardanelli, Francesco
2018-05-01
To validate a blood-threshold (BT) segmentation software for cardiac magnetic resonance (CMR) cine images in patients with functional univentricular heart (FUH). We evaluated retrospectively 44 FUH patients aged 25 ± 8 years (mean ± standard deviation). For each patient, the epicardial contour of the single ventricle was manually segmented on cine images by two readers and an automated BT algorithm was independently applied to calculate end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF), and cardiac mass (CM). Aortic flow analysis (AFA) was performed on through-plane images to obtain forward volumes and used as a benchmark. Reproducibility was tested in a subgroup of 24 randomly selected patients. Wilcoxon, Spearman, and Bland-Altman statistics were used. No significant difference was found between SV (median 57.7 ml; interquartile range 47.9-75.6) and aortic forward flow (57.4 ml; 48.9-80.4) (p = 0.123), with a high correlation (r = 0.789, p < 0.001). Intra-reader reproducibility was 86% for SV segmentation, and 96% for AFA. Inter-reader reproducibility was 85 and 96%, respectively. The BT segmentation provided an accurate and reproducible assessment of heart function in FUH patients.
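Illustrative note: the ventricular indices reported above follow directly from the segmented end-diastolic and end-systolic volumes; a minimal sketch with generic definitions (not the BT software) follows, with illustrative numbers in the usage comment.

    def cardiac_function(edv_ml, esv_ml):
        # edv_ml / esv_ml: end-diastolic and end-systolic volumes in ml.
        sv = edv_ml - esv_ml            # stroke volume (ml)
        ef = 100.0 * sv / edv_ml        # ejection fraction (%)
        return sv, ef

    # Example (hypothetical values): cardiac_function(150.0, 92.3) -> (57.7, 38.5)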
Tam, Roger C; Traboulsee, Anthony; Riddehough, Andrew; Li, David K B
2012-01-01
The change in T1-hypointense lesion ("black hole") volume is an important marker of pathological progression in multiple sclerosis (MS). Black hole boundaries often have low contrast and are difficult to determine accurately, and most (semi-)automated segmentation methods first compute the T2-hyperintense lesions, which are a superset of the black holes and are typically more distinct, to form a search space for the T1w lesions. Two main potential sources of measurement noise in longitudinal black hole volume computation are partial volume and variability in the T2w lesion segmentation. A paired analysis approach is proposed herein that uses registration to equalize partial volume and lesion mask processing to combine T2w lesion segmentations across time. The scans of 247 MS patients are used to compare a selected black hole computation method with an enhanced version incorporating paired analysis, using rank correlation to a clinical variable (MS functional composite) as the primary outcome measure. The comparison is done at nine different levels of intensity as a previous study suggests that darker black holes may yield stronger correlations. The results demonstrate that paired analysis can strongly improve longitudinal correlation (from -0.148 to -0.303 in this sample) and may produce segmentations that are more sensitive to clinically relevant changes.
Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest
NASA Astrophysics Data System (ADS)
Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David
2009-02-01
Airways tree segmentation is an important step in quantitatively assessing the severity of and changes in several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used in guiding bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airways tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the combined VOI in question. The airway tree segmentation process is automatically terminated after the scheme assesses all defined VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airways tree segmentation scheme. The experimental results showed that (1) adopting this approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method segmented the airways trees reasonably accurately, with a lower false-positive identification rate compared with other previously reported schemes that are based on 2-D image segmentation and data analyses, and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enables the scheme to segment the airways trees reliably to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airways tree branches.
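Illustrative note: a much-simplified sketch of adaptive-threshold region growing within a single VOI; the multi-VOI bookkeeping and termination logic of the full scheme are omitted, and the threshold schedule, leakage heuristic and names are assumptions.

    import numpy as np
    from scipy import ndimage

    def grow_airway_in_voi(ct_voi, seed, start_hu=-950, stop_hu=-600, step=10,
                           max_fraction=0.2):
        # ct_voi: CT sub-volume in HU; seed: (z, y, x) index of a voxel inside the airway.
        prev = None
        for upper in range(start_hu, stop_hu, step):
            binary = ct_voi <= upper                 # air-like voxels at this threshold
            labels, _ = ndimage.label(binary)
            if labels[seed] == 0:                    # seed not captured yet; raise the threshold
                continue
            mask = labels == labels[seed]            # connected component containing the seed
            if mask.mean() > max_fraction:           # explosive growth suggests parenchyma leakage
                break
            prev = mask
        return prev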
Multiplexed aberration measurement for deep tissue imaging in vivo
Wang, Chen; Liu, Rui; Milkie, Daniel E.; Sun, Wenzhi; Tan, Zhongchao; Kerlin, Aaron; Chen, Tsai-Wen; Kim, Douglas S.; Ji, Na
2014-01-01
We describe a multiplexed aberration measurement method that modulates the intensity or phase of light rays at multiple pupil segments in parallel to determine their phase gradients. Applicable to fluorescent-protein-labeled structures of arbitrary complexity, it allows us to obtain diffraction-limited resolution in various samples in vivo. For the strongly scattering mouse brain, a single aberration correction improves structural and functional imaging of fine neuronal processes over a large imaging volume. PMID:25128976
CTC Sentinel. January 2008, Volume 1 Issue 2. A Profile of Tehrik-i-Taliban Pakistan
2008-01-01
virgins of paradise. Even senior Muntada al-Ansar administrators contributed eulogies in honor of Rahman, such as the notorious Saif al-Islam al...by large segments of the population. Gujarat has contributed further to Muslim alienation within India.17 One of India’s premier security...to commemorate a national holiday in central Baghdad were targeted by a suicide bomber, leaving nine people dead. – Reuters, January 6 January 7
Automatic segmentation of brain MRIs and mapping neuroanatomy across the human lifespan
NASA Astrophysics Data System (ADS)
Keihaninejad, Shiva; Heckemann, Rolf A.; Gousias, Ioannis S.; Rueckert, Daniel; Aljabar, Paul; Hajnal, Joseph V.; Hammers, Alexander
2009-02-01
A robust model for the automatic segmentation of human brain images into anatomically defined regions across the human lifespan would be highly desirable, but such structural segmentations of brain MRI are challenging due to age-related changes. We have developed a new method, based on established algorithms for automatic segmentation of young adults' brains. We used prior information from 30 anatomical atlases, which had been manually segmented into 83 anatomical structures. Target MRIs came from 80 subjects (~12 individuals/decade) aged 20 to 90 years, with equal numbers of men and women; data came from two different scanners (1.5 T, 3 T) in the IXI database. Each of the adult atlases was registered to each target MR image. By using additional information from segmentation into tissue classes (GM, WM and CSF) to initialise the warping based on label consistency similarity, before feeding this into the previously described normalised mutual information non-rigid registration, the registration became robust enough to accommodate atrophy and ventricular enlargement with age. The final segmentation was obtained by combining the 30 propagated atlases using decision fusion. Kernel smoothing was used for modelling the structural volume changes with aging. Example linear correlation coefficients with age were, for lateral ventricular volume, r_male = 0.76, r_female = 0.58 and, for hippocampal volume, r_male = -0.6, r_female = -0.4 (all p < 0.01).
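Illustrative note: the final combination step described above can be illustrated with a simple per-voxel majority vote over the propagated atlas labels; this is a stand-in for the actual decision-fusion rule, and the names are assumed.

    import numpy as np

    def decision_fusion(propagated_labels):
        # propagated_labels: list of integer label volumes (one per atlas, same shape).
        stack = np.stack(propagated_labels, axis=0)
        n_labels = int(stack.max()) + 1
        # Per-voxel vote count for each label, then the winning label per voxel.
        counts = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)], axis=0)
        return counts.argmax(axis=0)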
Kim, Youngwoo; Ge, Yinghui; Tao, Cheng; Zhu, Jianbing; Chapman, Arlene B.; Torres, Vicente E.; Yu, Alan S.L.; Mrug, Michal; Bennett, William M.; Flessner, Michael F.; Landsittel, Doug P.
2016-01-01
Background and objectives Our study developed a fully automated method for segmentation and volumetric measurements of kidneys from magnetic resonance images in patients with autosomal dominant polycystic kidney disease and assessed the performance of the automated method with the reference manual segmentation method. Design, setting, participants, & measurements Study patients were selected from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease. At the enrollment of the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease Study in 2000, patients with autosomal dominant polycystic kidney disease were between 15 and 46 years of age with relatively preserved GFRs. Our fully automated segmentation method was on the basis of a spatial prior probability map of the location of kidneys in abdominal magnetic resonance images and regional mapping with total variation regularization and propagated shape constraints that were formulated into a level set framework. T2–weighted magnetic resonance image sets of 120 kidneys were selected from 60 patients with autosomal dominant polycystic kidney disease and divided into the training and test datasets. The performance of the automated method in reference to the manual method was assessed by means of two metrics: Dice similarity coefficient and intraclass correlation coefficient of segmented kidney volume. The training and test sets were swapped for crossvalidation and reanalyzed. Results Successful segmentation of kidneys was performed with the automated method in all test patients. The segmented kidney volumes ranged from 177.2 to 2634 ml (mean, 885.4±569.7 ml). The mean Dice similarity coefficient ±SD between the automated and manual methods was 0.88±0.08. The mean correlation coefficient between the two segmentation methods for the segmented volume measurements was 0.97 (P<0.001 for each crossvalidation set). The results from the crossvalidation sets were highly comparable. Conclusions We have developed a fully automated method for segmentation of kidneys from abdominal magnetic resonance images in patients with autosomal dominant polycystic kidney disease with varying kidney volumes. The performance of the automated method was in good agreement with that of manual method. PMID:26797708
Kim, Youngwoo; Ge, Yinghui; Tao, Cheng; Zhu, Jianbing; Chapman, Arlene B; Torres, Vicente E; Yu, Alan S L; Mrug, Michal; Bennett, William M; Flessner, Michael F; Landsittel, Doug P; Bae, Kyongtae T
2016-04-07
Our study developed a fully automated method for segmentation and volumetric measurements of kidneys from magnetic resonance images in patients with autosomal dominant polycystic kidney disease and assessed the performance of the automated method with the reference manual segmentation method. Study patients were selected from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease. At the enrollment of the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease Study in 2000, patients with autosomal dominant polycystic kidney disease were between 15 and 46 years of age with relatively preserved GFRs. Our fully automated segmentation method was on the basis of a spatial prior probability map of the location of kidneys in abdominal magnetic resonance images and regional mapping with total variation regularization and propagated shape constraints that were formulated into a level set framework. T2-weighted magnetic resonance image sets of 120 kidneys were selected from 60 patients with autosomal dominant polycystic kidney disease and divided into the training and test datasets. The performance of the automated method in reference to the manual method was assessed by means of two metrics: Dice similarity coefficient and intraclass correlation coefficient of segmented kidney volume. The training and test sets were swapped for crossvalidation and reanalyzed. Successful segmentation of kidneys was performed with the automated method in all test patients. The segmented kidney volumes ranged from 177.2 to 2634 ml (mean, 885.4±569.7 ml). The mean Dice similarity coefficient ±SD between the automated and manual methods was 0.88±0.08. The mean correlation coefficient between the two segmentation methods for the segmented volume measurements was 0.97 (P<0.001 for each crossvalidation set). The results from the crossvalidation sets were highly comparable. We have developed a fully automated method for segmentation of kidneys from abdominal magnetic resonance images in patients with autosomal dominant polycystic kidney disease with varying kidney volumes. The performance of the automated method was in good agreement with that of manual method. Copyright © 2016 by the American Society of Nephrology.
Ahn, S J; Suh, S H; Lee, K-Y; Kim, J H; Seo, K-D; Lee, S
2015-11-01
Fluid-attenuated inversion recovery hyperintense vessels in stroke represent leptomeningeal collateral flow. We presumed that FLAIR hyperintense vessels would be more closely associated with arterial stenosis and perfusion abnormality in ischemic stroke on T2-PROPELLER-FLAIR than on T2-FLAIR. We retrospectively reviewed 35 patients with middle cerebral territorial infarction who underwent MR imaging. FLAIR hyperintense vessel scores were graded according to the number of segments with FLAIR hyperintense vessels in the MCA ASPECTS areas. We compared the predictability of FLAIR hyperintense vessels between T2-PROPELLER-FLAIR and T2-FLAIR for large-artery stenosis. The interagreement between perfusion abnormality and FLAIR hyperintense vessels was assessed. In subgroup analysis (9 patients with MCA horizontal segment occlusion), the association of FLAIR hyperintense vessels with ischemic lesion volume and perfusion abnormality volume was evaluated. FLAIR hyperintense vessel scores were significantly higher on T2-PROPELLER-FLAIR than on T2-FLAIR (3.50 ± 2.79 versus 1.21 ± 1.47, P < .01), and the sensitivity for large-artery stenosis was significantly improved on T2-PROPELLER-FLAIR (93% versus 68%, P = .03). FLAIR hyperintense vessels on T2-PROPELLER-FLAIR were more closely associated with perfusion abnormalities than they were on T2-FLAIR (κ = 0.64 and κ = 0.27, respectively). In subgroup analysis, FLAIR hyperintense vessels were positively correlated with ischemic lesion volume on T2-FLAIR, while the mismatch of FLAIR hyperintense vessels between the 2 sequences was negatively correlated with ischemic lesion volume (P = .01). In MCA stroke, FLAIR hyperintense vessels were more prominent on T2-PROPELLER-FLAIR compared with T2-FLAIR. In addition, FLAIR hyperintense vessels on T2-PROPELLER-FLAIR have a significantly higher sensitivity for predicting large-artery stenosis than they do on T2-FLAIR. Moreover, the areas showing FLAIR hyperintense vessels on T2-PROPELLER-FLAIR were more closely associated with perfusion abnormality than those on T2-FLAIR. © 2015 by American Journal of Neuroradiology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaofeng; Wu, Ning; Cheng, Guanghui
Purpose: To develop an automated magnetic resonance imaging (MRI) parotid segmentation method to monitor radiation-induced parotid gland changes in patients after head and neck radiation therapy (RT). Methods and Materials: The proposed method combines the atlas registration method, which captures the global variation of anatomy, with a machine learning technology, which captures the local statistical features, to automatically segment the parotid glands from the MRIs. The segmentation method consists of 3 major steps. First, an atlas (pre-RT MRI and manually contoured parotid gland mask) is built for each patient. A hybrid deformable image registration is used to map the pre-RT MRI to the post-RT MRI, and the transformation is applied to the pre-RT parotid volume. Second, the kernel support vector machine (SVM) is trained with the subject-specific atlas pair consisting of multiple features (intensity, gradient, and others) from the aligned pre-RT MRI and the transformed parotid volume. Third, the well-trained kernel SVM is used to differentiate the parotid from surrounding tissues in the post-RT MRIs by statistically matching multiple texture features. A longitudinal study of 15 patients undergoing head and neck RT was conducted: baseline MRI was acquired prior to RT, and the post-RT MRIs were acquired at 3-, 6-, and 12-month follow-up examinations. The resulting segmentations were compared with the physicians' manual contours. Results: Successful parotid segmentation was achieved for all 15 patients (42 post-RT MRIs). The average percentage volume differences between the automated segmentations and the physicians' manual contours were 7.98% for the left parotid and 8.12% for the right parotid. The average volume overlap was 91.1% ± 1.6% for the left parotid and 90.5% ± 2.4% for the right parotid. The parotid gland volume reduction at follow-up was 25% at 3 months, 27% at 6 months, and 16% at 12 months. Conclusions: We have validated our automated parotid segmentation algorithm in a longitudinal study. This segmentation method may be useful in future studies to address radiation-induced xerostomia in head and neck radiation therapy.
Marko, John F
2009-05-01
The Gauss linking number (Ca) of two flexible polymer rings which are tethered to one another is investigated. For ideal random walks, mean linking-squared varies with the square root of polymer length, while for self-avoiding walks, linking-squared increases logarithmically with polymer length. The free-energy cost of linking of polymer rings is therefore strongly dependent on the degree of self-avoidance, i.e., on intersegment excluded volume. Scaling arguments and numerical data are used to determine the free-energy cost of fixed linking number in both the fluctuation and large-Ca regimes; for ideal random walks, for |Ca| > N^{1/4}, the free energy of catenation is found to grow as |Ca/N^{1/4}|^{4/3}. When excluded volume interactions between segments are present, the free energy rapidly approaches a linear dependence on Gauss linking (dF/dCa ≈ 3.7 k_B T), suggestive of a novel "catenation condensation" effect. These results are used to show that condensation of long entangled polymers along their length, so as to increase excluded volume while decreasing the number of statistical segments, can drive disentanglement if a mechanism is present to permit topology change. For chromosomal DNA molecules, lengthwise condensation is therefore an effective means to bias topoisomerases to eliminate catenations between replicated chromatids. The results for mean-square catenation are also used to provide a simple approximate estimate for the "knotting length," or number of segments required to have a knot along a single circular polymer, explaining why the knotting length ranges from approximately 300 for an ideal random walk to 10^6 for a self-avoiding walk.
Applications of tuned mass dampers to improve performance of large space mirrors
NASA Astrophysics Data System (ADS)
Yingling, Adam J.; Agrawal, Brij N.
2014-01-01
In order for future imaging spacecraft to meet higher resolution imaging capability, it will be necessary to build large space telescopes with primary mirror diameters that range from 10 m to 20 m and do so with nanometer surface accuracy. Due to launch vehicle mass and volume constraints, these mirrors have to be deployable and lightweight, such as segmented mirrors using active optics to correct mirror surfaces with closed loop control. As a part of this work, system identification tests revealed that dynamic disturbances inherent in a laboratory environment are significant enough to degrade the optical performance of the telescope. Research was performed at the Naval Postgraduate School to identify the vibration modes most affecting the optical performance and evaluate different techniques to increase damping of those modes. Based on this work, tuned mass dampers (TMDs) were selected because of their simplicity in implementation and effectiveness in targeting specific modes. The selected damping mechanism was an eddy current damper where the damping and frequency of the damper could be easily changed. System identification of segments was performed to derive TMD specifications. Several configurations of the damper were evaluated, including the number and placement of TMDs, damping constant, and targeted structural modes. The final configuration consisted of two dampers located at the edge of each segment and resulted in 80% reduction in vibrations. The WFE for the system without dampers was 1.5 waves, with one TMD the WFE was 0.9 waves, and with two TMDs the WFE was 0.25 waves. This paper provides details of some of the work done in this area and includes theoretical predictions for optimum damping which were experimentally verified on a large aperture segmented system.
López-Linares, Karen; Aranjuelo, Nerea; Kabongo, Luis; Maclair, Gregory; Lete, Nerea; Ceresa, Mario; García-Familiar, Ainhoa; Macía, Iván; González Ballester, Miguel A
2018-05-01
Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetecNet detection network is adapted to perform region of interest extraction from a complete CTA, and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested on 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance, without the need for human intervention in most common cases. Copyright © 2018 Elsevier B.V. All rights reserved.
GPU-based relative fuzzy connectedness image segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.
2013-01-15
Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run times on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above-mentioned CPU-based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, respectively, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of the IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.
Tingelhoff, K; Moral, A I; Kunkel, M E; Rilk, M; Wagner, I; Eichhorn, K G; Wahl, F M; Bootz, F
2007-01-01
Segmentation of medical image data is getting more and more important over the last years. The results are used for diagnosis, surgical planning or workspace definition of robot-assisted systems. The purpose of this paper is to find out whether manual or semi-automatic segmentation is adequate for ENT surgical workflow or whether fully automatic segmentation of paranasal sinuses and nasal cavity is needed. We present a comparison of manual and semi-automatic segmentation of paranasal sinuses and the nasal cavity. Manual segmentation is performed by custom software whereas semi-automatic segmentation is realized by a commercial product (Amira). For this study we used a CT dataset of the paranasal sinuses which consists of 98 transversal slices, each 1.0 mm thick, with a resolution of 512 x 512 pixels. For the analysis of both segmentation procedures we used volume, extension (width, length and height), segmentation time and 3D-reconstruction. The segmentation time was reduced from 960 minutes with manual to 215 minutes with semi-automatic segmentation. We found highest variances segmenting nasal cavity. For the paranasal sinuses manual and semi-automatic volume differences are not significant. Dependent on the segmentation accuracy both approaches deliver useful results and could be used for e.g. robot-assisted systems. Nevertheless both procedures are not useful for everyday surgical workflow, because they take too much time. Fully automatic and reproducible segmentation algorithms are needed for segmentation of paranasal sinuses and nasal cavity.
Partial volume correction and image analysis methods for intersubject comparison of FDG-PET studies
NASA Astrophysics Data System (ADS)
Yang, Jun
2000-12-01
Partial volume effect is an artifact mainly due to the limited imaging sensor resolution. It creates bias in the measured activity in small structures and around tissue boundaries. In brain FDG-PET studies, especially for Alzheimer's disease study where there is serious gray matter atrophy, accurate estimate of cerebral metabolic rate of glucose is even more problematic due to large amount of partial volume effect. In this dissertation, we developed a framework enabling inter-subject comparison of partial volume corrected brain FDG-PET studies. The framework is composed of the following image processing steps: (1)MRI segmentation, (2)MR-PET registration, (3)MR based PVE correction, (4)MR 3D inter-subject elastic mapping. Through simulation studies, we showed that the newly developed partial volume correction methods, either pixel based or ROI based, performed better than previous methods. By applying this framework to a real Alzheimer's disease study, we demonstrated that the partial volume corrected glucose rates vary significantly among the control, at risk and disease patient groups and this framework is a promising tool useful for assisting early identification of Alzheimer's patients.
Motion-aware stroke volume quantification in 4D PC-MRI data of the human aorta.
Köhler, Benjamin; Preim, Uta; Grothoff, Matthias; Gutberlet, Matthias; Fischbach, Katharina; Preim, Bernhard
2016-02-01
4D PC-MRI enables the noninvasive measurement of time-resolved, three-dimensional blood flow data that allow quantification of the hemodynamics. Stroke volumes are essential to assess the cardiac function and evolution of different cardiovascular diseases. The calculation depends on the wall position and vessel orientation, which both change during the cardiac cycle due to the heart muscle contraction and the pumped blood. However, current systems for the quantitative 4D PC-MRI data analysis neglect the dynamic character and instead employ a static 3D vessel approximation. We quantify differences between stroke volumes in the aorta obtained with and without consideration of its dynamics. We describe a method that uses the approximating 3D segmentation to automatically initialize segmentation algorithms that require regions inside and outside the vessel for each temporal position. This enables the use of graph cuts to obtain 4D segmentations, extract vessel surfaces including centerlines for each temporal position and derive motion information. The stroke volume quantification is compared using measuring planes in static (3D) vessels, planes with fixed angulation inside dynamic vessels (this corresponds to the common 2D PC-MRI) and moving planes inside dynamic vessels. Seven datasets with different pathologies such as aneurysms and coarctations were evaluated in close collaboration with radiologists. Compared to the experts' manual stroke volume estimations, motion-aware quantification performs, on average, 1.57% better than calculations without motion consideration. The mean difference between stroke volumes obtained with the different methods is 7.82%. Automatically obtained 4D segmentations overlap by 85.75% with manually generated ones. Incorporating motion information in the stroke volume quantification yields slight but not statistically significant improvements. The presented method is feasible for the clinical routine, since computation times are low and essential parts run fully automatically. The 4D segmentations can be used for other algorithms as well. The simultaneous visualization and quantification may support the understanding and interpretation of cardiac blood flow.
Molinari, Francesco; Pirronti, Tommaso; Sverzellati, Nicola; Diciotti, Stefano; Amato, Michele; Paolantonio, Guglielmo; Gentile, Luigia; Parapatt, George K; D'Argento, Francesco; Kuhnigk, Jan-Martin
2013-01-01
We aimed to compare the intra- and interoperator variability of lobar volumetry and emphysema scores obtained by semi-automated and manual segmentation techniques in lung emphysema patients. In two sessions held three months apart, two operators performed lobar volumetry of unenhanced chest computed tomography examinations of 47 consecutive patients with chronic obstructive pulmonary disease and lung emphysema. Both operators used the manual and semi-automated segmentation techniques. The intra- and interoperator variability of the volumes and emphysema scores obtained by semi-automated segmentation was compared with the variability obtained by manual segmentation of the five pulmonary lobes. The intra- and interoperator variability of the lobar volumes decreased when using semi-automated lobe segmentation (coefficients of repeatability for the first operator: right upper lobe, 147 vs. 96.3; right middle lobe, 137.7 vs. 73.4; right lower lobe, 89.2 vs. 42.4; left upper lobe, 262.2 vs. 54.8; and left lower lobe, 260.5 vs. 56.5; coefficients of repeatability for the second operator: right upper lobe, 61.4 vs. 48.1; right middle lobe, 56 vs. 46.4; right lower lobe, 26.9 vs. 16.7; left upper lobe, 61.4 vs. 27; and left lower lobe, 63.6 vs. 27.5; coefficients of reproducibility in the interoperator analysis: right upper lobe, 191.3 vs. 102.9; right middle lobe, 219.8 vs. 126.5; right lower lobe, 122.6 vs. 90.1; left upper lobe, 166.9 vs. 68.7; and left lower lobe, 168.7 vs. 71.6). The coefficients of repeatability and reproducibility of emphysema scores also decreased when using semi-automated segmentation and had ranges that varied depending on the target lobe and selected threshold of emphysema. Semi-automated segmentation reduces the intra- and interoperator variability of lobar volumetry and provides a more objective tool than manual technique for quantifying lung volumes and severity of emphysema.
Automatic segmentation and volumetry of multiple sclerosis brain lesions from MR images
Jain, Saurabh; Sima, Diana M.; Ribbens, Annemie; Cambron, Melissa; Maertens, Anke; Van Hecke, Wim; De Mey, Johan; Barkhof, Frederik; Steenwijk, Martijn D.; Daams, Marita; Maes, Frederik; Van Huffel, Sabine; Vrenken, Hugo; Smeets, Dirk
2015-01-01
The location and extent of white matter lesions on magnetic resonance imaging (MRI) are important criteria for diagnosis, follow-up and prognosis of multiple sclerosis (MS). Clinical trials have shown that quantitative values, such as lesion volumes, are meaningful in MS prognosis. Manual lesion delineation for the segmentation of lesions is, however, time-consuming and suffers from observer variability. In this paper, we propose MSmetrix, an accurate and reliable automatic method for lesion segmentation based on MRI, independent of scanner or acquisition protocol and without requiring any training data. In MSmetrix, 3D T1-weighted and FLAIR MR images are used in a probabilistic model to detect white matter (WM) lesions as an outlier to normal brain while segmenting the brain tissue into grey matter, WM and cerebrospinal fluid. The actual lesion segmentation is performed based on prior knowledge about the location (within WM) and the appearance (hyperintense on FLAIR) of lesions. The accuracy of MSmetrix is evaluated by comparing its output with expert reference segmentations of 20 MRI datasets of MS patients. Spatial overlap (Dice) between the MSmetrix and the expert lesion segmentation is 0.67 ± 0.11. The intraclass correlation coefficient (ICC) equals 0.8 indicating a good volumetric agreement between the MSmetrix and expert labelling. The reproducibility of MSmetrix' lesion volumes is evaluated based on 10 MS patients, scanned twice with a short interval on three different scanners. The agreement between the first and the second scan on each scanner is evaluated through the spatial overlap and absolute lesion volume difference between them. The spatial overlap was 0.69 ± 0.14 and absolute total lesion volume difference between the two scans was 0.54 ± 0.58 ml. Finally, the accuracy and reproducibility of MSmetrix compare favourably with other publicly available MS lesion segmentation algorithms, applied on the same data using default parameter settings. PMID:26106562
Semiautomatic segmentation of liver metastases on volumetric CT images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Jiayong; Schwartz, Lawrence H.; Zhao, Binsheng, E-mail: bz2166@cumc.columbia.edu
2015-11-15
Purpose: Accurate segmentation and quantification of liver metastases on CT images are critical to surgery/radiation treatment planning and therapy response assessment. To date, there are no reliable methods to perform such segmentation automatically. In this work, the authors present a method for semiautomatic delineation of liver metastases on contrast-enhanced volumetric CT images. Methods: The first step is to manually place a seed region-of-interest (ROI) in the lesion on an image. This ROI will (1) serve as an internal marker and (2) assist in automatically identifying an external marker. With these two markers, lesion contour on the image can be accuratelymore » delineated using traditional watershed transformation. Density information will then be extracted from the segmented 2D lesion and help determine the 3D connected object that is a candidate of the lesion volume. The authors have developed a robust strategy to automatically determine internal and external markers for marker-controlled watershed segmentation. By manually placing a seed region-of-interest in the lesion to be delineated on a reference image, the method can automatically determine dual threshold values to approximately separate the lesion from its surrounding structures and refine the thresholds from the segmented lesion for the accurate segmentation of the lesion volume. This method was applied to 69 liver metastases (1.1–10.3 cm in diameter) from a total of 15 patients. An independent radiologist manually delineated all lesions and the resultant lesion volumes served as the “gold standard” for validation of the method’s accuracy. Results: The algorithm received a median overlap, overestimation ratio, and underestimation ratio of 82.3%, 6.0%, and 11.5%, respectively, and a median average boundary distance of 1.2 mm. Conclusions: Preliminary results have shown that volumes of liver metastases on contrast-enhanced CT images can be accurately estimated by a semiautomatic segmentation method.« less
NASA Astrophysics Data System (ADS)
Wang, Jui-Kai; Kardon, Randy H.; Garvin, Mona K.
2015-03-01
In cases of optic-nerve-head edema, the presence of the swelling reduces the visibility of the underlying neural canal opening (NCO) within spectral-domain optical coherence tomography (SD-OCT) volumes. Consequently, traditional SD-OCT-based NCO segmentation methods often overestimate the size of the NCO. The visibility of the NCO can be improved using high-definition 2D raster scans, but such scans do not provide 3D contextual image information. In this work, we present a semi-automated approach for the segmentation of the NCO in cases of optic disc edema by combining image information from volumetric and high-definition raster SD-OCT image sequences. In particular, for each subject, five high-definition OCT B-scans and the OCT volume are first separately segmented, and then the five high-definition B-scans are automatically registered to the OCT volume. Next, six NCO points are placed (manually, in this work) in the central three high-definition OCT B-scans (two points for each central B-scans) and are automatically transferred into the OCT volume. Utilizing a combination of these mapped points and the 3D image information from the volumetric scans, a graph-based approach is used to identify the complete NCO on the OCT en-face image. The segmented NCO points using the new approach were significantly closer to expert-marked points than the segmented NCO points using a traditional approach (root mean square differences in pixels: 5.34 vs. 21.71, p < 0.001).
Automatic 3D liver location and segmentation via convolutional neural network and graph cut.
Lu, Fang; Wu, Fa; Hu, Peijun; Peng, Zhiyi; Kong, Dexing
2017-02-01
Segmentation of the liver from abdominal computed tomography (CT) images is an essential step in some computer-assisted clinical interventions, such as surgery planning for living donor liver transplant, radiotherapy and volume measurement. In this work, we develop a deep learning algorithm with graph cut refinement to automatically segment the liver in CT scans. The proposed method consists of two main steps: (i) simultaneously liver detection and probabilistic segmentation using 3D convolutional neural network; (ii) accuracy refinement of the initial segmentation with graph cut and the previously learned probability map. The proposed approach was validated on forty CT volumes taken from two public databases MICCAI-Sliver07 and 3Dircadb1. For the MICCAI-Sliver07 test dataset, the calculated mean ratios of volumetric overlap error (VOE), relative volume difference (RVD), average symmetric surface distance (ASD), root-mean-square symmetric surface distance (RMSD) and maximum symmetric surface distance (MSD) are 5.9, 2.7 %, 0.91, 1.88 and 18.94 mm, respectively. For the 3Dircadb1 dataset, the calculated mean ratios of VOE, RVD, ASD, RMSD and MSD are 9.36, 0.97 %, 1.89, 4.15 and 33.14 mm, respectively. The proposed method is fully automatic without any user interaction. Quantitative results reveal that the proposed approach is efficient and accurate for hepatic volume estimation in a clinical setup. The high correlation between the automatic and manual references shows that the proposed method can be good enough to replace the time-consuming and nonreproducible manual segmentation method.
Barba-J, Leiner; Escalante-Ramírez, Boris; Vallejo Venegas, Enrique; Arámbula Cosío, Fernando
2018-05-01
Analysis of cardiac images is a fundamental task to diagnose heart problems. Left ventricle (LV) is one of the most important heart structures used for cardiac evaluation. In this work, we propose a novel 3D hierarchical multiscale segmentation method based on a local active contour (AC) model and the Hermite transform (HT) for LV analysis in cardiac magnetic resonance (MR) and computed tomography (CT) volumes in short axis view. Features such as directional edges, texture, and intensities are analyzed using the multiscale HT space. A local AC model is configured using the HT coefficients and geometrical constraints. The endocardial and epicardial boundaries are used for evaluation. Segmentation of the endocardium is controlled using elliptical shape constraints. The final endocardial shape is used to define the geometrical constraints for segmentation of the epicardium. We follow the assumption that epicardial and endocardial shapes are similar in volumes with short axis view. An initialization scheme based on a fuzzy C-means algorithm and mathematical morphology was designed. The algorithm performance was evaluated using cardiac MR and CT volumes in short axis view demonstrating the feasibility of the proposed method.
NASA Astrophysics Data System (ADS)
Jang, Yujin; Hong, Helen; Chung, Jin Wook; Yoon, Young Ho
2012-02-01
We propose an effective technique for the extraction of liver boundary based on multi-planar anatomy and deformable surface model in abdominal contrast-enhanced CT images. Our method is composed of four main steps. First, for extracting an optimal volume circumscribing a liver, lower and side boundaries are defined by positional information of pelvis and rib. An upper boundary is defined by separating the lungs and heart from CT images. Second, for extracting an initial liver volume, optimal liver volume is smoothed by anisotropic diffusion filtering and is segmented using adaptively selected threshold value. Third, for removing neighbor organs from initial liver volume, morphological opening and connected component labeling are applied to multiple planes. Finally, for refining the liver boundaries, deformable surface model is applied to a posterior liver surface and missing left robe in previous step. Then, probability summation map is generated by calculating regional information of the segmented liver in coronal plane, which is used for restoring the inaccurate liver boundaries. Experimental results show that our segmentation method can accurately extract liver boundaries without leakage to neighbor organs in spite of various liver shape and ambiguous boundary.
Integrating segmentation methods from the Insight Toolkit into a visualization application.
Martin, Ken; Ibáñez, Luis; Avila, Lisa; Barré, Sébastien; Kaspersen, Jon H
2005-12-01
The Insight Toolkit (ITK) initiative from the National Library of Medicine has provided a suite of state-of-the-art segmentation and registration algorithms ideally suited to volume visualization and analysis. A volume visualization application that effectively utilizes these algorithms provides many benefits: it allows access to ITK functionality for non-programmers, it creates a vehicle for sharing and comparing segmentation techniques, and it serves as a visual debugger for algorithm developers. This paper describes the integration of image processing functionalities provided by the ITK into VolView, a visualization application for high performance volume rendering. A free version of this visualization application is publicly available and is available in the online version of this paper. The process for developing ITK plugins for VolView according to the publicly available API is described in detail, and an application of ITK VolView plugins to the segmentation of Abdominal Aortic Aneurysms (AAAs) is presented. The source code of the ITK plugins is also publicly available and it is included in the online version.
Selecting exposure measures in crash rate prediction for two-lane highway segments.
Qin, Xiao; Ivan, John N; Ravishanker, Nalini
2004-03-01
A critical part of any risk assessment is identifying how to represent exposure to the risk involved. Recent research shows that the relationship between crash count and traffic volume is non-linear; consequently, a simple crash rate computed as the ratio of crash count to volume is not proper for comparing the safety of sites with different traffic volumes. To solve this problem, we describe a new approach for relating traffic volume and crash incidence. Specifically, we disaggregate crashes into four types: (1) single-vehicle, (2) multi-vehicle same direction, (3) multi-vehicle opposite direction, and (4) multi-vehicle intersecting, and define candidate exposure measures for each that we hypothesize will be linear with respect to each crash type. This paper describes initial investigation using crash and physical characteristics data for highway segments in Michigan from the Highway Safety Information System (HSIS). We use zero-inflated-Poisson (ZIP) modeling to estimate models for predicting counts for each of the above crash types as a function of the daily volume, segment length, speed limit and roadway width. We found that the relationship between crashes and the daily volume (AADT) is non-linear and varies by crash type, and is significantly different from the relationship between crashes and segment length for all crash types. Our research will provide information to improve accuracy of crash predictions and, thus, facilitate more meaningful comparison of the safety record of seemingly similar highway locations.
Garteiser, Philippe; Doblas, Sabrina; Towner, Rheal A; Griffin, Timothy M
2013-11-01
To use an automated water-suppressed magnetic resonance imaging (MRI) method to objectively assess adipose tissue (AT) volumes in whole body and specific regional body components (subcutaneous, thoracic and peritoneal) of obese and lean mice. Water-suppressed MR images were obtained on a 7T, horizontal-bore MRI system in whole bodies (excluding head) of 26 week old male C57BL6J mice fed a control (10% kcal fat) or high-fat diet (60% kcal fat) for 20 weeks. Manual (outlined regions) versus automated (Gaussian fitting applied to threshold-weighted images) segmentation procedures were compared for whole body AT and regional AT volumes (i.e., subcutaneous, thoracic, and peritoneal). The AT automated segmentation method was compared to dual-energy X-ray (DXA) analysis. The average AT volumes for whole body and individual compartments correlated well between the manual outlining and the automated methods (R2>0.77, p<0.05). Subcutaneous, peritoneal, and total body AT volumes were increased 2-3 fold and thoracic AT volume increased more than 5-fold in diet-induced obese mice versus controls (p<0.05). MRI and DXA-based method comparisons were highly correlative (R2=0.94, p<0.0001). Automated AT segmentation of water-suppressed MRI data using a global Gaussian filtering algorithm resulted in a fairly accurate assessment of total and regional AT volumes in a pre-clinical mouse model of obesity. © 2013 Elsevier Inc. All rights reserved.
CT-based manual segmentation and evaluation of paranasal sinuses.
Pirner, S; Tingelhoff, K; Wagner, I; Westphal, R; Rilk, M; Wahl, F M; Bootz, F; Eichhorn, Klaus W G
2009-04-01
Manual segmentation of computed tomography (CT) datasets was performed for robot-assisted endoscope movement during functional endoscopic sinus surgery (FESS). Segmented 3D models are needed for the robots' workspace definition. A total of 50 preselected CT datasets were each segmented in 150-200 coronal slices with 24 landmarks being set. Three different colors for segmentation represent diverse risk areas. Extension and volumetric measurements were performed. Three-dimensional reconstruction was generated after segmentation. Manual segmentation took 8-10 h for each CT dataset. The mean volumes were: right maxillary sinus 17.4 cm(3), left side 17.9 cm(3), right frontal sinus 4.2 cm(3), left side 4.0 cm(3), total frontal sinuses 7.9 cm(3), sphenoid sinus right side 5.3 cm(3), left side 5.5 cm(3), total sphenoid sinus volume 11.2 cm(3). Our manually segmented 3D-models present the patient's individual anatomy with a special focus on structures in danger according to the diverse colored risk areas. For safe robot assistance, the high-accuracy models represent an average of the population for anatomical variations, extension and volumetric measurements. They can be used as a database for automatic model-based segmentation. None of the segmentation methods so far described provide risk segmentation. The robot's maximum distance to the segmented border can be adjusted according to the differently colored areas.
Automated tumor volumetry using computer-aided image segmentation.
Gaonkar, Bilwaj; Macyszyn, Luke; Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A; Ali, Zarina S; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M; Davatzikos, Christos
2015-05-01
Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0-5 rating scale where 5 indicated perfect segmentation. The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
Automated Tumor Volumetry Using Computer-Aided Image Segmentation
Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A.; Ali, Zarina S.; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M.; Davatzikos, Christos
2015-01-01
Rationale and Objectives Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. Materials and Methods A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Results Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0–5 rating scale where 5 indicated perfect segmentation. Conclusions The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. PMID:25770633
3D tumor measurement in cone-beam CT breast imaging
NASA Astrophysics Data System (ADS)
Chen, Zikuan; Ning, Ruola
2004-05-01
Cone-beam CT breast imaging provides a digital volume representation of a breast. With a digital breast volume, the immediate task is to extract the breast tissue information, especially for suspicious tumors, preferably in an automatic manner or with minimal user interaction. This paper reports a program for three-dimensional breast tissue analysis. It consists of volumetric segmentation (by globally thresholding), subsegmentation (connection-based separation), and volumetric component measurement (volume, surface, shape, and other geometrical specifications). A combination scheme of multi-thresholding and binary volume morphology is proposed to fast determine the surface gradients, which may be interpreted as the surface evolution (outward growth or inward shrinkage) for a tumor volume. This scheme is also used to optimize the volumetric segmentation. With a binary volume, we decompose the foreground into components according to spatial connectedness. Since this decomposition procedure is performed after volumetric segmentation, it is called subsegmentation. The subsegmentation brings the convenience for component visualization and measurement, in the whole support space, without interference from others. Upon the tumor component identification, we measure the following specifications: volume, surface area, roundness, elongation, aspect, star-shapedness, and location (centroid). A 3D morphological operation is used to extract the cluster shell and, by delineating the corresponding volume from the grayscale volume, to measure the shell stiffness. This 3D tissue measurement is demonstrated with a tumor-borne breast specimen (a surgical part).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heeswijk, Miriam M. van; Department of Surgery, Maastricht University Medical Centre, Maastricht; Lambregts, Doenja M.J., E-mail: d.lambregts@nki.nl
Purpose: Diffusion-weighted imaging (DWI) tumor volumetry is promising for rectal cancer response assessment, but an important drawback is that manual per-slice tumor delineation can be highly time consuming. This study investigated whether manual DWI-volumetry can be reproduced using a (semi)automated segmentation approach. Methods and Materials: Seventy-nine patients underwent magnetic resonance imaging (MRI) that included DWI (highest b value [b1000 or b1100]) before and after chemoradiation therapy (CRT). Tumor volumes were assessed on b1000 (or b1100) DWI before and after CRT by means of (1) automated segmentation (by 2 inexperienced readers), (2) semiautomated segmentation (manual adjustment of the volumes obtained bymore » method 1 by 2 radiologists), and (3) manual segmentation (by 2 radiologists); this last assessment served as the reference standard. Intraclass correlation coefficients (ICC) and Dice similarity indices (DSI) were calculated to evaluate agreement between different methods and observers. Measurement times (from a radiologist's perspective) were recorded for each method. Results: Tumor volumes were not significantly different among the 3 methods, either before or after CRT (P=.08 to .92). ICCs compared to manual segmentation were 0.80 to 0.91 and 0.53 to 0.66 before and after CRT, respectively, for the automated segmentation and 0.91 to 0.97 and 0.61 to 0.75, respectively, for the semiautomated method. Interobserver agreement (ICC) pre and post CRT was 0.82 and 0.59 for automated segmentation, 0.91 and 0.73 for semiautomated segmentation, and 0.91 and 0.75 for manual segmentation, respectively. Mean DSI between the automated and semiautomated method were 0.83 and 0.58 pre-CRT and post-CRT, respectively; DSI between the automated and manual segmentation were 0.68 and 0.42 and 0.70 and 0.41 between the semiautomated and manual segmentation, respectively. Median measurement time for the radiologists was 0 seconds (pre- and post-CRT) for the automated method, 41 to 69 seconds (pre-CRT) and 60 to 67 seconds (post-CRT) for the semiautomated method, and 180 to 296 seconds (pre-CRT) and 84 to 91 seconds (post-CRT) for the manual method. Conclusions: DWI volumetry using a semiautomated segmentation approach is promising and a potentially time-saving alternative to manual tumor delineation, particularly for primary tumor volumetry. Once further optimized, it could be a helpful tool for tumor response assessment in rectal cancer.« less
van Heeswijk, Miriam M; Lambregts, Doenja M J; van Griethuysen, Joost J M; Oei, Stanley; Rao, Sheng-Xiang; de Graaff, Carla A M; Vliegen, Roy F A; Beets, Geerard L; Papanikolaou, Nikos; Beets-Tan, Regina G H
2016-03-15
Diffusion-weighted imaging (DWI) tumor volumetry is promising for rectal cancer response assessment, but an important drawback is that manual per-slice tumor delineation can be highly time consuming. This study investigated whether manual DWI-volumetry can be reproduced using a (semi)automated segmentation approach. Seventy-nine patients underwent magnetic resonance imaging (MRI) that included DWI (highest b value [b1000 or b1100]) before and after chemoradiation therapy (CRT). Tumor volumes were assessed on b1000 (or b1100) DWI before and after CRT by means of (1) automated segmentation (by 2 inexperienced readers), (2) semiautomated segmentation (manual adjustment of the volumes obtained by method 1 by 2 radiologists), and (3) manual segmentation (by 2 radiologists); this last assessment served as the reference standard. Intraclass correlation coefficients (ICC) and Dice similarity indices (DSI) were calculated to evaluate agreement between different methods and observers. Measurement times (from a radiologist's perspective) were recorded for each method. Tumor volumes were not significantly different among the 3 methods, either before or after CRT (P=.08 to .92). ICCs compared to manual segmentation were 0.80 to 0.91 and 0.53 to 0.66 before and after CRT, respectively, for the automated segmentation and 0.91 to 0.97 and 0.61 to 0.75, respectively, for the semiautomated method. Interobserver agreement (ICC) pre and post CRT was 0.82 and 0.59 for automated segmentation, 0.91 and 0.73 for semiautomated segmentation, and 0.91 and 0.75 for manual segmentation, respectively. Mean DSI between the automated and semiautomated method were 0.83 and 0.58 pre-CRT and post-CRT, respectively; DSI between the automated and manual segmentation were 0.68 and 0.42 and 0.70 and 0.41 between the semiautomated and manual segmentation, respectively. Median measurement time for the radiologists was 0 seconds (pre- and post-CRT) for the automated method, 41 to 69 seconds (pre-CRT) and 60 to 67 seconds (post-CRT) for the semiautomated method, and 180 to 296 seconds (pre-CRT) and 84 to 91 seconds (post-CRT) for the manual method. DWI volumetry using a semiautomated segmentation approach is promising and a potentially time-saving alternative to manual tumor delineation, particularly for primary tumor volumetry. Once further optimized, it could be a helpful tool for tumor response assessment in rectal cancer. Copyright © 2016 Elsevier Inc. All rights reserved.
Age-Related Differences and Heritability of the Perisylvian Language Networks.
Budisavljevic, Sanja; Dell'Acqua, Flavio; Rijsdijk, Frühling V; Kane, Fergus; Picchioni, Marco; McGuire, Philip; Toulopoulou, Timothea; Georgiades, Anna; Kalidindi, Sridevi; Kravariti, Eugenia; Murray, Robin M; Murphy, Declan G; Craig, Michael C; Catani, Marco
2015-09-16
Acquisition of language skills depends on the progressive maturation of specialized brain networks that are usually lateralized in adult population. However, how genetic and environmental factors relate to the age-related differences in lateralization of these language pathways is still not known. We recruited 101 healthy right-handed subjects aged 9-40 years to investigate age-related differences in the anatomy of perisylvian language pathways and 86 adult twins (52 monozygotic and 34 dizygotic) to understand how heritability factors influence language anatomy. Diffusion tractography was used to dissect and extract indirect volume measures from the three segments of the arcuate fasciculus connecting Wernicke's to Broca's region (i.e., long segment), Broca's to Geschwind's region (i.e., anterior segment), and Wernicke's to Geschwind's region (i.e., posterior segment). We found that the long and anterior arcuate segments are lateralized before adolescence and their lateralization remains stable throughout adolescence and early adulthood. Conversely, the posterior segment shows right lateralization in childhood but becomes progressively bilateral during adolescence, driven by a reduction in volume in the right hemisphere. Analysis of the twin sample showed that genetic and shared environmental factors influence the anatomy of those segments that lateralize earlier, whereas specific environmental effects drive the variability in the volume of the posterior segment that continues to change in adolescence and adulthood. Our results suggest that the age-related differences in the lateralization of the language perisylvian pathways are related to the relative contribution of genetic and environmental effects specific to each segment. Our study shows that, by early childhood, frontotemporal (long segment) and frontoparietal (anterior segment) connections of the arcuate fasciculus are left and right lateralized, respectively, and remain lateralized throughout adolescence and early adulthood. In contrast, temporoparietal (posterior segment) connections are right lateralized in childhood, but become progressively bilateral during adolescence. Preliminary twin analysis suggested that lateralization of the arcuate fasciculus is a heterogeneous process that depends on the interplay between genetic and environment factors specific to each segment. Tracts that exhibit higher age effects later in life (i.e., posterior segment) appear to be influenced more by specific environmental factors. Copyright © 2015 Budisavljevic et al.
Age-Related Differences and Heritability of the Perisylvian Language Networks
Dell'Acqua, Flavio; Rijsdijk, Frühling V.; Kane, Fergus; Picchioni, Marco; McGuire, Philip; Toulopoulou, Timothea; Georgiades, Anna; Kalidindi, Sridevi; Kravariti, Eugenia; Murray, Robin M.; Murphy, Declan G.; Craig, Michael C.
2015-01-01
Acquisition of language skills depends on the progressive maturation of specialized brain networks that are usually lateralized in adult population. However, how genetic and environmental factors relate to the age-related differences in lateralization of these language pathways is still not known. We recruited 101 healthy right-handed subjects aged 9–40 years to investigate age-related differences in the anatomy of perisylvian language pathways and 86 adult twins (52 monozygotic and 34 dizygotic) to understand how heritability factors influence language anatomy. Diffusion tractography was used to dissect and extract indirect volume measures from the three segments of the arcuate fasciculus connecting Wernicke's to Broca's region (i.e., long segment), Broca's to Geschwind's region (i.e., anterior segment), and Wernicke's to Geschwind's region (i.e., posterior segment). We found that the long and anterior arcuate segments are lateralized before adolescence and their lateralization remains stable throughout adolescence and early adulthood. Conversely, the posterior segment shows right lateralization in childhood but becomes progressively bilateral during adolescence, driven by a reduction in volume in the right hemisphere. Analysis of the twin sample showed that genetic and shared environmental factors influence the anatomy of those segments that lateralize earlier, whereas specific environmental effects drive the variability in the volume of the posterior segment that continues to change in adolescence and adulthood. Our results suggest that the age-related differences in the lateralization of the language perisylvian pathways are related to the relative contribution of genetic and environmental effects specific to each segment. SIGNIFICANCE STATEMENT Our study shows that, by early childhood, frontotemporal (long segment) and frontoparietal (anterior segment) connections of the arcuate fasciculus are left and right lateralized, respectively, and remain lateralized throughout adolescence and early adulthood. In contrast, temporoparietal (posterior segment) connections are right lateralized in childhood, but become progressively bilateral during adolescence. Preliminary twin analysis suggested that lateralization of the arcuate fasciculus is a heterogeneous process that depends on the interplay between genetic and environment factors specific to each segment. Tracts that exhibit higher age effects later in life (i.e., posterior segment) appear to be influenced more by specific environmental factors. PMID:26377454
Characteristics of a dynamic holographic sensor for shape control of a large reflector
NASA Technical Reports Server (NTRS)
Welch, Sharon S.; Cox, David E.
1991-01-01
Design of a distributed holographic interferometric sensor for measuring the surface displacement of a large segmented reflector is proposed. The reflector's surface is illuminated by laser light of two wavelengths and volume holographic gratings are formed in photorefractive crystals of the wavefront returned from the surface. The sensor is based on holographic contouring with a multiple frequency source. It is shown that the most stringent requirement of temporal stability affects both the temporal resolution and the dynamic range. Principal factor which limit the sensor performance include the response time of photorefractive crystal, laser power required to write a hologram, and the size of photorefractive crystal.
Automated measurement of uptake in cerebellum, liver, and aortic arch in full-body FDG PET/CT scans.
Bauer, Christian; Sun, Shanhui; Sun, Wenqing; Otis, Justin; Wallace, Audrey; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M; Beichel, Reinhard R
2012-06-01
The purpose of this work was to develop and validate fully automated methods for uptake measurement of cerebellum, liver, and aortic arch in full-body PET/CT scans. Such measurements are of interest in the context of uptake normalization for quantitative assessment of metabolic activity and/or automated image quality control. Cerebellum, liver, and aortic arch regions were segmented with different automated approaches. Cerebella were segmented in PET volumes by means of a robust active shape model (ASM) based method. For liver segmentation, a largest possible hyperellipsoid was fitted to the liver in PET scans. The aortic arch was first segmented in CT images of a PET/CT scan by a tubular structure analysis approach, and the segmented result was then mapped to the corresponding PET scan. For each of the segmented structures, the average standardized uptake value (SUV) was calculated. To generate an independent reference standard for method validation, expert image analysts were asked to segment several cross sections of each of the three structures in 134 F-18 fluorodeoxyglucose (FDG) PET/CT scans. For each case, the true average SUV was estimated by utilizing statistical models and served as the independent reference standard. For automated aorta and liver SUV measurements, no statistically significant scale or shift differences were observed between automated results and the independent standard. In the case of the cerebellum, the scale and shift were not significantly different, if measured in the same cross sections that were utilized for generating the reference. In contrast, automated results were scaled 5% lower on average although not shifted, if FDG uptake was calculated from the whole segmented cerebellum volume. The estimated reduction in total SUV measurement error ranged between 54.7% and 99.2%, and the reduction was found to be statistically significant for cerebellum and aortic arch. With the proposed methods, the authors have demonstrated that automated SUV uptake measurements in cerebellum, liver, and aortic arch agree with expert-defined independent standards. The proposed methods were found to be accurate and showed less intra- and interobserver variability, compared to manual analysis. The approach provides an alternative to manual uptake quantification, which is time-consuming. Such an approach will be important for application of quantitative PET imaging to large scale clinical trials. © 2012 American Association of Physicists in Medicine.
Automated detection and segmentation of follicles in 3D ultrasound for assisted reproduction
NASA Astrophysics Data System (ADS)
Narayan, Nikhil S.; Sivanandan, Srinivasan; Kudavelly, Srinivas; Patwardhan, Kedar A.; Ramaraju, G. A.
2018-02-01
Follicle quantification refers to the computation of the number and size of follicles in 3D ultrasound volumes of the ovary. This is one of the key factors in determining hormonal dosage during female infertility treatments. In this paper, we propose an automated algorithm to detect and segment follicles in 3D ultrasound volumes of the ovary for quantification. In a first of its kind attempt, we employ noise-robust phase symmetry feature maps as likelihood function to perform mean-shift based follicle center detection. Max-flow algorithm is used for segmentation and gray weighted distance transform is employed for post-processing the results. We have obtained state-of-the-art results with a true positive detection rate of >90% on 26 3D volumes with 323 follicles.
Ma, Zelan; Chen, Xin; Huang, Yanqi; He, Lan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi
2015-01-01
Accurate and repeatable measurement of the gross tumour volume(GTV) of subcutaneous xenografts is crucial in the evaluation of anti-tumour therapy. Formula and image-based manual segmentation methods are commonly used for GTV measurement but are hindered by low accuracy and reproducibility. 3D Slicer is open-source software that provides semiautomatic segmentation for GTV measurements. In our study, subcutaneous GTVs from nude mouse xenografts were measured by semiautomatic segmentation with 3D Slicer based on morphological magnetic resonance imaging(mMRI) or diffusion-weighted imaging(DWI)(b = 0,20,800 s/mm2) . These GTVs were then compared with those obtained via the formula and image-based manual segmentation methods with ITK software using the true tumour volume as the standard reference. The effects of tumour size and shape on GTVs measurements were also investigated. Our results showed that, when compared with the true tumour volume, segmentation for DWI(P = 0.060–0.671) resulted in better accuracy than that mMRI(P < 0.001) and the formula method(P < 0.001). Furthermore, semiautomatic segmentation for DWI(intraclass correlation coefficient, ICC = 0.9999) resulted in higher reliability than manual segmentation(ICC = 0.9996–0.9998). Tumour size and shape had no effects on GTV measurement across all methods. Therefore, DWI-based semiautomatic segmentation, which is accurate and reproducible and also provides biological information, is the optimal GTV measurement method in the assessment of anti-tumour treatments. PMID:26489359
NASA Astrophysics Data System (ADS)
Heydarian, Mohammadreza; Kirby, Miranda; Wheatley, Andrew; Fenster, Aaron; Parraga, Grace
2012-03-01
A semi-automated method for generating hyperpolarized helium-3 (3He) measurements of individual slice (2D) or whole lung (3D) gas distribution was developed. 3He MRI functional images were segmented using two-dimensional (2D) and three-dimensional (3D) hierarchical K-means clustering of the 3He MRI signal and in addition a seeded region-growing algorithm was employed for segmentation of the 1H MRI thoracic cavity volume. 3He MRI pulmonary function measurements were generated following two-dimensional landmark-based non-rigid registration of the 3He and 1H pulmonary images. We applied this method to MRI of healthy subjects and subjects with chronic obstructive lung disease (COPD). The results of hierarchical K-means 2D and 3D segmentation were compared to an expert observer's manual segmentation results using linear regression, Pearson correlations and the Dice similarity coefficient. 2D hierarchical K-means segmentation of ventilation volume (VV) and ventilation defect volume (VDV) was strongly and significantly correlated with manual measurements (VV: r=0.98, p<.0001 VDV: r=0.97, p<.0001) and mean Dice coefficients were greater than 92% for all subjects. 3D hierarchical K-means segmentation of VV and VDV was also strongly and significantly correlated with manual measurements (VV: r=0.98, p<.0001 VDV: r=0.64, p<.0001) and the mean Dice coefficients were greater than 91% for all subjects. Both 2D and 3D semi-automated segmentation of 3He MRI gas distribution provides a way to generate novel pulmonary function measurements.
NASA Astrophysics Data System (ADS)
Lenkiewicz, Przemyslaw; Pereira, Manuela; Freire, Mário M.; Fernandes, José
2013-12-01
In this article, we propose a novel image segmentation method called the whole mesh deformation (WMD) model, which aims at addressing the problems of modern medical imaging. Such problems have raised from the combination of several factors: (1) significant growth of medical image volumes sizes due to increasing capabilities of medical acquisition devices; (2) the will to increase the complexity of image processing algorithms in order to explore new functionality; (3) change in processor development and turn towards multi processing units instead of growing bus speeds and the number of operations per second of a single processing unit. Our solution is based on the concept of deformable models and is characterized by a very effective and precise segmentation capability. The proposed WMD model uses a volumetric mesh instead of a contour or a surface to represent the segmented shapes of interest, which allows exploiting more information in the image and obtaining results in shorter times, independently of image contents. The model also offers a good ability for topology changes and allows effective parallelization of workflow, which makes it a very good choice for large datasets. We present a precise model description, followed by experiments on artificial images and real medical data.
Reducing Actuator Requirements in Continuum Robots Through Optimized Cable Routing.
Case, Jennifer C; White, Edward L; SunSpiral, Vytas; Kramer-Bottiglio, Rebecca
2018-02-01
Continuum manipulators offer many advantages compared to their rigid-linked counterparts, such as increased degrees of freedom and workspace volume. Inspired by biological systems, such as elephant trunks and octopus tentacles, many continuum manipulators are made of multiple segments that allow large-scale deformations to be distributed throughout the body. Most continuum manipulators currently control each segment individually. For example, a planar cable-driven system is typically controlled by a pair of cables for each segment, which implies two actuators per segment. In this article, we demonstrate how highly coupled crossing cable configurations can reduce both actuator count and actuator torque requirements in a planar continuum manipulator, while maintaining workspace reachability and manipulability. We achieve highly coupled actuation by allowing cables to cross through the manipulator to create new cable configurations. We further derive an analytical model to predict the underactuated manipulator workspace and experimentally verify the model accuracy with a physical system. We use this model to compare crossing cable configurations to the traditional cable configuration using workspace performance metrics. Our work here focuses on a simplified planar robot, both in simulation and in hardware, with the goal of extending this to spiraling-cable configurations on full 3D continuum robots in future work.
Rios Piedra, Edgar A; Taira, Ricky K; El-Saden, Suzie; Ellingson, Benjamin M; Bui, Alex A T; Hsu, William
2016-02-01
Brain tumor analysis is moving towards volumetric assessment of magnetic resonance imaging (MRI), providing a more precise description of disease progression to better inform clinical decision-making and treatment planning. While a multitude of segmentation approaches exist, inherent variability in the results of these algorithms may incorrectly indicate changes in tumor volume. In this work, we present a systematic approach to characterize variability in tumor boundaries that utilizes equivalence tests as a means to determine whether a tumor volume has significantly changed over time. To demonstrate these concepts, 32 MRI studies from 8 patients were segmented using four different approaches (statistical classifier, region-based, edge-based, knowledge-based) to generate different regions of interest representing tumor extent. We showed that across all studies, the average Dice coefficient for the superset of the different methods was 0.754 (95% confidence interval 0.701-0.808) when compared to a reference standard. We illustrate how variability obtained by different segmentations can be used to identify significant changes in tumor volume between sequential time points. Our study demonstrates that variability is an inherent part of interpreting tumor segmentation results and should be considered as part of the interpretation process.
Wang, Jinke; Guo, Haoyan
2016-01-01
This paper presents a fully automatic framework for lung segmentation, in which juxta-pleural nodule problem is brought into strong focus. The proposed scheme consists of three phases: skin boundary detection, rough segmentation of lung contour, and pulmonary parenchyma refinement. Firstly, chest skin boundary is extracted through image aligning, morphology operation, and connective region analysis. Secondly, diagonal-based border tracing is implemented for lung contour segmentation, with maximum cost path algorithm used for separating the left and right lungs. Finally, by arc-based border smoothing and concave-based border correction, the refined pulmonary parenchyma is obtained. The proposed scheme is evaluated on 45 volumes of chest scans, with volume difference (VD) 11.15 ± 69.63 cm 3 , volume overlap error (VOE) 3.5057 ± 1.3719%, average surface distance (ASD) 0.7917 ± 0.2741 mm, root mean square distance (RMSD) 1.6957 ± 0.6568 mm, maximum symmetric absolute surface distance (MSD) 21.3430 ± 8.1743 mm, and average time-cost 2 seconds per image. The preliminary results on accuracy and complexity prove that our scheme is a promising tool for lung segmentation with juxta-pleural nodules.
Hoyng, Lieke L; Frings, Virginie; Hoekstra, Otto S; Kenny, Laura M; Aboagye, Eric O; Boellaard, Ronald
2015-01-01
Positron emission tomography (PET) with (18)F-3'-deoxy-3'-fluorothymidine ([(18)F]FLT) can be used to assess tumour proliferation. A kinetic-filtering (KF) classification algorithm has been suggested for segmentation of tumours in dynamic [(18)F]FLT PET data. The aim of the present study was to evaluate KF segmentation and its test-retest performance in [(18)F]FLT PET in non-small cell lung cancer (NSCLC) patients. Nine NSCLC patients underwent two 60-min dynamic [(18)F]FLT PET scans within 7 days prior to treatment. Dynamic scans were reconstructed with filtered back projection (FBP) as well as with ordered subsets expectation maximisation (OSEM). Twenty-eight lesions were identified by an experienced physician. Segmentation was performed using KF applied to the dynamic data set and a source-to-background corrected 50% threshold (A50%) was applied to the sum image of the last three frames (45- to 60-min p.i.). Furthermore, several adaptations of KF were tested. Both for KF and A50% test-retest (TRT) variability of metabolically active tumour volume and standard uptake value (SUV) were evaluated. KF performed better on OSEM- than on FBP-reconstructed PET images. The original KF implementation segmented 15 out of 28 lesions, whereas A50% segmented each lesion. Adapted KF versions, however, were able to segment 26 out of 28 lesions. In the best performing adapted versions, metabolically active tumour volume and SUV TRT variability was similar to those of A50%. KF misclassified certain tumour areas as vertebrae or liver tissue, which was shown to be related to heterogeneous [(18)F]FLT uptake areas within the tumour. For [(18)F]FLT PET studies in NSCLC patients, KF and A50% show comparable tumour volume segmentation performance. The KF method needs, however, a site-specific optimisation. The A50% is therefore a good alternative for tumour segmentation in NSCLC [(18)F]FLT PET studies in multicentre studies. Yet, it was observed that KF has the potential to subsegment lesions in high and low proliferative areas.
Automated measurements of metabolic tumor volume and metabolic parameters in lung PET/CT imaging
NASA Astrophysics Data System (ADS)
Orologas, F.; Saitis, P.; Kallergi, M.
2017-11-01
Patients with lung tumors or inflammatory lung disease could greatly benefit in terms of treatment and follow-up by PET/CT quantitative imaging, namely measurements of metabolic tumor volume (MTV), standardized uptake values (SUVs) and total lesion glycolysis (TLG). The purpose of this study was the development of an unsupervised or partially supervised algorithm using standard image processing tools for measuring MTV, SUV, and TLG from lung PET/CT scans. Automated metabolic lesion volume and metabolic parameter measurements were achieved through a 5 step algorithm: (i) The segmentation of the lung areas on the CT slices, (ii) the registration of the CT segmented lung regions on the PET images to define the anatomical boundaries of the lungs on the functional data, (iii) the segmentation of the regions of interest (ROIs) on the PET images based on adaptive thresholding and clinical criteria, (iv) the estimation of the number of pixels and pixel intensities in the PET slices of the segmented ROIs, (v) the estimation of MTV, SUVs, and TLG from the previous step and DICOM header data. Whole body PET/CT scans of patients with sarcoidosis were used for training and testing the algorithm. Lung area segmentation on the CT slices was better achieved with semi-supervised techniques that reduced false positive detections significantly. Lung segmentation results agreed with the lung volumes published in the literature while the agreement between experts and algorithm in the segmentation of the lesions was around 88%. Segmentation results depended on the image resolution selected for processing. The clinical parameters, SUV (either mean or max or peak) and TLG estimated by the segmented ROIs and DICOM header data provided a way to correlate imaging data to clinical and demographic data. In conclusion, automated MTV, SUV, and TLG measurements offer powerful analysis tools in PET/CT imaging of the lungs. Custom-made algorithms are often a better approach than the manufacturer’s general analysis software at much lower cost. Relatively simple processing techniques could lead to customized, unsupervised or partially supervised methods that can successfully perform the desirable analysis and adapt to the specific disease requirements.
A low-cost three-dimensional laser surface scanning approach for defining body segment parameters.
Pandis, Petros; Bull, Anthony Mj
2017-11-01
Body segment parameters are used in many different applications in ergonomics as well as in dynamic modelling of the musculoskeletal system. Body segment parameters can be defined using different methods, including techniques that involve time-consuming manual measurements of the human body, used in conjunction with models or equations. In this study, a scanning technique for measuring subject-specific body segment parameters in an easy, fast, accurate and low-cost way was developed and validated. The scanner can obtain the body segment parameters in a single scanning operation, which takes between 8 and 10 s. The results obtained with the system show a standard deviation of 2.5% in volumetric measurements of the upper limb of a mannequin and 3.1% difference between scanning volume and actual volume. Finally, the maximum mean error for the moment of inertia by scanning a standard-sized homogeneous object was 2.2%. This study shows that a low-cost system can provide quick and accurate subject-specific body segment parameter estimates.
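For a voxelised (or otherwise discretised) scan of a body segment, the segment parameters follow from straightforward sums, assuming a uniform tissue density. The snippet below is a minimal sketch under that assumption and is not the authors' processing pipeline; cubic voxels and the density value are illustrative.

```python
import numpy as np

def segment_parameters(mask, voxel_size_m, density_kg_m3=1000.0):
    """Volume, mass, centre of mass, and inertia tensor of a voxelised
    body segment, assuming uniform density (illustrative sketch)."""
    dv = voxel_size_m ** 3
    pts = np.argwhere(mask) * voxel_size_m          # voxel centre coordinates (m)
    volume = mask.sum() * dv
    mass = density_kg_m3 * volume
    com = pts.mean(axis=0)
    r = pts - com
    dm = density_kg_m3 * dv                         # mass per voxel
    x, y, z = r[:, 0], r[:, 1], r[:, 2]
    # inertia tensor about the centre of mass
    inertia = dm * np.array([
        [np.sum(y**2 + z**2), -np.sum(x * y),       -np.sum(x * z)],
        [-np.sum(x * y),       np.sum(x**2 + z**2), -np.sum(y * z)],
        [-np.sum(x * z),      -np.sum(y * z),        np.sum(x**2 + y**2)],
    ])
    return volume, mass, com, inertia
```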
Nielsen, Flemming K; Egund, Niels; Jørgensen, Anette; Peters, David A; Jurik, Anne Grethe
2016-11-16
Bone marrow lesions (BMLs) in knee osteoarthritis (OA) can be assessed using fluid sensitive and contrast enhanced sequences. The association between BMLs and symptoms has been investigated in several studies but only using fluid sensitive sequences. Our aims were to assess BMLs by contrast enhanced MRI sequences in comparison with a fluid sensitive STIR sequence using two different segmentation methods and to analyze the association between the MR findings and disability and pain. Twenty-two patients (mean age 61 years, range 41-79 years) with medial femoro-tibial knee OA underwent MRI and completed a WOMAC questionnaire at baseline and follow-up (median interval of 334 days). STIR, dynamic contrast enhanced-MRI (DCE-MRI) and fat saturated T1 post-contrast (T1 CE FS) MRI sequences were obtained. All STIR and T1 CE FS sequences were assessed independently by two readers for STIR-BMLs and contrast enhancing areas of BMLs (CEA-BMLs) using manual segmentation and computer assisted segmentation, and the measurements were compared. DCE-MRIs were assessed for the relative distribution of voxels with an inflammatory enhancement pattern, Nvoxel, in the bone marrow. All findings were compared to WOMAC scores, including pain and overall symptoms, and changes from baseline to follow-up were analyzed. The average volume of CEA-BML was smaller than the STIR-BML volume by manual segmentation. The opposite was found for computer assisted segmentation, where the average CEA-BML volume was larger than the STIR-BML volume. The contradictory finding by computer assisted segmentation was partly caused by a number of outliers with an apparent generally increased signal intensity in the anterior parts of the femoral condyle and tibial plateau causing an overestimation of the CEA-BML volume. CEA-BML, STIR-BML, and Nvoxel were all significantly correlated with symptoms, and to a similar degree. A significant reduction in total WOMAC score was seen at follow-up, but no significant changes were observed for CEA-BML, STIR-BML, or Nvoxel. Neither the degree nor the volume of contrast enhancement in BMLs seems to add any clinical information compared to BMLs visualized by fluid sensitive sequences. Manual segmentation may be needed to obtain valid CEA-BML measurements.
Sweeney, Elizabeth M; Shinohara, Russell T; Shiee, Navid; Mateen, Farrah J; Chudgar, Avni A; Cuzzocreo, Jennifer L; Calabresi, Peter A; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M
2013-01-01
Magnetic resonance imaging (MRI) can be used to detect lesions in the brains of multiple sclerosis (MS) patients and is essential for diagnosing the disease and monitoring its progression. In practice, lesion load is often quantified by either manual or semi-automated segmentation of MRI, which is time-consuming, costly, and associated with large inter- and intra-observer variability. We propose Automated Statistical Inference for Segmentation (OASIS), an automated statistical method for segmenting MS lesions in MRI studies. We use logistic regression models incorporating multiple MRI modalities to estimate voxel-level probabilities of lesion presence. Intensity-normalized T1-weighted, T2-weighted, fluid-attenuated inversion recovery and proton density volumes from 131 MRI studies (98 MS subjects, 33 healthy subjects) with manual lesion segmentations were used to train and validate our model. Within this set, OASIS detected lesions with a partial area under the receiver operating characteristic curve of 0.59% (95% CI: [0.50%, 0.67%]) at the voxel level, for clinically relevant false positive rates of 1% and below. An experienced MS neuroradiologist compared these segmentations to those produced by LesionTOADS, an image segmentation software that provides segmentation of both lesions and normal brain structures. For lesions, OASIS out-performed LesionTOADS in 74% (95% CI: [65%, 82%]) of cases for the 98 MS subjects. To further validate the method, we applied OASIS to 169 MRI studies acquired at a separate center. The neuroradiologist again compared the OASIS segmentations to those from LesionTOADS. For lesions, OASIS ranked higher than LesionTOADS in 77% (95% CI: [71%, 83%]) of cases. For a randomly selected subset of 50 of these studies, one additional radiologist and one neurologist also scored the images. Within this set, the neuroradiologist ranked OASIS higher than LesionTOADS in 76% (95% CI: [64%, 88%]) of cases, the neurologist in 66% (95% CI: [52%, 78%]), and the radiologist in 52% (95% CI: [38%, 66%]). OASIS obtains the estimated probability for each voxel to be part of a lesion by weighting each imaging modality with coefficient weights. These coefficients are explicit, obtained using standard model fitting techniques, and can be reused in other imaging studies. This fully automated method allows sensitive and specific detection of lesion presence and may be rapidly applied to large collections of images.
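The core of such a voxel-level model can be sketched with an off-the-shelf logistic regression, one covariate per intensity-normalised modality. This is only a simplified stand-in for OASIS, which also uses smoothed covariates and interaction terms; the function and variable names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_voxel_lesion_model(t1, t2, flair, pd, lesion_mask, brain_mask):
    """Fit a voxel-wise logistic regression for lesion probability (sketch)."""
    X = np.column_stack([m[brain_mask] for m in (t1, t2, flair, pd)])
    y = lesion_mask[brain_mask].astype(int)
    return LogisticRegression(max_iter=1000).fit(X, y)

def predict_lesion_probability(model, t1, t2, flair, pd, brain_mask):
    """Map the fitted model back onto the image grid as a probability volume."""
    prob = np.zeros(t1.shape)
    X = np.column_stack([m[brain_mask] for m in (t1, t2, flair, pd)])
    prob[brain_mask] = model.predict_proba(X)[:, 1]   # voxel lesion probability
    return prob
```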
Improved and Robust Detection of Cell Nuclei from Four Dimensional Fluorescence Images
Bashar, Md. Khayrul; Yamagata, Kazuo; Kobayashi, Tetsuya J.
2014-01-01
Segmentation-free direct methods are quite efficient for automated nuclei extraction from high dimensional images. A few such methods do exist but most of them do not ensure algorithmic robustness to parameter and noise variations. In this research, we propose a method based on multiscale adaptive filtering for efficient and robust detection of nuclei centroids from four dimensional (4D) fluorescence images. A temporal feedback mechanism is employed between the enhancement and the initial detection steps of a typical direct method. We estimate the minimum and maximum nuclei diameters from the previous frame and feed them back as filter lengths for multiscale enhancement of the current frame. A radial intensity-gradient function is optimized at positions of initial centroids to estimate all nuclei diameters. This procedure continues for processing subsequent images in the sequence. The above mechanism thus ensures proper enhancement by automated estimation of major parameters. This brings robustness and safeguards the system against additive noise and the effects of wrong parameters. Later, the method and its single-scale variant are simplified for further reduction of parameters. The proposed method is then extended for nuclei volume segmentation. The same optimization technique is applied to final centroid positions of the enhanced image and the estimated diameters are projected onto the binary candidate regions to segment nuclei volumes. Our method is finally integrated with a simple sequential tracking approach to establish nuclear trajectories in the 4D space. Experimental evaluations with five image-sequences (each having 271 3D sequential images) corresponding to five different mouse embryos show promising performances of our methods in terms of nuclear detection, segmentation, and tracking. A detailed analysis with a sub-sequence of 101 3D images from an embryo reveals that the proposed method can improve the nuclei detection accuracy by 9% over the previous methods, which used inappropriately large parameter values. Results also confirm that the proposed method and its variants achieve high detection accuracies (~98% mean F-measure) irrespective of the large variations of filter parameters and noise levels. PMID:25020042
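The feedback step can be illustrated as follows: the minimum and maximum nuclei diameters estimated on the previous frame set the scale range of a multiscale Laplacian-of-Gaussian enhancement of the current frame. This is a hedged sketch of the idea, not the authors' filter; the scale-to-diameter conversion assumes an idealised 3D blob, and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def enhance_nuclei(volume, d_min, d_max, n_scales=4):
    """Multiscale blob enhancement with filter lengths fed back from the
    previous frame's estimated nuclei diameters (illustrative).

    For an ideal 3D blob of diameter d, the LoG response peaks near
    sigma ~ d / (2 * sqrt(3)); the scale range spans [d_min, d_max].
    """
    sigmas = np.linspace(d_min, d_max, n_scales) / (2.0 * np.sqrt(3.0))
    responses = [-(s ** 2) * gaussian_laplace(volume.astype(float), s)
                 for s in sigmas]                     # scale-normalised responses
    return np.max(responses, axis=0)                  # best response per voxel
```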
Bridging stylized facts in finance and data non-stationarities
NASA Astrophysics Data System (ADS)
Camargo, Sabrina; Duarte Queirós, Sílvio M.; Anteneodo, Celia
2013-04-01
Employing a recent technique which allows the representation of nonstationary data by means of a juxtaposition of locally stationary patches of different length, we introduce a comprehensive analysis of the key observables in a financial market: the trading volume and the price fluctuations. From the segmentation procedure we are able to introduce a quantitative description of statistical features of these two quantities, which are often named stylized facts, namely the tails of the distribution of trading volume and price fluctuations and a dynamics compatible with the U-shaped profile of the volume in a trading session and the slow decay of the autocorrelation function. The segmentation of the trading volume series provides evidence of slow evolution of the fluctuating parameters of each patch, pointing to the mixing scenario. Assuming that long-term features are the outcome of a statistical mixture of simple local forms, we test and compare different probability density functions to provide the long-term distribution of the trading volume, concluding that the log-normal gives the best agreement with the empirical distribution. Moreover, the segmentation results for the magnitude of the price fluctuations are quite different from those for the trading volume, indicating that changes in the statistics of price fluctuations occur at a faster scale than in the case of trading volume.
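Testing candidate long-term distributions of trading volume, as described above, can be done with standard maximum-likelihood fits and a goodness-of-fit statistic. The snippet below is a generic sketch (log-normal versus gamma, compared through a Kolmogorov-Smirnov statistic), not the comparison protocol used in the paper.

```python
import numpy as np
from scipy import stats

def compare_volume_distributions(volume):
    """Fit candidate long-term distributions to a trading-volume series and
    compare them with a Kolmogorov-Smirnov statistic (illustrative)."""
    v = np.asarray(volume, dtype=float)
    v = v[v > 0]                                            # volumes are positive

    shape, loc, scale = stats.lognorm.fit(v, floc=0)        # log-normal candidate
    ks_logn = stats.kstest(v, 'lognorm', args=(shape, loc, scale)).statistic

    a, loc_g, scale_g = stats.gamma.fit(v, floc=0)          # gamma candidate
    ks_gamma = stats.kstest(v, 'gamma', args=(a, loc_g, scale_g)).statistic

    return {"lognorm_ks": ks_logn, "gamma_ks": ks_gamma}    # smaller is better
```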
DIVWAG Model Documentation. Volume II. Programmer/Analyst Manual. Part 3. Chapter 9 Through 12.
1976-07-01
entered through a routine, NAM2, that calls the segment controlling routine NBARAS. (4) Segment 3, controlled by the routine NFIRE , simulates round...nuclear fire, NAM calls in sequence the routines NFIRE (segment 3), ASUNIT (segment 2), SASSMT (segment 4), and NFIRE (segment 3). These calls simulate...this is a call to NFIRE (ISEG equals one or two), control goes to block L2. (2) Block 2. If this is to assess a unit passing through a nuclear barrier
Lung lobe modeling and segmentation with individualized surface meshes
NASA Astrophysics Data System (ADS)
Blaffert, Thomas; Barschdorf, Hans; von Berg, Jens; Dries, Sebastian; Franz, Astrid; Klinder, Tobias; Lorenz, Cristian; Renisch, Steffen; Wiemker, Rafael
2008-03-01
An automated segmentation of lung lobes in thoracic CT images is of interest for various diagnostic purposes like the quantification of emphysema or the localization of tumors within the lung. Although the separating lung fissures are visible in modern multi-slice CT-scanners, their contrast in the CT-image often does not separate the lobes completely. This makes it impossible to build a reliable segmentation algorithm without additional information. Our approach uses general anatomical knowledge represented in a geometrical mesh model to construct a robust lobe segmentation, which even gives reasonable estimates of lobe volumes if fissures are not visible at all. The paper describes the generation of the lung model mesh including lobes by an average volume model, its adaptation to individual patient data using a special fissure feature image, and a performance evaluation over a test data set showing an average segmentation accuracy of 1 to 3 mm.
3D Volumetric Analysis of Fluid Inclusions Using Confocal Microscopy
NASA Astrophysics Data System (ADS)
Proussevitch, A.; Mulukutla, G.; Sahagian, D.; Bodnar, B.
2009-05-01
Fluid inclusions preserve valuable information regarding hydrothermal, metamorphic, and magmatic processes. The molar quantities of liquid and gaseous components in the inclusions can be estimated from their volumetric measurements at room temperatures combined with knowledge of the PVTX properties of the fluid and homogenization temperatures. Thus, accurate measurements of inclusion volumes and their two-phase components are critical. One of the greatest advantages of Laser Scanning Confocal Microscopy (LSCM) in application to fluid inclusion analysis is that it is affordable for large numbers of samples, given the appropriate software analysis tools and methodology. Our present work is directed toward developing those tools and methods. For the last decade LSCM has been considered as a potential method for inclusion volume measurements. Nevertheless, the adequate and accurate measurement by LSCM has not yet been successful for fluid inclusions containing non-fluorescing fluids due to many technical challenges in image analysis despite the fact that the cost of collecting raw LSCM imagery has dramatically decreased in recent years. These problems mostly relate to image analysis methodology and software tools that are needed for pre-processing and image segmentation, which enable solid, liquid and gaseous components to be delineated. Other challenges involve image quality and contrast, which is controlled by fluorescence of the material (most aqueous fluid inclusions do not fluoresce at the appropriate laser wavelengths), material optical properties, and application of transmitted and/or reflected confocal illumination. In this work we have identified the key problems of image analysis and propose some potential solutions. For instance, we found that better-contrast pseudo-confocal transmitted light images could be overlaid with poor-contrast true-confocal reflected light images within the same stack of z-ordered slices. This approach allows one to narrow the interface boundaries between the phases before the application of segmentation routines. In turn, we found that an active contour segmentation technique works best for these types of geomaterials. The method was developed by adapting a medical software package implemented using the Insight Toolkit (ITK) set of algorithms developed for segmentation of anatomical structures. We have developed a manual analysis procedure with the potential of 2 micron resolution in 3D volume rendering that is specifically designed for application to fluid inclusion volume measurements.
Tsai, Wen-Ting; Hassan, Ahmed; Sarkar, Purbasha; Correa, Joaquin; Metlagel, Zoltan; Jorgens, Danielle M.; Auer, Manfred
2014-01-01
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets. PMID:25145678
Geiger, Daniel; Bae, Won C.; Statum, Sheronda; Du, Jiang; Chung, Christine B.
2014-01-01
Objective Temporomandibular dysfunction involves osteoarthritis of the TMJ, including degeneration and morphologic changes of the mandibular condyle. The purpose of this study was to determine the accuracy of novel 3D-UTE MRI versus micro-CT (μCT) for quantitative evaluation of mandibular condyle morphology. Material & Methods Nine TMJ condyle specimens were harvested from cadavers (2M, 3F; Age 85 ± 10 yrs., mean±SD). 3D-UTE MRI (TR=50ms, TE=0.05 ms, 104 μm isotropic-voxel) was performed using a 3-T MR scanner and μCT (18 μm isotropic-voxel) was performed. MR datasets were spatially-registered with μCT dataset. Two observers segmented bony contours of the condyles. Fibrocartilage was segmented on MR dataset. Using a custom program, bone and fibrocartilage surface coordinates, Gaussian curvature, volume of segmented regions and fibrocartilage thickness were determined for quantitative evaluation of joint morphology. Agreement between techniques (MRI vs. μCT) and observers (MRI vs. MRI) for Gaussian curvature, mean curvature and segmented volume of the bone was determined using intraclass correlation coefficient (ICC) analyses. Results Between MRI and μCT, the average deviation of surface coordinates was 0.19±0.15 mm, slightly higher than spatial resolution of MRI. Average deviation of the Gaussian curvature and volume of segmented regions, from MRI to μCT, was 5.7±6.5% and 6.6±6.2%, respectively. ICC coefficients (MRI vs. μCT) for Gaussian curvature, mean curvature and segmented volumes were respectively 0.892, 0.893 and 0.972. Between observers (MRI vs. MRI), the ICC coefficients were 0.998, 0.999 and 0.997 respectively. Fibrocartilage thickness was 0.55±0.11 mm, as previously described in literature for grossly normal TMJ samples. Conclusion 3D-UTE MR quantitative evaluation of TMJ condyle morphology ex-vivo, including surface, curvature and segmented volume, shows high correlation against μCT and between observers. In addition, UTE MRI allows quantitative evaluation of the fibrocartilaginous condylar component. PMID:24092237
Outdoor recreation activity trends by volume segments: U.S. and Northeast market analyses, 1982-1989
Rodney B. Warnick
1992-01-01
The purpose of this review was to examine volume segmentation within three selected outdoor recreational activities -- swimming, hunting and downhill skiing over an eight-year period, from 1982 through 1989 at the national level and within the Northeast Region of the U.S.; and to determine if trend patterns existed within any of these activities when the market size...
A Biostereometric Approach To The Study Of Infants' And Children's Body Growth
NASA Astrophysics Data System (ADS)
Coblentz, A.; Ignazi, G.
1980-07-01
Studies on the somatic growth of young children have traditionally been made using conventional anthropometry techniques. As a result, while the conditions of growth of morphological variables such as weight or segmental dimensions are well known, the same cannot be said of the more global aspect of the development of the body in a three-dimensional reference space. Yet body volumes and surfaces represent morphological characteristics which are just as necessary for a good understanding of physiological phenomena (thermoregulation, energy balance, etc.) as the conventional linear data. In the course of their research on children's growth in recent years, the authors have found that in none of the studies mentioned in the literature was consideration given to the dynamic aspect of the child's somatic development in a three-dimensional space. A primary reason for such omission is largely to be found in the technical difficulties encountered in the measurement of somatic characteristics such as body volume and surface. Yet, among the several possible methods of study, biostereometry, and particularly the photogrammetric tool, is certainly one of the most rewarding. This being so, the authors propose to use the photogrammetric technique to undertake, in a first stage, a methodological study that will draw up, on a limited sample of infants and young children, the development chart, over a period of time, of the surfaces and volumes of segmental elements. Thus will be checked the relationships between the growth rates of different characteristics: surfaces, volumes, weight, linear dimensions. Quite apart from the intrinsic value of such studies, the data thus collected will eventually provide practitioners, pediatricians and physiologists with the reference records that have so far been lacking.
Schaefer, Pamela W; Souza, Leticia; Kamalian, Shervin; Hirsch, Joshua A; Yoo, Albert J; Kamalian, Shahmir; Gonzalez, R Gilberto; Lev, Michael H
2015-02-01
Diffusion-weighted imaging (DWI) can reliably identify critically ischemic tissue shortly after stroke onset. We tested whether thresholded computed tomographic cerebral blood flow (CT-CBF) and CT-cerebral blood volume (CT-CBV) maps are sufficiently accurate to substitute for DWI for estimating the critically ischemic tissue volume. Ischemic volumes of 55 patients with acute anterior circulation stroke were assessed on DWI by visual segmentation and on CT-CBF and CT-CBV with segmentation using 15% and 30% thresholds, respectively. The contrast:noise ratios of ischemic regions on the DWI and CT perfusion (CTP) images were measured. Correlation and Bland-Altman analyses were used to assess the reliability of CTP. Mean contrast:noise ratios for DWI, CT-CBF, and CT-CBV were 4.3, 0.9, and 0.4, respectively. CTP and DWI lesion volumes were highly correlated (R²=0.87 for CT-CBF; R²=0.83 for CT-CBV; P<0.001). Bland-Altman analyses revealed little systematic bias (-2.6 mL) but high measurement variability (95% confidence interval, ±56.7 mL) between mean CT-CBF and DWI lesion volumes, and systematic bias (-26 mL) and high measurement variability (95% confidence interval, ±64.0 mL) between mean CT-CBV and DWI lesion volumes. A simulated treatment study demonstrated that using CTP-CBF instead of DWI for detecting a statistically significant effect would require at least twice as many patients. The poor contrast:noise ratios of CT-CBV and CT-CBF compared with those of DWI result in large measurement error, making it problematic to substitute CTP for DWI in selecting individual acute stroke patients for treatment. CTP could be used for treatment studies of patient groups, but the number of patients needed to identify a significant effect is much higher than the number needed if DWI is used. © 2014 American Heart Association, Inc.
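The Bland-Altman quantities quoted above (bias and measurement variability) are simple to reproduce for any pair of volume estimates. The following sketch assumes paired per-patient volumes and reports the bias together with the 95% limits of agreement; it is a generic computation, not the study's analysis code.

```python
import numpy as np

def bland_altman(volumes_a, volumes_b):
    """Bias and 95% limits of agreement between two volume estimates
    (illustrative Bland-Altman computation)."""
    a = np.asarray(volumes_a, dtype=float)
    b = np.asarray(volumes_b, dtype=float)
    diff = a - b
    bias = diff.mean()                        # systematic bias
    half_width = 1.96 * diff.std(ddof=1)      # half-width of limits of agreement
    return bias, (bias - half_width, bias + half_width)

# usage: bias, (low, high) = bland_altman(ct_cbf_volumes, dwi_volumes)
```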
A semi-automatic method for left ventricle volume estimate: an in vivo validation study
NASA Technical Reports Server (NTRS)
Corsi, C.; Lamberti, C.; Sarti, A.; Saracino, G.; Shiota, T.; Thomas, J. D.
2001-01-01
This study aims at validating the left ventricular (LV) volume estimates obtained by processing volumetric data utilizing a segmentation model based on the level set technique. The validation has been performed by comparing real-time volumetric echo data (RT3DE) and magnetic resonance (MRI) data. A validation protocol has been defined. The validation protocol was applied to twenty-four estimates (range 61-467 ml) obtained from normal and pathologic subjects who underwent both RT3DE and MRI. A statistical analysis was performed on each estimate and on clinical parameters such as stroke volume (SV) and ejection fraction (EF). Assuming MRI estimates (x) as a reference, an excellent correlation was found with volume measured by utilizing the segmentation procedure (y) (y=0.89x + 13.78, r=0.98). The mean error on SV was 8 ml and the mean error on EF was 2%. This study demonstrated that the segmentation technique is reliably applicable on human hearts in clinical practice.
Kamson, David O.; Juhász, Csaba; Shin, Joseph; Behen, Michael E.; Guy, William C.; Chugani, Harry T.; Jeong, Jeong-Won
2014-01-01
Background Reorganization of the corticospinal tract (CST) after early damage can limit motor deficit. In this study, we explored patterns of structural CST reorganization in children with Sturge-Weber syndrome. Methods Five children (age 1.5-7 years) with motor deficit due to unilateral Sturge-Weber syndrome were studied prospectively and longitudinally (1-2 years follow-up). CST segments belonging to hand and leg movements were separated, and their volume was measured by diffusion tensor imaging (DTI) tractography using a recently validated method. CST segmental volumes were normalized and compared between the SWS children and age-matched healthy controls. Volume changes during follow-up were also compared to clinical motor symptoms. Results In the SWS children, hand-related (but not leg-related) CST volumes were consistently decreased in the affected cerebral hemisphere at baseline. At follow-up, two distinct patterns of hand CST volume changes emerged: (i) Two children with extensive frontal lobe damage showed a CST volume decrease in the lesional hemisphere and a concomitant increase in the non-lesional (contralateral) hemisphere. These children developed good hand grasp but no fine motor skills. (ii) The three other children, with relative sparing of the frontal lobe, showed an interval increase of the normalized hand CST volume in the affected hemisphere; these children showed no gross motor deficit at follow-up. Conclusions DTI tractography can detect differential abnormalities in the hand CST segment both ipsi- and contralateral to the lesion. Interval increase in the CST hand segment suggests structural reorganization, whose pattern may determine clinical motor outcome and could guide strategies for early motor intervention. PMID:24507695
Kamson, David O; Juhász, Csaba; Shin, Joseph; Behen, Michael E; Guy, William C; Chugani, Harry T; Jeong, Jeong-Won
2014-04-01
Reorganization of the corticospinal tract after early damage can limit motor deficit. In this study, we explored patterns of structural corticospinal tract reorganization in children with Sturge-Weber syndrome. Five children (age 1.5-7 years) with motor deficit resulting from unilateral Sturge-Weber syndrome were studied prospectively and longitudinally (1-2 years follow-up). Corticospinal tract segments belonging to hand and leg movements were separated and their volume was measured by diffusion tensor imaging tractography using a recently validated method. Corticospinal tract segmental volumes were normalized and compared between the Sturge-Weber syndrome children and age-matched healthy controls. Volume changes during follow-up were also compared with clinical motor symptoms. In the Sturge-Weber syndrome children, hand-related (but not leg-related) corticospinal tract volumes were consistently decreased in the affected cerebral hemisphere at baseline. At follow-up, two distinct patterns of hand corticospinal tract volume changes emerged. (1) Two children with extensive frontal lobe damage showed a corticospinal tract volume decrease in the lesional hemisphere and a concomitant increase in the nonlesional (contralateral) hemisphere. These children developed good hand grasp but no fine motor skills. (2) The three other children, with relative sparing of the frontal lobe, showed an interval increase of the normalized hand corticospinal tract volume in the affected hemisphere; these children showed no gross motor deficit at follow-up. Diffusion tensor imaging tractography can detect differential abnormalities in the hand corticospinal tract segment both ipsi- and contralateral to the lesion. Interval increase in the corticospinal tract hand segment suggests structural reorganization, whose pattern may determine clinical motor outcome and could guide strategies for early motor intervention. Copyright © 2014 Elsevier Inc. All rights reserved.
Quantitative Neuroimaging Software for Clinical Assessment of Hippocampal Volumes on MR Imaging
Ahdidan, Jamila; Raji, Cyrus A.; DeYoe, Edgar A.; Mathis, Jedidiah; Noe, Karsten Ø.; Rimestad, Jens; Kjeldsen, Thomas K.; Mosegaard, Jesper; Becker, James T.; Lopez, Oscar
2015-01-01
Background: Multiple neurological disorders including Alzheimer’s disease (AD), mesial temporal sclerosis, and mild traumatic brain injury manifest with volume loss on brain MRI. Subtle volume loss is particularly seen early in AD. While prior research has demonstrated the value of this additional information from quantitative neuroimaging, very few applications have been approved for clinical use. Here we describe a US FDA cleared software program, Neuroreader™, for assessment of clinical hippocampal volume on brain MRI. Objective: To present the validation of hippocampal volumetrics on a clinical software program. Method: Subjects were drawn (n = 99) from the Alzheimer Disease Neuroimaging Initiative study. Volumetric brain MR imaging was acquired in both 1.5 T (n = 59) and 3.0 T (n = 40) scanners in participants with manual hippocampal segmentation. Fully automated hippocampal segmentation and measurement was done using a multiple atlas approach. The Dice Similarity Coefficient (DSC) measured the level of spatial overlap between Neuroreader™ and gold standard manual segmentation from 0 to 1 with 0 denoting no overlap and 1 representing complete agreement. DSC comparisons between 1.5 T and 3.0 T scanners were done using standard independent-samples t-tests. Results: In the bilateral hippocampus, mean DSC was 0.87 with a range of 0.78–0.91 (right hippocampus) and 0.76–0.91 (left hippocampus). Automated segmentation agreement with manual segmentation was essentially equivalent at 1.5 T (DSC = 0.879) versus 3.0 T (DSC = 0.872). Conclusion: This work provides a description and validation of a software program that can be applied in measuring hippocampal volume, a biomarker that is frequently abnormal in AD and other neurological disorders. PMID:26484924
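The Dice Similarity Coefficient used for validation here is twice the overlap divided by the sum of the two segmentation sizes. A minimal, generic sketch:

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice Similarity Coefficient between two binary segmentations:
    DSC = 2|A ∩ B| / (|A| + |B|), from 0 (no overlap) to 1 (complete agreement)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                      # both empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```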
Ahlers, C; Simader, C; Geitzenauer, W; Stock, G; Stetson, P; Dastmalchi, S; Schmidt-Erfurth, U
2008-02-01
A limited number of scans compromises the ability of conventional optical coherence tomography (OCT) to track chorioretinal disease in its full extension. Failures in edge-detection algorithms falsify the results of retinal mapping even further. High-definition-OCT (HD-OCT) is based on raster scanning and was used to visualise the localisation and volume of intra- and sub-pigment-epithelial (RPE) changes in fibrovascular pigment epithelial detachments (fPED). Two different scanning patterns were evaluated. 22 eyes with fPED were imaged using a frequency-domain, high-speed prototype of the Cirrus HD-OCT. The axial resolution was 6 μm, and the scanning speed was 25,000 A-scans/s. Two different scanning patterns covering an area of 6 × 6 mm in the macular retina were compared. Three-dimensional topographic reconstructions and volume calculations were performed using MATLAB-based automatic segmentation software. Detailed information about layer-specific distribution of fluid accumulation and volumetric measurements can be obtained for retinal- and sub-RPE volumes. Both raster scans show a high correlation (p < 0.01; R² > 0.89) of measured values, that is, PED volume/area, retinal volume and mean retinal thickness. Quality control of the automatic segmentation revealed reasonable results in over 90% of the examinations. Automatic segmentation allows for detailed quantitative and topographic analysis of the RPE and the overlying retina. In fPED, the 128 × 512 scanning-pattern shows mild advantages when compared with the 256 × 256 scan. Together with the ability for automatic segmentation, HD-OCT clearly improves the clinical monitoring of chorioretinal disease by adding relevant new parameters. HD-OCT is likely capable of enhancing the understanding of pathophysiology and benefits of treatment for current anti-CNV strategies in future.
Keller, Simon S; O'Muircheartaigh, Jonathan; Traynor, Catherine; Towgood, Karren; Barker, Gareth J; Richardson, Mark P
2014-02-01
Thalamic abnormality in temporal lobe epilepsy (TLE) is well known from imaging studies, but evidence is lacking regarding connectivity profiles of the thalamus and their involvement in the disease process. We used a novel multisequence magnetic resonance imaging (MRI) protocol to elucidate the relationship between mesial temporal and thalamic pathology in TLE. For 23 patients with TLE and 23 healthy controls, we performed T1-weighted (for analysis of tissue structure), diffusion tensor imaging (tissue connectivity), and T1 and T2 relaxation (tissue integrity) MRI across the whole brain. We used connectivity-based segmentation to determine connectivity patterns of thalamus to ipsilateral cortical regions (occipital, parietal, prefrontal, postcentral, precentral, and temporal). We subsequently determined volumes, mean tractography streamlines, and mean T1 and T2 relaxometry values for each thalamic segment preferentially connecting to a given cortical region, and of the hippocampus and entorhinal cortex. As expected, patients had significant volume reduction and increased T2 relaxation time in ipsilateral hippocampus and entorhinal cortex. There was bilateral volume loss, mean streamline reduction, and T2 increase of the thalamic segment preferentially connected to temporal lobe, corresponding to anterior, dorsomedial, and pulvinar thalamic regions, with no evidence of significant change in any other thalamic segments. Left and right thalamotemporal segment volume and T2 were significantly correlated with volume and T2 of ipsilateral (epileptogenic), but not contralateral (nonepileptogenic), mesial temporal structures. These convergent and robust data indicate that thalamic abnormality in TLE is restricted to the area of the thalamus that is preferentially connected to the epileptogenic temporal lobe. The degree of thalamic pathology is related to the extent of mesial temporal lobe damage in TLE. © 2014 The Authors. Epilepsia published by Wiley Periodicals, Inc. on behalf of International League Against Epilepsy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalvati, Farzad, E-mail: farzad.khalvati@uwaterloo.ca; Tizhoosh, Hamid R.; Salmanpour, Aryan
2013-12-15
Purpose: Accurate segmentation and volume estimation of the prostate gland in magnetic resonance (MR) and computed tomography (CT) images are necessary steps in diagnosis, treatment, and monitoring of prostate cancer. This paper presents an algorithm for the prostate gland volume estimation based on the semiautomated segmentation of individual slices in T2-weighted MR and CT image sequences. Methods: The proposed Inter-Slice Bidirectional Registration-based Segmentation (iBRS) algorithm relies on interslice image registration of volume data to segment the prostate gland without the use of an anatomical atlas. It requires the user to mark only three slices in a given volume dataset, i.e., the first, middle, and last slices. Next, the proposed algorithm uses a registration algorithm to autosegment the remaining slices. We conducted comprehensive experiments to measure the performance of the proposed algorithm using three registration methods (i.e., rigid, affine, and nonrigid techniques). Results: The results with the proposed technique were compared with manual marking using prostate MR and CT images from 117 patients. Manual marking was performed by an expert user for all 117 patients. The median accuracies for individual slices measured using the Dice similarity coefficient (DSC) were 92% and 91% for MR and CT images, respectively. The iBRS algorithm was also evaluated regarding user variability, which confirmed that the algorithm was robust to interuser variability when marking the prostate gland. Conclusions: The proposed algorithm exploits the interslice data redundancy of the images in a volume dataset of MR and CT images and eliminates the need for an atlas, minimizing the computational cost while producing highly accurate results which are robust to interuser variability.
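The propagation idea behind iBRS, registering each unsegmented slice to an already-segmented neighbour and carrying the contour along, can be sketched with a translation-only registration as a stand-in for the rigid, affine, and nonrigid methods the paper actually evaluates. The function below assumes scikit-image and SciPy are available and that the slices are supplied as 2D arrays; it is illustrative, not the published algorithm, and only shows a single forward pass from one marked slice.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def propagate_contour(slices, marked_index, marked_mask, last_index):
    """Propagate a marked prostate mask to subsequent slices by registering
    each slice to its already-segmented neighbour (translation-only sketch)."""
    masks = {marked_index: marked_mask.astype(float)}
    for k in range(marked_index + 1, last_index + 1):
        # shift that aligns slice k-1 with slice k
        offset, _, _ = phase_cross_correlation(slices[k], slices[k - 1])
        masks[k] = nd_shift(masks[k - 1], offset, order=0)  # move mask with anatomy
    return {k: m > 0.5 for k, m in masks.items()}
```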
Segmental Interactions between Polymers and Small Molecules in Batteries and Biofuel Purification
NASA Astrophysics Data System (ADS)
Balsara, Nitash
2015-03-01
Polymers such as poly(ethylene oxide) (PEO) and poly(dimethyl siloxane) (PDMS) have the potential to play an important role in the emerging clean energy landscape. Mixtures of PEO and lithium salts are the most widely studied non-flammable electrolyte for rechargeable lithium batteries. PDMS membranes are ideally suited for purifying bioethanol and biobutanol from fermentation broths. The ability of PEO and PDMS to function in these applications depends on segmental interactions between the polymeric host and small molecule guests. One experimental approach for studying these interactions is X-ray absorption spectroscopy (XAS). Models for interpreting XAS spectra of amorphous mixtures and charged species such as salts must quantify the effect of segmental interactions on the electronic structure of the atoms of interest (e.g. sulfur). This combination of experiment and theory is used to determine the species formed during charging and discharging of lithium-sulfur batteries; the theoretical specific energy of lithium-sulfur batteries is a factor of four larger than that of current lithium-ion batteries. Selective transport of alcohols in PDMS-containing membranes is controlled by the size, shape, and connectivity of sub-nanometer cavities or free volume that form and disappear spontaneously as the chain segments undergo Brownian motion. We demonstrate that self-assembly of PDMS-containing block copolymers can be used to control segmental relaxation, which, in turn, affects free volume. Positron annihilation was used to determine the size distribution of free volume cavities in the PDMS-containing block copolymers. The effect of this artificial free volume on selective permeation of alcohols formed by fermentation of sugars derived from lignocellulosic biomass is studied. Molecular dynamics simulations are needed to understand the relationship between self-assembly, free volume, and transport in block copolymers.
Miri, Mohammad Saleh; Abràmoff, Michael D; Kwon, Young H; Sonka, Milan; Garvin, Mona K
2017-07-01
Bruch's membrane opening-minimum rim width (BMO-MRW) is a recently proposed structural parameter which estimates the remaining nerve fiber bundles in the retina and is superior to other conventional structural parameters for diagnosing glaucoma. Measuring this structural parameter requires identification of BMO locations within spectral domain-optical coherence tomography (SD-OCT) volumes. While most automated approaches for segmentation of the BMO either segment the 2D projection of BMO points or identify BMO points in individual B-scans, in this work, we propose a machine-learning graph-based approach for true 3D segmentation of BMO from glaucomatous SD-OCT volumes. The problem is formulated as an optimization problem for finding a 3D path within the SD-OCT volume. In particular, the SD-OCT volumes are transferred to the radial domain where the closed loop BMO points in the original volume form a path within the radial volume. The estimated location of BMO points in 3D are identified by finding the projected location of BMO points using a graph-theoretic approach and mapping the projected locations onto the Bruch's membrane (BM) surface. Dynamic programming is employed in order to find the 3D BMO locations as the minimum-cost path within the volume. In order to compute the cost function needed for finding the minimum-cost path, a random forest classifier is utilized to learn a BMO model, obtained by extracting intensity features from the volumes in the training set, and computing the required 3D cost function. The proposed method is tested on 44 glaucoma patients and evaluated using manual delineations. Results show that the proposed method successfully identifies the 3D BMO locations and has significantly smaller errors compared to the existing 3D BMO identification approaches. Published by Elsevier B.V.
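The dynamic-programming step, finding a minimum-cost path across angular positions in the radial domain, can be sketched as follows. The cost map, the open-path simplification (the paper finds a closed loop), and the smoothness constraint are assumptions made for the example; this is not the authors' implementation.

```python
import numpy as np

def min_cost_radial_path(cost, max_jump=2):
    """Dynamic programming for a minimum-cost path across a radial cost map.

    cost[a, r] is the classifier-derived cost of placing the BMO point at
    radial depth r for angular position a; consecutive angles may move the
    depth by at most `max_jump` samples (simplified, open-path sketch).
    """
    n_ang, n_r = cost.shape
    acc = cost.copy()                               # accumulated cost
    back = np.zeros((n_ang, n_r), dtype=int)        # backpointers
    for a in range(1, n_ang):
        for r in range(n_r):
            lo, hi = max(0, r - max_jump), min(n_r, r + max_jump + 1)
            prev = acc[a - 1, lo:hi]
            j = int(np.argmin(prev))
            back[a, r] = lo + j
            acc[a, r] = cost[a, r] + prev[j]
    path = np.empty(n_ang, dtype=int)
    path[-1] = int(np.argmin(acc[-1]))
    for a in range(n_ang - 1, 0, -1):               # backtrack the optimal depths
        path[a - 1] = back[a, path[a]]
    return path
```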
A new fractional order derivative based active contour model for colon wall segmentation
NASA Astrophysics Data System (ADS)
Chen, Bo; Li, Lihong C.; Wang, Huafeng; Wei, Xinzhou; Huang, Shan; Chen, Wensheng; Liang, Zhengrong
2018-02-01
Segmentation of colon wall plays an important role in advancing computed tomographic colonography (CTC) toward a screening modality. Due to the low contrast of CT attenuation around colon wall, accurate segmentation of the boundary of both inner and outer wall is very challenging. In this paper, based on the geodesic active contour model, we develop a new model for colon wall segmentation. First, tagged materials in CTC images were automatically removed via a partial volume (PV) based electronic colon cleansing (ECC) strategy. We then present a new fractional order derivative based active contour model to segment the volumetric colon wall from the cleansed CTC images. In this model, the region-based Chan-Vese model is incorporated as an energy term into the whole model so that not only edge/gradient information but also region/volume information is taken into account in the segmentation process. Furthermore, a fractional-order derivative energy term is also developed in the new model to preserve the low frequency information and improve the noise immunity of the new segmentation model. The proposed colon wall segmentation approach was validated on 16 patient CTC scans. Experimental results indicate that the present scheme is very promising towards automatically segmenting colon wall, thus facilitating computer aided detection of initial colonic polyp candidates via CTC.
NASA Technical Reports Server (NTRS)
Czabaj, M. W.; Riccio, M. L.; Whitacre, W. W.
2014-01-01
A combined experimental and computational study aimed at high-resolution 3D imaging, visualization, and numerical reconstruction of fiber-reinforced polymer microstructures at the fiber length scale is presented. To this end, a sample of graphite/epoxy composite was imaged at sub-micron resolution using a 3D X-ray computed tomography microscope. Next, a novel segmentation algorithm was developed, based on concepts adopted from computer vision and multi-target tracking, to detect and estimate, with high accuracy, the position of individual fibers in a volume of the imaged composite. In the current implementation, the segmentation algorithm was based on Global Nearest Neighbor data-association architecture, a Kalman filter estimator, and several novel algorithms for virtual-fiber stitching, smoothing, and overlap removal. The segmentation algorithm was used on a sub-volume of the imaged composite, detecting 508 individual fibers. The segmentation data were qualitatively compared to the tomographic data, demonstrating high accuracy of the numerical reconstruction. Moreover, the data were used to quantify a) the relative distribution of individual-fiber cross sections within the imaged sub-volume, and b) the local fiber misorientation relative to the global fiber axis. Finally, the segmentation data were converted using commercially available finite element (FE) software to generate a detailed FE mesh of the composite volume. The methodology described herein demonstrates the feasibility of realizing an FE-based, virtual-testing framework for graphite/fiber composites at the constituent level.
Fabrication, Testing, Coating and Alignment of Fast Segmented Optics
2006-05-25
mirror segment, a 100 mm thick Zerodur mirror blank was purchased from Schott. Figure 2 shows the segment and its support for polishing and testing in...Polishing large off-axis segments of fast primary mirrors 2. Testing large segments in an off-axis geometry 3. Alignment of multiple segments of a large... mirror 4. Coatings that reflect high-intensity light without distorting the substrate These technologies are critical because of several unique
Wu, Xiaofan; Maehara, Akiko; He, Yong; Xu, Kai; Oviedo, Carlos; Witzenbichler, Bernhard; Lansky, Alexandra J; Dressler, Ovidiu; Parise, Helen; Stone, Gregg W; Mintz, Gary S
2013-08-01
Vessel expansion and axial plaque redistribution or distal plaque embolization contribute to the increase in lumen dimensions after stent implantation. Preintervention and postintervention grayscale volumetric intravascular ultrasound was used to study 43 de novo native coronary lesions treated with TAXUS or Express bare metal stents in the HORIZONS-AMI Trial. There was a decrease in lesion segment plaque + media (P + M) volume (-19.5 ± 22.2 mm³) that was associated with a decrease in overall analysis segment (lesion plus 5 mm long proximal and distal reference segments) P + M volume (-17.5 ± 21.0 mm³) that was greater than the shift of plaque from the lesion to the proximal and distal reference segments (1.9 ± 4.5 mm³, P < 0.0001). Overall analysis segment P + M volume decreased more in the angiographic thrombus (+) versus the thrombus (-) group (-27.4 ± 23.4 vs. -8.9 ± 14.3 mm³, P = 0.003), whereas plaque shift to the reference segments showed no significant difference between the two groups (1.5 ± 5.2 vs. 2.3 ± 3.9 mm³, P = 0.590). Compared with the angiographic thrombus (-) group, patients in the thrombus (+) group more often developed no reflow (25% vs. 0%, P = 0.012) and had a higher preintervention CK-MB (P = 0.011), postintervention CK-MB (P < 0.001), and periprocedural (post-PCI minus pre-PCI) elevation of CK-MB (P = 0.001). In acute myocardial infarction lesions, there was a marked poststenting reduction in overall plaque volume that was significantly greater in patients with angiographic thrombus than without thrombus and may have explained a greater periprocedural rise in CK-MB. © 2013 Wiley Periodicals, Inc.
The error analysis of Lobular and segmental division of right liver by volume measurement.
Zhang, Jianfei; Lin, Weigang; Chi, Yanyan; Zheng, Nan; Xu, Qiang; Zhang, Guowei; Yu, Shengbo; Li, Chan; Wang, Bin; Sui, Hongjin
2017-07-01
The aim of this study is to explore the inconsistencies between right liver volume as measured by imaging and the actual anatomical appearance of the right lobe. Five healthy donated livers were studied. The liver slices were obtained with hepatic segments multicolor-infused through the portal vein. In the slices, the lobes were divided by two methods: radiological landmarks and real anatomical boundaries. The areas of the right anterior lobe (RAL) and right posterior lobe (RPL) on each slice were measured using Photoshop CS5 and AutoCAD, and the volumes of the two lobes were calculated. There was no statistically significant difference between the volumes of the RAL or RPL as measured by the radiological landmarks (RL) and anatomical boundaries (AB) methods. However, the curves of the square error value of the RAL and RPL measured using CT showed that the three lowest points were at the cranial, intermediate, and caudal levels. The U- or V-shaped curves of the square error rate of the RAL and RPL revealed that the lowest value is at the intermediate level and the highest at the cranial and caudal levels. On CT images, less accurate landmarks were used to divide the RAL and RPL at the cranial and caudal layers. The measured volumes of hepatic segments VIII and VI would be less than their true values, and the measured volumes of hepatic segments VII and V would be greater than their true values, according to radiological landmarks. Clin. Anat. 30:585-590, 2017. © 2017 Wiley Periodicals, Inc.
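The slice-based volume computation used here is a Cavalieri-type estimate: the segmented lobe area on each slice is summed and multiplied by the slice thickness. A minimal sketch, assuming binary per-slice lobe masks and a known pixel area (names are illustrative):

```python
import numpy as np

def lobe_volume_from_slices(lobe_masks, pixel_area_mm2, slice_thickness_mm):
    """Estimate a lobe volume by summing its segmented area on each slice
    and multiplying by the slice thickness (Cavalieri-style sketch)."""
    areas = [np.count_nonzero(m) * pixel_area_mm2 for m in lobe_masks]
    return float(np.sum(areas)) * slice_thickness_mm   # volume in mm^3
```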
CT liver volumetry using geodesic active contour segmentation with a level-set algorithm
NASA Astrophysics Data System (ADS)
Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard
2010-03-01
Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of a similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step schema. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour-evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetrics based on an automated scheme agreed excellently with "gold-standard" manual volumetrics (intra-class correlation coefficient was 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
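A similar refinement step can be sketched with scikit-image's morphological geodesic active contour, used here as a stand-in for the level-set formulation in the paper; argument names vary slightly between scikit-image versions, and the edge-function parameters, smoothing, and iteration count are assumptions.

```python
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def refine_liver_surface(ct_volume, initial_mask, iterations=150):
    """Refine a rough liver estimate with a geodesic active contour (sketch).

    The speed image is an edge-stopping function derived from the smoothed
    gradient; the initial mask stands in for the fast-marching result.
    """
    speed = inverse_gaussian_gradient(ct_volume.astype(float), alpha=100, sigma=2)
    refined = morphological_geodesic_active_contour(
        speed, iterations, init_level_set=initial_mask.astype(np.int8),
        smoothing=2, balloon=0)
    return refined.astype(bool)

# liver volume in ml, given the voxel volume in mm^3:
# volume_ml = refined.sum() * voxel_volume_mm3 / 1000.0
```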
NASA Astrophysics Data System (ADS)
Zhou, Xiangrong; Kano, Takuya; Cai, Yunliang; Li, Shuo; Zhou, Xinxin; Hara, Takeshi; Yokoyama, Ryujiro; Fujita, Hiroshi
2016-03-01
This paper describes a new automatic segmentation method for quantifying volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps: (1) breast region localization, and (2) breast region decomposition to accomplish a robust mammary gland segmentation task on CT images. The first step detects two minimum bounding boxes of the left and right breast regions, respectively, based on a machine-learning approach that adapts to the large variation in breast appearance across different ages. The second step divides the whole breast region on each side into mammary gland, fat tissue, and other regions by using a spectral clustering technique that focuses on intra-region similarities of each patient and aims to overcome the image variance caused by different scan parameters. The whole approach is designed as a simple structure with a minimal number of parameters to gain superior robustness and computational efficiency in a real clinical setting. We applied this approach to a dataset of 300 CT scans, sampled in equal numbers from women aged 30 to 50 years. Compared with human annotations, the proposed approach successfully measures the volume and quantifies the distribution of CT numbers of the mammary gland regions. The experimental results demonstrated that the proposed approach achieves results consistent with manual annotations. Through our proposed framework, an efficient and effective low cost clinical screening scheme may be easily implemented to predict breast cancer risk, especially for already acquired scans.
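Step (2) can be illustrated with an off-the-shelf spectral clustering of voxel features inside the localized breast box. The feature choice (CT number plus position), the subsampling, and the three-cluster assumption are illustrative; this is not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def decompose_breast_region(hu_values, coords, n_clusters=3, max_voxels=5000,
                            random_state=0):
    """Split a localized breast region into gland / fat / other by spectral
    clustering of voxel features (CT number plus position); illustrative sketch."""
    rng = np.random.default_rng(random_state)
    idx = rng.choice(len(hu_values), size=min(max_voxels, len(hu_values)),
                     replace=False)                    # subsample for tractability
    feats = np.column_stack([hu_values[idx][:, None], coords[idx]])
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9)
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="nearest_neighbors",
                                random_state=random_state).fit_predict(feats)
    return idx, labels   # sampled voxel indices and their cluster labels
```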
Tracking fuzzy borders using geodesic curves with application to liver segmentation on planning CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Yading, E-mail: yading.yuan@mssm.edu; Chao, Ming; Sheu, Ren-Dih
Purpose: This work aims to develop a robust and efficient method to track the fuzzy borders between liver and the abutted organs where automatic liver segmentation usually suffers, and to investigate its applications in automatic liver segmentation on noncontrast-enhanced planning computed tomography (CT) images. Methods: In order to track the fuzzy liver–chestwall and liver–heart borders where oversegmentation is often found, a starting point and an ending point were first identified on the coronal view images; the fuzzy border was then determined as a geodesic curve constructed by minimizing the gradient-weighted path length between these two points near the fuzzy border. The minimization of path length was numerically solved by the fast-marching method. The resultant fuzzy borders were incorporated into the authors’ automatic segmentation scheme, in which the liver was initially estimated by a patient-specific adaptive thresholding and then refined by a geodesic active contour model. By using planning CT images of 15 liver patients treated with stereotactic body radiation therapy, the liver contours extracted by the proposed computerized scheme were compared with those manually delineated by a radiation oncologist. Results: The proposed automatic liver segmentation method yielded an average Dice similarity coefficient of 0.930 ± 0.015, whereas it was 0.912 ± 0.020 if the fuzzy border tracking was not used. The application of fuzzy border tracking was found to significantly improve the segmentation performance. The mean liver volume obtained by the proposed method was 1727 cm³, whereas it was 1719 cm³ for manual-outlined volumes. The computer-generated liver volumes achieved excellent agreement with manual-outlined volumes with correlation coefficient of 0.98. Conclusions: The proposed method was shown to provide accurate segmentation for liver in the planning CT images where contrast agent is not applied. The authors’ results also clearly demonstrated that the application of tracking the fuzzy borders could significantly reduce contour leakage during active contour evolution.
Harris, Kristen M.; Spacek, Josef; Bell, Maria Elizabeth; Parker, Patrick H.; Lindsey, Laurence F.; Baden, Alexander D.; Vogelstein, Joshua T.; Burns, Randal
2015-01-01
Resurgent interest in synaptic circuitry and plasticity has emphasized the importance of 3D reconstruction from serial section electron microscopy (3DEM). Three volumes of hippocampal CA1 neuropil from adult rat were imaged at X-Y resolution of ~2 nm on serial sections of ~50–60 nm thickness. These are the first densely reconstructed hippocampal volumes. All axons, dendrites, glia, and synapses were reconstructed in a cube (~10 μm³) surrounding a large dendritic spine, a cylinder (~43 μm³) surrounding an oblique dendritic segment (3.4 μm long), and a parallelepiped (~178 μm³) surrounding an apical dendritic segment (4.9 μm long). The data provide standards for identifying ultrastructural objects in 3DEM, realistic reconstructions for modeling biophysical properties of synaptic transmission, and a test bed for enhancing reconstruction tools. Representative synapses are quantified from varying section planes, and microtubules, polyribosomes, smooth endoplasmic reticulum, and endosomes are identified and reconstructed in a subset of dendrites. The original images, traces, and Reconstruct software and files are freely available and visualized at the Open Connectome Project (Data Citation 1). PMID:26347348
Interactive Medical Volume Visualization for Surgical Operations
2001-10-25
In the preprocessing and processing stages, the relevant medical brain tissues, which are skull, white matter, gray matter, and pathology (tumor), are segmented ... from 12- or 16-bit data depths. NMR segmentation plays an important role in our work because classifying brain tissues from NMR slices requires ... performing segmentation of brain structures. Our segmentation process uses Self-Organizing Feature Maps (SOFM) [12]. In SOM, in contrast to feedback ...
Vidavsky, Netta; Akiva, Anat; Kaplan-Ashiri, Ifat; Rechav, Katya; Addadi, Lia; Weiner, Steve; Schertel, Andreas
2016-12-01
Many important biological questions can be addressed by studying, in 3D, large volumes of intact, cryo-fixed hydrated tissues (≥10,000 μm³) at high resolution (5–20 nm). This can be achieved using serial FIB milling and block-face surface imaging under cryo conditions. Here we demonstrate the unique potential of the cryo-FIB-SEM approach using two extensively studied model systems: sea urchin embryos and the tail fin of zebrafish larvae. We focus in particular on the environment of mineral deposition sites. The cellular organelles, including mitochondria, Golgi, ER, nuclei, and nuclear pores, are made visible by the image contrast created by differences in the surface potential of different biochemical components. Auto segmentation and/or volume rendering of the image stacks and 3D reconstruction of the skeleton and the cellular environment provide a detailed view of the relative distribution in space of the tissue/cellular components, and thus of their interactions. Simultaneous acquisition of secondary and back-scattered electron images adds further information. For example, a serial view of the zebrafish tail reveals the presence of electron-dense mineral particles inside mitochondrial networks extending more than 20 μm in depth into the block. Large-volume imaging using cryo-FIB-SEM, as demonstrated here, can contribute significantly to the understanding of the structures and functions of diverse biological tissues. Copyright © 2016 Elsevier Inc. All rights reserved.
3D segmentation of annulus fibrosus and nucleus pulposus from T2-weighted magnetic resonance images
NASA Astrophysics Data System (ADS)
Castro-Mateos, Isaac; Pozo, Jose M.; Eltes, Peter E.; Del Rio, Luis; Lazary, Aron; Frangi, Alejandro F.
2014-12-01
Computational medicine aims at employing personalised computational models in diagnosis and treatment planning. The use of such models to help physicians find the best treatment for low back pain (LBP) is becoming popular. One of the challenges in creating such models is to derive patient-specific anatomical and tissue models of the lumbar intervertebral discs (IVDs) as a prior step. This article presents a segmentation scheme that obtains accurate results irrespective of the degree of IVD degeneration, including pathological discs with protrusion or herniation. The segmentation algorithm, employing a novel feature selector, iteratively deforms an initial shape, which is first projected into a statistical shape model space and then into a B-spline space to improve accuracy. The method was tested on an MR dataset of 59 patients suffering from LBP. The images follow a standard T2-weighted protocol in coronal and sagittal acquisitions. These two image volumes were fused in order to overcome large inter-slice spacing. The agreement between expert-delineated structures, used here as the gold standard, and our automatic segmentation was evaluated using the Dice similarity index and surface-to-surface distances, yielding a mean error of 0.68 mm for the annulus segmentation and 1.88 mm for the nucleus, which are the best results relative to the image resolution in the current literature.
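The projection into a statistical shape model space mentioned above is commonly realized with a PCA model of aligned landmark shapes; the sketch below shows that generic step under assumed inputs (the function names and the three-standard-deviation clipping are illustrative conventions, not the authors' feature-selector-driven algorithm).

```python
import numpy as np

def fit_ssm(training_shapes):
    """Build a simple PCA statistical shape model.

    training_shapes: (n_shapes, n_points * 3) array of aligned landmark shapes;
    alignment and point correspondence are assumed to have been established.
    """
    mean_shape = training_shapes.mean(axis=0)
    X = training_shapes - mean_shape
    U, S, Vt = np.linalg.svd(X, full_matrices=False)   # principal modes of variation
    eigvals = (S ** 2) / (len(training_shapes) - 1)
    return mean_shape, Vt, eigvals

def project_to_ssm(shape, mean_shape, modes, eigvals, n_modes=10, limit=3.0):
    """Project a deformed shape into the SSM space and reconstruct a plausible shape.

    Mode coefficients are clipped to +/- `limit` standard deviations so the
    result stays within the learned shape variability.
    """
    b = modes[:n_modes] @ (shape - mean_shape)
    std = np.sqrt(eigvals[:n_modes])
    b = np.clip(b, -limit * std, limit * std)
    return mean_shape + modes[:n_modes].T @ b
```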
Automatic segmentation of the puborectalis muscle in 3D transperineal ultrasound.
van den Noort, Frieda; Grob, Anique T M; Slump, Cornelis H; van der Vaart, Carl H; van Stralen, Marijn
2017-10-11
The introduction of 3D analysis of the puborectalis muscle, for diagnostic purposes, into daily practice is hindered by the need for appropriate training of the observers. Automatic 3D segmentation of the puborectalis muscle in 3D transperineal ultrasound may aid its adoption in clinical practice. A manual 3D segmentation protocol was developed to segment the puborectalis muscle. Data from 20 women in their first trimester of pregnancy were used to validate the reproducibility of this protocol. For automatic segmentation, active appearance models of the puborectalis muscle were developed. These models were trained using manual segmentation data of 50 women. The performance of both manual and automatic segmentation was analyzed by measuring the overlap and distance between the segmentations. The intraclass correlation coefficients (ICCs) and their 95% confidence intervals were also determined for mean echogenicity and volume of the puborectalis muscle. The ICC values for mean echogenicity (0.968-0.991) and volume (0.626-0.910) are good to very good for both automatic and manual segmentation. The results of overlap and distance for manual segmentation are as expected, showing a mismatch of only a few pixels (2-3) on average and a reasonable overlap. Based on overlap and distance, 5 mismatches in automatic segmentation were detected, resulting in an automatic segmentation success rate of 90%. In conclusion, this study presents reliable manual and automatic 3D segmentation of the puborectalis muscle. This will facilitate future investigation of the puborectalis muscle. It also allows for reliable measurements of clinically potentially valuable parameters such as mean echogenicity. This article is protected by copyright. All rights reserved.
Pleural effusion segmentation in thin-slice CT
NASA Astrophysics Data System (ADS)
Donohue, Rory; Shearer, Andrew; Bruzzi, John; Khosa, Huma
2009-02-01
A pleural effusion is excess fluid that collects in the pleural cavity, the fluid-filled space that surrounds the lungs. Surplus amounts of such fluid can impair breathing by limiting the expansion of the lungs during inhalation. Measuring the fluid volume is indicative of the effectiveness of any treatment but, owing to the similarity to surrounding regions, fragments of collapsed lung, and topological changes, accurate quantification of the effusion volume is a difficult imaging problem. A novel code is presented which performs conditional region growing to accurately segment the effusion shape across a dataset. We demonstrate the applicability of our technique in the segmentation of pleural effusion and pulmonary masses.
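Plain region growing of the kind extended by this paper can be sketched as a breadth-first flood fill constrained by an intensity window; the snippet below is such a sketch with assumed inputs and a simple voxel cap, not the authors' conditional criteria.

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, lo, hi, max_voxels=None):
    """Grow a region from `seed` while voxel intensities stay inside [lo, hi].

    volume: 3D numpy array (e.g., a CT volume in HU); seed: (z, y, x) tuple.
    The intensity window and optional voxel cap are the growth "conditions";
    the paper's conditional criteria are more elaborate.
    """
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    grown = 1
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and lo <= volume[nz, ny, nx] <= hi):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
                grown += 1
                if max_voxels is not None and grown >= max_voxels:
                    return mask
    return mask
```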
Development of automatic visceral fat volume calculation software for CT volume data.
Nemoto, Mitsutaka; Yeernuer, Tusufuhan; Masutani, Yoshitaka; Nomura, Yukihiro; Hanaoka, Shouhei; Miki, Soichiro; Yoshikawa, Takeharu; Hayashi, Naoto; Ohtomo, Kuni
2014-01-01
To develop automatic visceral fat volume calculation software for computed tomography (CT) volume data and to evaluate its feasibility. A total of 24 sets of whole-body CT volume data and anthropometric measurements were obtained, with three sets for each of four BMI categories (under 20, 20 to 25, 25 to 30, and over 30) in both sexes. True visceral fat volumes were defined on the basis of manual segmentation of the whole-body CT volume data by an experienced radiologist. Software to automatically calculate visceral fat volumes was developed using a region segmentation technique based on morphological analysis with CT value threshold. Automatically calculated visceral fat volumes were evaluated in terms of the correlation coefficient with the true volumes and the error relative to the true volume. Automatic visceral fat volume calculation results of all 24 data sets were obtained successfully and the average calculation time was 252.7 seconds/case. The correlation coefficients between the true visceral fat volume and the automatically calculated visceral fat volume were over 0.999. The newly developed software is feasible for calculating visceral fat volumes in a reasonable time and was proved to have high accuracy.
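A minimal sketch of threshold-plus-morphology fat segmentation on CT is shown below; the Hounsfield-unit window of roughly -190 to -30 HU and the component-size cut-off are commonly used assumptions, not the thresholds of the software described above.

```python
import numpy as np
from scipy import ndimage

def fat_mask(ct_volume, hu_lo=-190, hu_hi=-30, min_size=50):
    """Rough adipose-tissue mask from a CT volume by HU thresholding plus morphology.

    The HU window (about -190 to -30 HU) and the small-component cut-off are
    assumed, commonly used values, not the thresholds from the paper.
    """
    mask = (ct_volume >= hu_lo) & (ct_volume <= hu_hi)
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3, 3)))
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return np.isin(labels, np.flatnonzero(sizes >= min_size) + 1)

def fat_volume_ml(mask, voxel_spacing_mm):
    """Convert a boolean mask to a volume in millilitres given voxel spacing in mm."""
    voxel_ml = np.prod(voxel_spacing_mm) / 1000.0
    return mask.sum() * voxel_ml
```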
Multicenter reliability of semiautomatic retinal layer segmentation using OCT
Oberwahrenbrock, Timm; Traber, Ghislaine L.; Lukas, Sebastian; Gabilondo, Iñigo; Nolan, Rachel; Songster, Christopher; Balk, Lisanne; Petzold, Axel; Paul, Friedemann; Villoslada, Pablo; Brandt, Alexander U.; Green, Ari J.
2018-01-01
Objective To evaluate the inter-rater reliability of semiautomated segmentation of spectral-domain optical coherence tomography (OCT) macular volume scans. Methods Macular OCT volume scans of the left eyes of 17 subjects (8 patients with MS and 9 healthy controls) were automatically segmented by Heidelberg Eye Explorer (v1.9.3.0) beta software (Spectralis Viewing Module v6.0.0.7), followed by manual correction by 5 experienced operators from 5 different academic centers. The mean thicknesses within a 6-mm area around the fovea were computed for the retinal nerve fiber layer, ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer, outer plexiform layer (OPL), and outer nuclear layer (ONL). Intraclass correlation coefficients (ICCs) were calculated for mean layer thickness values. The spatial distribution of ICC values for the segmented volume scans was investigated using heat maps. Results Agreement between raters was good (ICC > 0.84) for all retinal layers; in particular, inner retinal layers showed excellent agreement across raters (ICC > 0.96). The spatial distribution of ICC showed the highest values in the perimacular area, whereas the ICCs were poorer for the foveola and the more peripheral macular area. The automated segmentation of the OPL and ONL required the most correction and showed the least agreement, whereas differences were less prominent for the remaining layers. Conclusions Automated segmentation with manual correction of macular OCT scans is highly reliable when performed by experienced raters and can thus be applied in multicenter settings. Reliability can be improved by restricting analysis to the perimacular area and by compound segmentation of the GCL and IPL. PMID:29552598
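The intraclass correlation coefficients reported above can be reproduced from the raw subject-by-rater thickness tables; the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single measure) following the Shrout and Fleiss mean-squares formulation. The choice of the 2,1 form and the function name are assumptions for illustration; the paper does not state which ICC variant it used.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    ratings: (n_subjects, k_raters) array of layer-thickness values, one column
    per rater, following the Shrout & Fleiss mean-squares formulation.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)    # between-subjects mean square
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)    # between-raters mean square
    residual = ratings - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(residual ** 2) / ((n - 1) * (k - 1))       # residual mean square

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```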
Oghli, Mostafa Ghelich; Dehlaghi, Vahab; Zadeh, Ali Mohammad; Fallahi, Alireza; Pooyan, Mohammad
2014-07-01
Assessment of right-ventricular function plays an essential role in the diagnosis of arrhythmogenic right ventricular dysplasia (ARVD). Among clinical tests, cardiac magnetic resonance imaging (MRI) is now becoming the most valid imaging technique for diagnosing ARVD. Fatty infiltration of the right ventricular free wall can be visible on cardiac MRI. Deriving right-ventricular functional parameters from cardiac MRI involves segmenting the right ventricle in each slice of the end-diastole and end-systole phases of the cardiac cycle and calculating the end-diastolic and end-systolic volumes, along with other functional parameters. The main difficulty in this task is the segmentation step. We used a robust deformable-model-based method that uses shape information for segmentation of the right ventricle in short-axis MRI images. After segmenting the right ventricle from base to apex in the end-diastole and end-systole phases of the cardiac cycle, the right-ventricular volume in these phases was calculated and the ejection fraction was then derived. We performed a quantitative evaluation of clinical cardiac parameters derived from the automatic segmentation by comparison against a manual delineation of the ventricles. The manually and automatically determined quantitative clinical parameters were statistically compared by means of linear regression, which fits a line to the data such that the root-mean-square error (RMSE) of the residuals is minimized. The results show a low RMSE for right-ventricular ejection fraction and volume (≤ 0.06 for RV EF, and ≤ 10 mL for RV volume). Evaluation of the segmentation results was also performed by means of four statistical measures: sensitivity, specificity, similarity index, and Jaccard index. The average value of the similarity index is 86.87%. The mean Jaccard index is 83.85%, which indicates good segmentation accuracy. The average sensitivity is 93.9% and the mean specificity is 89.45%. These results show the reliability of the proposed method in cases where manual segmentation is impractical. The large shape variability of the right ventricle led us to use a shape-prior-based method, and this work could be extended with four-dimensional processing for determining the first ventricular slices.
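The four evaluation measures quoted above are straightforward to compute from a pair of binary masks; the snippet below is a small, self-contained sketch with assumed inputs (two boolean segmentation masks of equal shape).

```python
import numpy as np

def overlap_metrics(auto_mask, manual_mask):
    """Similarity index (Dice), Jaccard index, sensitivity, and specificity
    for an automatic versus a manual binary segmentation mask."""
    a = auto_mask.astype(bool)
    m = manual_mask.astype(bool)
    tp = np.logical_and(a, m).sum()
    fp = np.logical_and(a, ~m).sum()
    fn = np.logical_and(~a, m).sum()
    tn = np.logical_and(~a, ~m).sum()
    return {
        "similarity_index": 2.0 * tp / (2 * tp + fp + fn),   # Dice coefficient
        "jaccard": tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```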
Rundo, Leonardo; Stefano, Alessandro; Militello, Carmelo; Russo, Giorgio; Sabini, Maria Gabriella; D'Arrigo, Corrado; Marletta, Francesco; Ippolito, Massimo; Mauri, Giancarlo; Vitabile, Salvatore; Gilardi, Maria Carla
2017-06-01
Nowadays, clinical practice in Gamma Knife treatments is generally based on MRI anatomical information alone. However, the joint use of MRI and PET images can be useful for considering both anatomical and metabolic information about the lesion to be treated. In this paper we present a co-segmentation method to integrate the segmented Biological Target Volume (BTV), using [11C]-Methionine-PET (MET-PET) images, and the segmented Gross Target Volume (GTV), on the respective co-registered MR images. The resulting volume gives enhanced brain tumor information to be used in stereotactic neuro-radiosurgery treatment planning. The GTV often does not match entirely with the BTV, which provides metabolic information about brain lesions. For this reason, PET imaging is valuable and could be used to provide complementary information useful for treatment planning. In this way, the BTV can be used to modify the GTV, enhancing Clinical Target Volume (CTV) delineation. A novel fully automatic multimodal PET/MRI segmentation method for Leksell Gamma Knife® treatments is proposed. This approach improves and combines two computer-assisted and operator-independent single-modality methods, previously developed and validated, to segment the BTV and GTV from PET and MR images, respectively. In addition, the GTV is utilized to combine the superior contrast of PET images with the higher spatial resolution of MRI, obtaining a new BTV, called BTV MRI. A total of 19 brain metastatic tumors treated with stereotactic neuro-radiosurgery were retrospectively analyzed. A framework for the evaluation of multimodal PET/MRI segmentation is also presented. Overlap-based and spatial distance-based metrics were considered to quantify the similarity of the PET and MRI segmentation approaches. Statistical analysis was also included to measure the correlation among the different segmentation processes. Since it is not possible to define a gold-standard CTV according to both MRI and PET images without treatment response assessment, the feasibility and the clinical value of BTV integration in Gamma Knife treatment planning were considered; a qualitative evaluation was therefore carried out by three experienced clinicians. The experimental results showed that GTV and BTV segmentations are statistically correlated (Spearman's rank correlation coefficient: 0.898) but have a low degree of similarity (average Dice Similarity Coefficient: 61.87 ± 14.64). Therefore, volume measurements as well as evaluation metric values demonstrated that MRI and PET convey different but complementary imaging information. GTV and BTV could be combined to enhance treatment planning. In more than 50% of cases the CTV was strongly or moderately conditioned by metabolic imaging. In particular, BTV MRI enhanced the CTV more accurately than BTV in 25% of cases. The proposed fully automatic multimodal PET/MRI segmentation method is a valid operator-independent methodology that helps clinicians define a CTV including both metabolic and morphologic information. BTV MRI and GTV should be considered for comprehensive treatment planning. Copyright © 2017 Elsevier B.V. All rights reserved.
Dolz, Jose; Betrouni, Nacim; Quidet, Mathilde; Kharroubi, Dris; Leroy, Henri A; Reyns, Nicolas; Massoptier, Laurent; Vermandel, Maximilien
2016-09-01
Delineation of organs at risk (OARs) is a crucial step in surgical and treatment planning in brain cancer, where precise OAR volume delineation is required. However, this task is still often performed manually, which is time-consuming and prone to observer variability. To tackle these issues, a deep learning approach based on stacked denoising auto-encoders is proposed to segment the brainstem on magnetic resonance images in the brain cancer context. In addition to the classical features used in machine learning to segment brain structures, two new features are suggested. Four experts participated in this study by segmenting the brainstem in 9 patients who underwent radiosurgery. Analysis of variance on shape and volume similarity metrics indicated that there were significant differences (p<0.05) between the groups of manual annotations and automatic segmentations. Experimental evaluation also showed an overlap higher than 90% with respect to the ground truth. These results are comparable to, and often better than, those of state-of-the-art segmentation methods, but with a considerable reduction in segmentation time. Copyright © 2016 Elsevier Ltd. All rights reserved.
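As a hedged illustration of the building block named above, the snippet below sketches a single denoising auto-encoder layer in PyTorch, of the kind that is pre-trained, stacked, and fine-tuned for voxel classification. The layer sizes, noise level, and training loop are placeholders for illustration, not the architecture or features used in the study.

```python
import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    """One denoising auto-encoder layer: corrupt the input, reconstruct the clean version."""

    def __init__(self, n_in=256, n_hidden=64, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        noisy = x + self.noise_std * torch.randn_like(x)   # corrupt the input
        return self.decoder(self.encoder(noisy))

def pretrain(dae, features, epochs=10, lr=1e-3):
    """Unsupervised pre-training: minimise reconstruction error of the clean features."""
    opt = torch.optim.Adam(dae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(dae(features), features)
        loss.backward()
        opt.step()
    return dae
```

Layers trained this way can be stacked (the hidden code of one layer becomes the input of the next) before a supervised classification head is added, which is the general stacked-denoising-auto-encoder recipe the abstract refers to.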
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakravarti, D.; Hendrix, P.E.; Wilkie, W.L.
1987-01-01
Maturing markets and the accompanying increase in competition, sophistication of customers, and differentiation of products and services have forced companies to focus their marketing efforts on segments in which they can prosper. The experience in these companies has revealed that market segmentation, although simple in concept, is not so easily implemented. It is reasonable to anticipate substantial benefits from additional market segmentation within each of the classes traditionally distinguished in the industry - residential, commercial, and industrial. Segmentation is also likely to prove useful for utilities facing quite different marketing environments, e.g., in terms of demand patterns (number of customers, winter- and summer-peaking, etc.), capacity, and degree of regulatory and competitive pressures. Within utilities, those charged with developing and implementing segmentation strategies face some difficult issues. The primary objective of this monograph is to provide some answers to these questions. This monograph is intended to provide utility researchers with a guide to the design and execution of market segmentation research in utility markets. Several composite cases, drawn from actual studies conducted by electric utilities, are used to illustrate the discussion.
Daugherty, Ana M.; Flinn, Robert; Ofen, Noa
2017-01-01
Associative memory develops into adulthood and critically depends on the hippocampus. The hippocampus is a complex structure composed of subfields that are functionally-distinct, and anterior-posterior divisions along the length of the hippocampal horizontal axis that may also differ by cognitive correlates. Although each of these aspects has been considered independently, here we evaluate their relative contributions as correlates of age-related improvement in memory. Volumes of hippocampal subfields (subiculum, CA1-2, CA3-dentate gyrus) and anterior-posterior divisions (hippocampal head, body, tail) were manually segmented from high-resolution proton density-weighted images in a sample of healthy participants (age 8–25 years). Adults had smaller CA3-dentate gyrus volume as compared to children, which accounted for 67% of the indirect effect of age predicting better associative memory via hippocampal volumes. Whereas hippocampal body volume demonstrated non-linear age differences, larger hippocampal body volume was weakly related to better associative memory only when accounting for the mutual correlation with subfields measured within that region. Thus, typical development of associative memory was largely explained by age-related differences in CA3-dentate gyrus. PMID:28342999
Development of Image Segmentation Methods for Intracranial Aneurysms
Qian, Yi; Morgan, Michael
2013-01-01
Although it provides vital means for visualization, diagnosis, and quantification in decision-making processes for the treatment of vascular pathologies, vascular segmentation remains a process marred by numerous challenges. In this study, we validate eight aneurysms via the use of two existing segmentation methods: the Region Growing Threshold and the Chan-Vese model. These methods were evaluated by comparing their results with a manual segmentation. Based upon this validation study, we propose a new Threshold-Based Level Set (TLS) method in order to overcome the existing problems. Across the divergent methods of segmentation, we found that the volumes of the aneurysm models differed by up to 24%. The local arterial anatomical shapes of the aneurysms were likewise found to significantly influence the results of these simulations. In contrast, however, the volume differences calculated via the TLS method remained relatively low, at only around 5%, thereby revealing the inherent limitations in the application of cerebrovascular segmentation. The proposed TLS method holds the potential for use in automatic aneurysm segmentation without the setting of a seed point or intensity threshold. This technique will further enable the segmentation of anatomically complex cerebrovascular shapes, thereby allowing for more accurate and efficient simulations from medical imagery. PMID:23606905
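One of the two baseline methods evaluated above, the Chan-Vese model, has an off-the-shelf morphological variant in scikit-image; the snippet below is a hedged sketch of applying it to a normalized 2D slice with illustrative parameters, not the authors' implementation or their proposed TLS method.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese, checkerboard_level_set

def chan_vese_mask(image, iterations=100, smoothing=2):
    """Segment a 2D angiographic slice with the morphological Chan-Vese model.

    The iteration count, smoothing factor, and checkerboard initialisation are
    illustrative defaults, not tuned values from the study.
    """
    img = image.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-8)      # normalise to [0, 1]
    init = checkerboard_level_set(img.shape, square_size=5)
    return morphological_chan_vese(img, iterations, init_level_set=init,
                                   smoothing=smoothing)
```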
Perugini, E; Rapezzi, C; Piva, T; Leone, O; Bacchi‐Reggiani, L; Riva, L; Salvi, F; Lovato, L; Branzi, A; Fattori, R
2006-01-01
Objective To investigate the prevalence and distribution of gadolinium (Gd) enhancement at cardiac magnetic resonance (CMR) imaging in patients with cardiac amyloidosis (CA) and to look for associations with clinical, morphological, and functional features. Patients and design 21 patients with definitely diagnosed CA (nine with immunoglobulin light chain amyloidosis and 12 transthyretin related) underwent Gd‐CMR. Results Gd enhancement was detected in 16 of 21 (76%) patients. Sixty six of 357 (18%) segments were enhanced, more often at the mid ventricular level. Transmural extension of enhancement within each patient significantly correlated with left ventricular (LV) end systolic volume (r = 0.58). The number of enhanced segments correlated with LV end diastolic volume (r = 0.76), end systolic volume (r = 0.6), and left atrial size (r = 0.56). Segments with > 50% extensive transmural enhancement more often were severely hypokinetic or akinetic (p = 0.001). Patients with > 2 enhanced segments had significantly lower 12 lead QRS voltage and Sokolow‐Lyon index. No relation was apparent with any other clinical, morphological, functional, or histological characteristics. Conclusion Gd enhancement is common but not universally present in CA, probably due to expansion of infiltrated interstitium. The segmental and transmural distribution of the enhancement is highly variable, and mid‐ventricular regions are more often involved. Enhancement appears to be associated with impaired segmental and global contractility and a larger atrium. PMID:15939726
Quantification of regional fat volume in rat MRI
NASA Astrophysics Data System (ADS)
Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren
2003-05-01
Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.
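One simple way to realize the adaptive object extraction mentioned above is a per-slice Otsu threshold, sketched below under assumed inputs; the Fat Volume Tool's actual algorithm is more elaborate (landmark-based masking and abdominal/subcutaneous discrimination are not shown).

```python
import numpy as np
from skimage.filters import threshold_otsu

def adaptive_fat_candidates(mr_volume):
    """Per-slice Otsu thresholding as a simple form of adaptive object extraction.

    mr_volume: 3D array (slices, rows, cols). Thresholding each slice separately
    compensates for slice-to-slice brightness and contrast variation; bright fat
    signal ends up above the adaptive threshold.
    """
    mask = np.zeros(mr_volume.shape, dtype=bool)
    for i, sl in enumerate(mr_volume):
        t = threshold_otsu(sl.astype(float))
        mask[i] = sl > t
    return mask
```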
Automated choroid segmentation of three-dimensional SD-OCT images by incorporating EDI-OCT images.
Chen, Qiang; Niu, Sijie; Fang, Wangyi; Shuai, Yuanlu; Fan, Wen; Yuan, Songtao; Liu, Qinghuai
2018-05-01
The measurement of choroidal volume is more closely related to eye diseases than choroidal thickness, because the choroidal volume can reflect the diseases more comprehensively. The purpose is to automatically segment the choroid in three-dimensional (3D) spectral-domain optical coherence tomography (SD-OCT) images. We present a novel choroid segmentation strategy for SD-OCT images that incorporates enhanced depth imaging OCT (EDI-OCT) images. The lower boundary of the choroid, namely the choroid–sclera junction (CSJ), is almost invisible in SD-OCT images, but visible in EDI-OCT images. During SD-OCT imaging, EDI-OCT images can be generated for the same eye. Thus, we present an EDI-OCT-driven choroid segmentation method for SD-OCT images, in which the choroid segmentation results of the EDI-OCT images are used to estimate the average choroidal thickness and to improve the construction of the CSJ feature space of the SD-OCT images. We also present a registration method between EDI-OCT and SD-OCT images based on retinal thickness and Bruch's membrane (BM) position. The CSJ surface is obtained with a 3D graph search in the CSJ feature space. Experimental results with 768 images (6 cubes, 128 B-scan images per cube) from 2 healthy persons, 2 age-related macular degeneration (AMD) and 2 diabetic retinopathy (DR) patients, and 210 B-scan images from another 8 healthy persons and 21 patients demonstrate that our method can achieve high segmentation accuracy. The mean choroid volume difference and overlap ratio for the 6 cubes between our proposed method and outlines drawn by experts were −1.96 µm³ and 88.56%, respectively. Our method is effective for the 3D choroid segmentation of SD-OCT images, with segmentation accuracy and stability comparable to manual segmentation. Copyright © 2017. Published by Elsevier B.V.
Saito, Atsushi; Nawano, Shigeru; Shimizu, Akinobu
2017-05-01
This paper addresses joint optimization for segmentation and shape priors, including translation, to overcome inter-subject variability in the location of an organ. Because a simple extension of the previous exact optimization method is too computationally complex, we propose a fast approximation for optimization. The effectiveness of the proposed approximation is validated in the context of gallbladder segmentation from a non-contrast computed tomography (CT) volume. After spatial standardization and estimation of the posterior probability of the target organ, simultaneous optimization of the segmentation, shape, and location priors is performed using a branch-and-bound method. Fast approximation is achieved by combining sampling in the eigenshape space to reduce the number of shape priors and an efficient computational technique for evaluating the lower bound. Performance was evaluated using threefold cross-validation of 27 CT volumes. Optimization in terms of translation of the shape prior significantly improved segmentation performance. The proposed method achieved a result of 0.623 on the Jaccard index in gallbladder segmentation, which is comparable to that of state-of-the-art methods. The computational efficiency of the algorithm is confirmed to be good enough to allow execution on a personal computer. Joint optimization of the segmentation, shape, and location priors was proposed, and it proved to be effective in gallbladder segmentation with high computational efficiency.
Automated posterior cranial fossa volumetry by MRI: applications to Chiari malformation type I.
Bagci, A M; Lee, S H; Nagornaya, N; Green, B A; Alperin, N
2013-09-01
Quantification of PCF volume and the degree of PCF crowdedness were found to be beneficial for the differential diagnosis of tonsillar herniation and the prediction of surgical outcome in CMI. However, a lack of automated methods limits the clinical use of PCF volumetry. An atlas-based method for automated PCF segmentation tailored for CMI is presented. The method's performance is assessed in terms of accuracy and spatial overlap with manual segmentation. The degree of association between PCF volumes and the lengths of previously proposed linear landmarks is reported. T1-weighted volumetric MR imaging data with 1-mm isotropic resolution, obtained with a 3T scanner from 14 patients with CMI and 3 healthy subjects, were used for the study. Manually delineated PCF from 9 patients was used to establish a CMI-specific reference for an atlas-based automated PCF parcellation approach. Agreement between manual and automated segmentation of 5 different CMI datasets was verified by means of the t test. Measurement reproducibility was established through the use of 2 repeated scans from 3 healthy subjects. The degree of linear association between PCF volume and 6 linear landmarks was determined by means of Pearson correlation. PCF volumes measured by the automated method and by manual delineation were similar, 196.2 ± 8.7 mL versus 196.9 ± 11.0 mL, respectively. The mean relative difference of -0.3 ± 1.9% was not statistically significant. Low measurement variability, with a mean absolute percentage value of 0.6 ± 0.2%, was achieved. None of the PCF linear landmarks were significantly associated with PCF volume. PCF and tissue content volumes can be reliably measured in patients with CMI by use of an atlas-based automated segmentation method.
McCracken, D Jay; Higginbotham, Raymond A; Boulter, Jason H; Liu, Yuan; Wells, John A; Halani, Sameer H; Saindane, Amit M; Oyesiku, Nelson M; Barrow, Daniel L; Olson, Jeffrey J
2017-06-01
Sphenoid wing meningiomas (SWMs) can encase arteries of the circle of Willis, increasing their susceptibility to intraoperative vascular injury and severe ischemic complications. To demonstrate the effect of circumferential vascular encasement in SWM on postoperative ischemia. A retrospective review of 75 patients surgically treated for SWM from 2009 to 2015 was undertaken to determine the degree of circumferential vascular encasement (0°-360°) as assessed by preoperative magnetic resonance imaging (MRI). A novel grading system describing "maximum" and "total" arterial encasement scores was created. Postoperative MRIs were reviewed for total ischemia volume measured on sequential diffusion-weighted images. Of the 75 patients, 89.3% had some degree of vascular involvement with a median maximum encasement score of 3.0 (2.0-3.0) in the internal carotid artery (ICA), M1, M2, and A1 segments; 76% of patients had some degree of ischemia with median infarct volume of 3.75 cm³ (0.81-9.3 cm³). Univariate analysis determined risk factors associated with larger infarction volume, which were encasement of the supraclinoid ICA (P < .001), M1 segment (P < .001), A1 segment (P = .015), and diabetes (P = .019). As the maximum encasement score increased from 1 to 5 in each of the significant arterial segments, so did mean and median infarction volume (P < .001). Risk for devastating ischemic injury >62 cm³ was found when the ICA, M1, and A1 vessels all had ≥360° involvement (P = .001). Residual tumor was associated with smaller infarct volumes (P = .022). As infarction volume increased, so did modified Rankin Score at discharge (P = .025). Subtotal resection should be considered in SWM with significant vascular encasement of proximal arteries to limit postoperative ischemic complications. Copyright © 2017 by the Congress of Neurological Surgeons
Gompelmann, Daniela; Shah, Pallav L; Valipour, Arschang; Herth, Felix J F
2018-06-12
Bronchoscopic thermal vapor ablation (BTVA) is one of the endoscopic lung volume reduction (ELVR) techniques that aims at reducing hyperinflation in patients with advanced emphysema to improve respiratory mechanics. Through targeted segmental vapor ablation, an inflammatory response leads to tissue and volume reduction of the most diseased emphysematous segments. So far, BTVA has been demonstrated in several single-arm trials and 1 multinational randomized controlled trial to improve lung function, exercise capacity, and quality of life in patients with upper lobe-predominant emphysema irrespective of collateral ventilation. In this review, we emphasize the practical aspects of this ELVR method. Patients with upper lobe-predominant emphysema, forced expiratory volume in 1 second (FEV1) between 20 and 45% of predicted, residual volume (RV) > 175% of predicted, and carbon monoxide diffusing capacity (DLCO) ≥20% of predicted can be considered for BTVA treatment. Prior to the procedure, dedicated software assists in identifying the target segments with the highest emphysema index, volume, and heterogeneity index relative to the untreated ipsilateral lung lobes. The procedure may be performed under deep sedation or, preferably, under general anesthesia. After positioning of the BTVA catheter and occlusion of the target segment by the occlusion balloon, heated water vapor is delivered over a predetermined time according to the vapor dose. After the procedure, patients should be strictly monitored to proactively detect symptoms of a localized inflammatory reaction, which may temporarily worsen the clinical status of the patient, and to detect complications. As the data are still very limited, BTVA should be performed within clinical trials or comprehensive registries where the product is commercially available. © 2018 S. Karger AG, Basel.
Hippocampal subfield segmentation in temporal lobe epilepsy: Relation to outcomes.
Kreilkamp, B A K; Weber, B; Elkommos, S B; Richardson, M P; Keller, S S
2018-06-01
To investigate the clinical and surgical outcome correlates of preoperative hippocampal subfield volumes in patients with refractory temporal lobe epilepsy (TLE) using a new magnetic resonance imaging (MRI) multisequence segmentation technique. We recruited 106 patients with TLE and hippocampal sclerosis (HS) who underwent conventional T1-weighted and T2 short TI inversion recovery MRI. An automated hippocampal segmentation algorithm was used to identify twelve subfields in each hippocampus. A total of 76 patients underwent amygdalohippocampectomy and postoperative seizure outcome assessment using the standardized ILAE classification. Semiquantitative hippocampal internal architecture (HIA) ratings were correlated with hippocampal subfield volumes. Patients with left TLE had smaller volumes of the contralateral presubiculum and hippocampus-amygdala transition area compared to those with right TLE. Patients with right TLE had reduced contralateral hippocampal tail volumes and improved outcomes. In all patients, there were no significant relationships between hippocampal subfield volumes and clinical variables such as duration and age at onset of epilepsy. There were no significant differences in any hippocampal subfield volumes between patients who were rendered seizure free and those with persistent postoperative seizure symptoms. Ipsilateral but not contralateral HIA ratings were significantly correlated with gross hippocampal and subfield volumes. Our results suggest that ipsilateral hippocampal subfield volumes are not related to the chronicity/severity of TLE. We did not find any hippocampal subfield volume or HIA rating differences in patients with optimal and unfavorable outcomes. In patients with TLE and HS, sophisticated analysis of hippocampal architecture on MRI may have limited value for prediction of postoperative outcome. © 2018 The Authors. Acta Neurologica Scandinavica Published by John Wiley & Sons Ltd.
Studies of Big Data metadata segmentation between relational and non-relational databases
NASA Astrophysics Data System (ADS)
Golosova, M. V.; Grigorieva, M. A.; Klimentov, A. A.; Ryabinkin, E. A.; Dimitrov, G.; Potekhin, M.
2015-12-01
In recent years the concept of Big Data has become well established in IT. Systems managing large data volumes produce metadata that describe the data and workflows. These metadata are used to obtain information about the current system state and for statistical and trend analysis of the processes these systems drive. Over time, the amount of stored metadata can grow dramatically. In this article we present our studies demonstrating how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.
Systolic Processor Array For Recognition Of Spectra
NASA Technical Reports Server (NTRS)
Chow, Edward T.; Peterson, John C.
1995-01-01
Spectral signatures of materials detected and identified quickly. The Spectral Analysis Systolic Processor Array (SPA2) is relatively inexpensive and satisfies the need to analyze the large, complex volume of multispectral data generated by imaging spectrometers to extract desired information; the computational performance needed to do this in real time exceeds that of current supercomputers. Locates highly similar segments or contiguous subsegments in two different spectra at a time. Compares sampled spectra from instruments with a database of spectral signatures of known materials. Computes and reports scores that express degrees of similarity between sampled and database spectra.
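As a hedged illustration of the kind of similarity score mentioned above, the snippet compares fixed-length segments of a sampled spectrum against a reference signature using cosine similarity; the systolic-array algorithm itself and its actual scoring function are not reproduced here.

```python
import numpy as np

def segment_similarity(sample, reference, seg_len=32):
    """Best cosine-similarity score between segments of a sampled spectrum and a reference.

    Slides a window of `seg_len` channels over both spectra and reports the best
    matching pair of segments; `seg_len` is an illustrative choice.
    """
    best = -1.0
    for i in range(len(sample) - seg_len + 1):
        s = sample[i:i + seg_len]
        for j in range(len(reference) - seg_len + 1):
            r = reference[j:j + seg_len]
            score = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r) + 1e-12)
            best = max(best, score)
    return best
```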
LACIE performance predictor final operational capability program description, volume 3
NASA Technical Reports Server (NTRS)
1976-01-01
The requirements and processing logic for the LACIE Error Model program (LEM) are described. This program is an integral part of the Large Area Crop Inventory Experiment (LACIE) system. LEM is that portion of the LPP (LACIE Performance Predictor) which simulates the sample segment classification, strata yield estimation, and production aggregation. LEM controls repetitive Monte Carlo trials based on input error distributions to obtain statistical estimates of the wheat area, yield, and production at different levels of aggregation. LEM interfaces with the rest of the LPP through a set of data files.
Measurement of pelvic osteolytic lesions in follow-up studies after total hip arthroplasty
NASA Astrophysics Data System (ADS)
Castaneda, Benjamin; Tamez-Pena, Jose G.; Totterman, Saara; O'Keefe, Regis; Looney, R. John
2006-03-01
Previous studies have demonstrated the plausibility of using volumetric computed tomography to provide an accurate representation and measurement of the volume of pelvic osteolytic lesions following total hip joint replacement. These studies have been performed manually (or computer-assisted) by expert radiologists, with the disadvantage of poor reproducibility. The purpose of this work is to minimize the effect of user interaction in these measurements by introducing Laplacian level set methods into the volume segmentation process and using temporal articulated registration in order to follow the evolution of a lesion over time. Laplacian level set methods reduce inter- and intra-observer variability by attaching the segmented contour to edges defined in the image while maintaining smoothness. The registration process allows information about the lesion from the first visit to be used in the segmentation process at the current visit. This work compares the automated results on 7 volunteers against the volumes measured manually. The results show that the proposed technique is able to track osteolytic lesions and detect changes in volume over time. Intra-reader and inter-observer variabilities were reduced.
NASA Technical Reports Server (NTRS)
Agnew, Donald L.; Jones, Peter A.
1989-01-01
A study was conducted to define reasonable and representative LDR system concepts for the purpose of defining a technology development program aimed at providing the requisite technological capability necessary to start LDR development by the end of 1991. This volume presents thirteen technology assessments and technology development plans, as well as an overview and summary of the LDR concepts. Twenty-two proposed augmentation projects are described (selected from more than 30 candidates). The five LDR technology areas most in need of supplementary support are: cryogenic cooling; astronaut assembly of the optically precise LDR in space; active segmented primary mirror; dynamic structural control; and primary mirror contamination control. Three broad, time-phased, five-year programs were synthesized from the 22 projects, scheduled, and funding requirements estimated.
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Wei, Jun; Kazerooni, Ella A.
2016-03-01
We are developing an automated method to identify the best-quality segment among the corresponding segments in multiple-phase cCTA. The coronary artery trees are automatically extracted from the different cCTA phases using our multi-scale vessel segmentation and tracking method. An automated registration method is then used to align the multiple-phase artery trees. The corresponding coronary artery segments are identified in the registered vessel trees and are straightened by curved planar reformation (CPR). Four features are extracted from each segment in each phase as quality indicators in the original CT volume and the straightened CPR volume. Each quality indicator is used as a voting classifier to vote on the corresponding segments. A newly designed weighted voting ensemble (WVE) classifier is finally used to determine the best-quality coronary segment. An observer preference study was conducted with three readers who visually rated the quality of the vessels on a 1-to-6 ranking scale. Six and 10 cCTA cases were used as the training and test sets, respectively, in this preliminary study. For the 10 test cases, the agreement between the automatically identified best-quality (AI-BQ) segments and the radiologist's top-2 rankings is 79.7%, and the agreements between AI-BQ and the other two readers are 74.8% and 83.7%, respectively. The results demonstrate that the performance of our automated method is comparable to that of experienced readers for identification of the best-quality coronary segments.
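A minimal sketch of the weighted-voting idea is shown below: each quality indicator casts a vote for the phase in which its corresponding segment scores best, and the votes are combined with per-indicator weights. The array layout, uniform default weights, and function name are assumptions for illustration; the study's WVE classifier and its trained weights are not described here in enough detail to reproduce.

```python
import numpy as np

def weighted_vote_best_segment(quality_scores, weights=None):
    """Pick the best-quality segment among corresponding multi-phase segments.

    quality_scores: (n_phases, n_indicators) array, where each column is one
    quality indicator evaluated on the corresponding segment in every phase.
    Each indicator "votes" for the phase it scores highest; votes are combined
    with per-indicator weights (uniform by default).
    """
    scores = np.asarray(quality_scores, dtype=float)
    n_phases, n_indicators = scores.shape
    if weights is None:
        weights = np.ones(n_indicators)
    votes = np.zeros(n_phases)
    for j in range(n_indicators):
        votes[np.argmax(scores[:, j])] += weights[j]   # indicator j votes for its best phase
    return int(np.argmax(votes))
```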
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, F.; Graduate Program in Biomedical Engineering, University of Western Ontario, London, Ontario N6A 5B9; Svenningsen, S.
Purpose: Pulmonary magnetic-resonance-imaging (MRI) and x-ray computed-tomography have provided strong evidence of spatially and temporally persistent lung structure-function abnormalities in asthmatics. This has generated a shift in the understanding of lung disease and supports the use of imaging biomarkers as intermediate endpoints of asthma severity and control. In particular, pulmonary ¹H MRI can be used to provide quantitative lung structure-function measurements longitudinally and in response to treatment. However, to translate such biomarkers of asthma, robust methods are required to segment the lung from pulmonary ¹H MRI. Therefore, the authors' objective was to develop a pulmonary ¹H MRI segmentation algorithm to provide regional measurements with the precision and speed required to support clinical studies. Methods: The authors developed a method to segment the left and right lung from ¹H MRI acquired in 20 asthmatics, including five well-controlled and 15 severe, poorly controlled participants who provided written informed consent to a study protocol approved by Health Canada. Same-day spirometry and plethysmography measurements of lung function and volume were acquired, as well as ¹H MRI using a whole-body radiofrequency coil and a fast spoiled gradient-recalled echo sequence at a fixed lung volume (functional residual capacity + 1 l). The authors incorporated a left-to-right lung volume proportion prior based on the Potts model and derived a volume-proportion preserved Potts model, which was approximated through convex relaxation and further represented by a dual volume-proportion preserved max-flow model. The max-flow model led to a linear problem with convex and linear equality constraints that implicitly encoded the proportion prior. To implement the algorithm, ¹H MRI was resampled into ∼3 × 3 × 3 mm³ isotropic voxel space. Two observers placed seeds on each lung and on the background of 20 pulmonary ¹H MR images in a randomized dataset, on five occasions, five consecutive days in a row. Segmentation accuracy was evaluated using the Dice similarity coefficient (DSC) of the segmented thoracic cavity with comparison to five rounds of manual segmentation by an expert observer. The authors also evaluated the root-mean-squared error (RMSE) of the Euclidean distance between lung surfaces, and the absolute and percent volume errors. Reproducibility was measured using the coefficient of variation (CoV) and intraclass correlation coefficient (ICC) for two observers who repeated segmentation measurements five times. Results: For the five well-controlled asthmatics, forced expiratory volume in 1 s (FEV₁)/forced vital capacity (FVC) was 83% ± 7% and FEV₁ was 86 ± 9% predicted. For the 15 severe, poorly controlled asthmatics, FEV₁/FVC = 66% ± 17% and FEV₁ = 72 ± 27% predicted. The DSC for algorithm and manual segmentation was 91% ± 3%, 92% ± 2%, and 91% ± 2% for the left, right, and whole lung, respectively. RMSE was 4.0 ± 1.0 mm for each of the left, right, and whole lung. The absolute (percent) volume errors were 0.1 l (∼6%) for each of the right and left lung and ∼0.2 l (∼6%) for the whole lung. Intra- and inter-observer CoV (ICC) were <0.5% (>0.91) for DSC and <4.5% (>0.93) for RMSE. While segmentation required 10 s, including ∼6 s for user interaction, the smallest detectable difference was 0.24 l for algorithm measurements, which was similar to manual measurements.
Conclusions: This lung segmentation approach provided the necessary and sufficient precision and accuracy required for research and clinical studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derksen, A; Koenig, L; Heldmann, S
Purpose: To improve the results of deformable image registration (DIR) in adaptive radiotherapy for large bladder deformations in CT/CBCT pelvis imaging. Methods: A variational multi-modal DIR algorithm is incorporated in a joint iterative scheme, alternating between segmentation-based bladder matching and registration. Using an initial DIR to propagate the bladder contour to the CBCT, in a segmentation step the contour is improved by discrete image-gradient sampling along all surface normals and adapting the delineation to match the location of each maximum (with a search range of ±5/2 mm at the superior/inferior bladder side and a step size of 0.5 mm). An additional graph-cut based constraint limits the maximum difference between neighboring points. This improved contour is utilized in a subsequent DIR with a surface matching constraint. By calculating a Euclidean distance map of the improved contour surface, the new constraint enforces the DIR to map each point of the original contour onto the improved contour. The resulting deformation is then used as a starting guess to compute a deformation update, which can again be used for the next segmentation step. The result is a dense deformation able to capture much larger bladder deformations. The new method is evaluated on ten CT/CBCT male pelvis datasets, calculating Dice similarity coefficients (DSC) between the final propagated bladder contour and a manually delineated gold standard on the CBCT image. Results: Over all ten cases, an average DSC of 0.93±0.03 is achieved on the bladder. Compared with the initial DIR (0.88±0.05), the DSC is equal (2 cases) or improved (8 cases). Additionally, the DSC accuracy of the femoral bones (0.94±0.02) was not affected. Conclusion: The new approach shows that, using the presented alternating segmentation/registration scheme, the results of bladder DIR in the pelvis region can be greatly improved, especially for cases with large variations in bladder volume. Fraunhofer MEVIS received funding from a research grant by Varian Medical Systems.
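The Euclidean distance map underlying the surface-matching constraint can be computed with a standard distance transform; the sketch below assumes a voxelized surface mask and a simple sum-of-distances penalty, which only illustrates the ingredients of such a constraint, not the variational term used in the study.

```python
import numpy as np
from scipy import ndimage

def surface_distance_map(surface_mask, voxel_spacing_mm):
    """Euclidean distance (in mm) from every voxel to the improved contour surface.

    surface_mask: boolean volume that is True on the improved bladder surface.
    A DIR surface-matching term can penalise deformations that map the original
    contour onto voxels with large distance values.
    """
    return ndimage.distance_transform_edt(~surface_mask, sampling=voxel_spacing_mm)

def surface_match_penalty(dist_map, warped_contour_voxels):
    """Sum of distances of warped original-contour points to the improved surface.

    warped_contour_voxels: (n, 3) integer voxel indices of the original contour
    after applying the current deformation (an assumed representation).
    """
    idx = tuple(np.asarray(warped_contour_voxels).T)
    return float(dist_map[idx].sum())
```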
Comparison of volume estimation methods for pancreatic islet cells
NASA Astrophysics Data System (ADS)
Dvořák, Jiří; Švihlík, Jan; Habart, David; Kybic, Jan
2016-03-01
In this contribution we study different methods of automatic volume estimation for pancreatic islets, which can be used in the quality control step prior to islet transplantation. The total islet volume is an important criterion in quality control. The individual islet volume distribution is also of interest -- it has been indicated that smaller islets can be more effective. A 2D image of a microscopy slice containing the islets is acquired. The inputs to the volume estimation methods are segmented images of individual islets; the segmentation step is not discussed here. We consider simple methods of volume estimation assuming that the islets have spherical or ellipsoidal shape. We also consider a local stereological method, namely the nucleator. The nucleator does not rely on any shape assumptions and provides unbiased estimates if isotropic sections through the islets are observed. We present a simulation study comparing the performance of the volume estimation methods in different scenarios and an experimental study comparing the methods on a real dataset.
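The sphere and ellipsoid estimators referred to above can be written down directly from a segmented 2D islet profile; the sketch below uses the equivalent-area circle for the spherical case and a prolate-ellipsoid convention for the ellipsoidal case, both of which are assumptions about the exact formulas, not the paper's definitions.

```python
import numpy as np

def islet_volume_sphere(mask_2d, pixel_size_um):
    """Volume estimate assuming a spherical islet, from its 2D segmented profile.

    The radius is taken from the equivalent-area circle of the profile, so
    V = (4/3) * pi * r^3. mask_2d is a boolean islet mask; pixel_size_um is the
    pixel edge length in micrometres.
    """
    area_um2 = mask_2d.sum() * pixel_size_um ** 2
    r = np.sqrt(area_um2 / np.pi)
    return 4.0 / 3.0 * np.pi * r ** 3

def islet_volume_ellipsoid(mask_2d, pixel_size_um):
    """Volume estimate assuming a prolate ellipsoid of revolution.

    The profile's extents give the major (a) and minor (b) semi-axes; the third
    semi-axis is assumed equal to b, so V = (4/3) * pi * a * b^2.
    """
    height = np.any(mask_2d, axis=1).sum() * pixel_size_um
    width = np.any(mask_2d, axis=0).sum() * pixel_size_um
    a, b = max(height, width) / 2.0, min(height, width) / 2.0
    return 4.0 / 3.0 * np.pi * a * b ** 2
```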
Lee, Ki Nam; Yoon, Seong Kuk; Sohn, Choon Hee; Choi, Pil Jo; Webb, W Richard
2002-01-01
To evaluate the influence of lung volume on dependent lung opacity seen at thin-section CT. In thirteen healthy volunteers, thin-section CT scans were performed at three levels (upper, mid, and lower portion of the lung) and at different lung volumes (10, 30, 50, and 100% vital capacity), using spirometric gated CT. Using a three-point scale, two radiologists determined whether dependent opacity was present, and estimated its degree. Regional lung attenuation at a level 2 cm above the diaphragm was determined using semiautomatic segmentation, and the diameter of a branch of the right lower posterior basal segmental artery was measured at each different vital capacity. At all three anatomic levels, dependent opacity occurred significantly more often at lower vital capacities (10, 30%) than at 100% vital capacity (p = 0.001). Visually estimated dependent opacity was significantly related to regional lung attenuation (p < 0.0001), which in dependent areas progressively increased as vital capacity decreased (p < 0.0001). The presence of dependent opacity and regional lung attenuation of a dependent area correlated significantly with increased diameter of a segmental arterial branch (r = 0.493 and p = 0.0002; r = 0.486 and p = 0.0003, respectively). Visual estimation and CT measurements of dependent opacity obtained by semiautomatic segmentation are significantly influenced by lung volume and are related to vascular diameter.
NASA Astrophysics Data System (ADS)
Riggs, S. R.; Thieler, E. R.; Mallinson, D. A.; Culver, S. J.; Corbett, D. R.; Hoffman, C. W.
2002-12-01
The NE North Carolina coastal system contains an exceptionally thick and well preserved Quaternary stratigraphic record that is the focus of a five-year Cooperative Coastal Geology Program between the USGS, several academic institutions, and state agencies. The major goal is to map this Quaternary section on the inner continental shelf, Outer Banks barrier islands, Albemarle-Pamlico estuarine system, and adjacent land areas. The program objectives are to define the geologic framework, develop the detailed evolutionary history, and understand the ongoing process dynamics driving this large, complex, and rapidly changing, high-energy coastal system. Preliminary data synthesis demonstrates that the major controls dictating the present health and future evolution of this coastal system include the following. 1) The regional late Pleistocene morphology constitutes the underlying geologic framework that the Holocene system has inherited. 2) The controlling paleotopography is a series of lowstand drainage basins consisting of trunk and tributary streams and associated interstream divides that are being drowned. 3) Three major sediment sources dictate the highly variable sand resources available to specific barrier segments and include riverine channel and deltaic deposits associated with lowstand trunk streams, the large cross-shelf cape shoal sand deposits, and sand-rich units occurring within the adjacent shoreface and inner-shelf strata. 4) Wherever large sand supplies have historically been available, the barrier segments occur as complex islands with large sand volumes producing high and wide barriers, whereas barrier segments without adequate sand supplies are sediment starved and occur as simple overwash barriers. 5) Human modification of the barrier islands over the past seven decades represents a major force that has significantly changed the barrier island dynamics and evolution. 6) The Albemarle Embayment appears to have a slightly higher rate of sea-level rise than adjacent regions due to a slow rate of regional subsidence. Consequently, if the ongoing pattern of storm activity and sea-level rise either continues or increases during the next few decades to centuries, several simple overwash barrier segments on the Outer Banks, that are currently disintegrating, will ultimately collapse into Pamlico Sound. These barrier segments will likely back-step across the open marine Pamlico Embayment and reform on the landward side. A few sand-rich complex barrier segments will persist as isolated, but perched and eroding islands for some longer period of time. In contrast, simple overwash barrier segments that have received minimal human modification and are associated with narrow and shallow back-barrier sounds, appear to be maintaining themselves in their upward and landward migration in response to ongoing storms and sea-level rise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thibault, Isabelle; Department of Radiation Oncology, Centre Hospitalier de L'Universite de Québec–Université Laval, Quebec, Quebec; Whyne, Cari M.
Purpose: To determine a threshold of vertebral body (VB) osteolytic or osteoblastic tumor involvement that would predict vertebral compression fracture (VCF) risk after stereotactic body radiation therapy (SBRT), using volumetric image-segmentation software. Methods and Materials: A computational semiautomated skeletal metastasis segmentation process refined in our laboratory was applied to the pretreatment planning CT scan of 100 vertebral segments in 55 patients treated with spine SBRT. Each VB was segmented and the percentage of lytic and/or blastic disease by volume determined. Results: The cumulative incidence of VCF at 3 and 12 months was 14.1% and 17.3%, respectively. The median follow-up was 7.3 months (range, 0.6-67.6 months). In all, 56% of segments were determined lytic, 23% blastic, and 21% mixed, according to clinical radiologic determination. Within these 3 clinical cohorts, the segmentation-determined mean percentages of lytic and blastic tumor were 8.9% and 6.0%, 0.2% and 26.9%, and 3.4% and 15.8% by volume, respectively. On the basis of the entire cohort (n=100), a significant association was observed for the osteolytic percentage measures and the occurrence of VCF (P<.001) but not for the osteoblastic measures. The most significant lytic disease threshold was observed at ≥11.6% (odds ratio 37.4, 95% confidence interval 9.4-148.9). On multivariable analysis, ≥11.6% lytic disease (P<.001), baseline VCF (P<.001), and SBRT with ≥20 Gy per fraction (P=.014) were predictive. Conclusions: Pretreatment lytic VB disease volumetric measures, independent of the blastic component, predict for SBRT-induced VCF. Larger-scale trials evaluating our software are planned to validate the results.
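A hedged sketch of the volumetric measure the abstract reports: the percent lytic involvement of a vertebral body computed from binary masks and flagged against the reported ≥11.6% threshold. The mask arrays and voxel size are hypothetical; the actual semiautomated segmentation process is not reproduced.

```python
import numpy as np

def percent_lytic(vb_mask, lytic_mask, voxel_volume_mm3=1.0):
    """Percent of vertebral-body volume occupied by lytic tumor."""
    vb_vol = vb_mask.sum() * voxel_volume_mm3
    lytic_vol = (lytic_mask & vb_mask).sum() * voxel_volume_mm3
    return 100.0 * lytic_vol / vb_vol

# Hypothetical 3D masks (True inside the structure).
vb = np.zeros((40, 40, 30), dtype=bool); vb[5:35, 5:35, 5:25] = True
lytic = np.zeros_like(vb); lytic[10:20, 10:20, 8:18] = True

p = percent_lytic(vb, lytic)
high_risk = p >= 11.6    # threshold reported in the abstract
print(f"lytic fraction = {p:.1f}% -> high VCF risk: {high_risk}")
```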
NASA Technical Reports Server (NTRS)
1997-01-01
An AGATE Concepts Demonstration was conducted at the Annual National Air Transportation Association (NATA) Convention in 1997. Following a 5-minute introductory briefing, an interactive simulation of a single-pilot, single-engine aircraft was conducted. The participant was able to take off, fly a brief enroute segment, fly a Global Positioning System (GPS) approach and landing, and repeat the approach and landing segment. The participant was provided an advanced 'highway-in-the-sky' presentation on both a simulated head-up display and on a large LCD head-down display to follow throughout the flight. A single-lever power control and display concept was also provided for control of the engine throughout the flight. A second head-down, multifunction display in the instrument panel provided a moving map display for navigation purposes and monitoring of the status of the aircraft's systems.
Nitric oxide-dependent neutrophil recruitment: role in nasal secretion.
Cardell, L O; Agustí, C; Nadel, J A
2000-12-01
Leukotriene B4 (LTB4), an inflammatory mediator, is a potent chemoattractant for neutrophils that plays an important role in nasal secretion via release of elastase. Nitric oxide (NO) is an important modulator of leucocyte-endothelial cell interactions, endogenously produced in large quantities in the paranasal sinuses. To examine the role of NO in LTB4-stimulated nasal secretion. A newly-developed method for isolating and superfusing a nasal segment in dogs was used. Instillation of LTB4 into the nasal segment caused a time-dependent increase in the volume of airway fluid and in the recruitment of neutrophils. N(G)-nitro-L-arginine-methylester (L-NAME), an inhibitor of NO synthase, prevented LTB4-induced neutrophil recruitment and nasal secretion. These studies show that NO modulates LTB4-induced neutrophil recruitment and subsequent fluid secretion in the nose, and they suggest a therapeutic role for NO inhibitors in modulating neutrophil-dependent nasal secretion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidil, Thomas; Hampu, Nicholas; Hillmyer, Marc A.
A lamellar diblock polymer combining a cross-linkable segment with a chemically etchable segment was cross-linked above its order–disorder temperature (TODT) to kinetically trap the morphology associated with the fluctuating disordered state. After removal of the etchable block, evaluation of the resulting porous thermoset allows for an unprecedented experimental characterization of the trapped disordered phase. Through a combination of small-angle X-ray scattering, nitrogen sorption, scanning electron microscopy, and electron tomography experiments we demonstrate that the nanoporous structure exhibits a narrow pore size distribution and a high surface to volume ratio and is bicontinuous over a large sample area. Together with the processability of the polymeric starting material, the proposed system combines attractive attributes for many advanced applications. In particular, it was used to design new composite membranes for the ultrafiltration of water.
NASA Astrophysics Data System (ADS)
Dicente Cid, Yashin; Mamonov, Artem; Beers, Andrew; Thomas, Armin; Kovalev, Vassili; Kalpathy-Cramer, Jayashree; Müller, Henning
2017-03-01
The analysis of large data sets can help to gain knowledge about specific organs or specific diseases, just as big data analysis does in many non-medical areas. This article aims to gain information from 3D volumes, namely the visual content of lung CT scans of a large number of patients. For the described data set, only limited annotation is available: the patients were all part of an ongoing screening program, and besides age and gender no information on the patient or the findings was available for this work. This scenario occurs regularly, as image data sets are produced and become available in increasingly large quantities, but manual annotations are often missing and clinical data such as text reports are often harder to share. We extracted a set of visual features from 12,414 CT scans of 9,348 patients who had CT scans of the lung taken in the context of a national lung screening program in Belarus. Lung fields were segmented by two segmentation algorithms, and only cases where both algorithms were able to find the left and right lung and had a Dice coefficient above 0.95 were analyzed. This ensures that only segmentations of good quality were used to extract features of the lung. Patients ranged in age from 0 to 106 years. Data analysis shows that age can be predicted with fairly high accuracy for persons under 15 years. Relatively good results were also obtained between 30 and 65 years, where a steady trend is seen. For young adults and older people the results are not as good, as variability is very high in these groups. Several visualizations of the data show the evolution patterns of lung texture, size and density with age. The experiments allow the evolution of the lung to be characterized, and the results show that even with limited metadata we can extract interesting information from large-scale visual data. These age-related changes (for example of the lung volume and the density histogram of the tissue) can also be taken into account for the interpretation of new cases. The database used includes patients who had suspicious findings on a chest X-ray, so it is not a group of healthy people, and only tendencies, not a model of a healthy lung at a specific age, can be derived.
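The quality gate described above (keeping only cases where two independent lung segmentations agree with a Dice coefficient above 0.95) reduces to a simple overlap computation; a minimal NumPy sketch with hypothetical masks is shown below.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool); b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Hypothetical lung masks from two independent segmentation algorithms.
seg1 = np.zeros((64, 64, 64), dtype=bool); seg1[10:50, 10:50, 10:50] = True
seg2 = np.zeros_like(seg1); seg2[11:50, 10:49, 10:50] = True

keep_case = dice(seg1, seg2) > 0.95    # quality gate used in the abstract
print(round(dice(seg1, seg2), 3), keep_case)
```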
A novel content-based active contour model for brain tumor segmentation.
Sachdeva, Jainy; Kumar, Vinod; Gupta, Indra; Khandelwal, Niranjan; Ahuja, Chirag Kamal
2012-06-01
Brain tumor segmentation is a crucial step in surgical and treatment planning. Intensity-based active contour models such as gradient vector flow (GVF), magnetostatic active contour (MAC) and fluid vector flow (FVF) have been proposed to segment homogeneous objects/tumors in medical images. In this study, extensive experiments are done to analyze the performance of intensity-based techniques for homogeneous tumors on brain magnetic resonance (MR) images. The analysis shows that the state-of-the-art methods fail to segment homogeneous tumors against a similar background or when these tumors show partial diversity toward the background. They also suffer from a pre-convergence problem in the presence of false edges/saddle points. Moreover, the presence of weak edges and diffused edges (due to edema around the tumor) leads to oversegmentation by intensity-based techniques. Therefore, the proposed content-based active contour (CBAC) method uses both intensity and texture information present within the active contour to overcome the above-stated problems and capture a large range in an image. It also proposes a novel use of the Gray-Level Co-occurrence Matrix to define a texture space for tumor segmentation. The effectiveness of this method is tested on two different real data sets (55 patients - more than 600 images) containing five different types of homogeneous, heterogeneous and diffused tumors, and on synthetic images (non-MR benchmark images). Remarkable results are obtained in segmenting homogeneous tumors of uniform intensity, heterogeneous tumors of complex content, and diffused tumors on MR images (T1-weighted, postcontrast T1-weighted and T2-weighted), as well as synthetic images (non-MR benchmark images of varying intensity, texture, noise content and false edges). Further, tumor volume is efficiently extracted from 2-dimensional slices, an approach termed 2.5-dimensional segmentation. Copyright © 2012 Elsevier Inc. All rights reserved.
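As an illustration of the GLCM-based texture space mentioned in the abstract, the sketch below computes a few standard co-occurrence properties for an image patch using scikit-image (graycomatrix/graycoprops, as named in recent releases); the CBAC energy functional itself is not reproduced and the patch is random stand-in data.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture(patch_uint8):
    """Contrast, homogeneity and energy from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(patch_uint8, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return {p: float(graycoprops(glcm, p).mean())
            for p in ("contrast", "homogeneity", "energy")}

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)  # stand-in for pixels inside the contour
print(glcm_texture(patch))
```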
A Method for the Evaluation of Thousands of Automated 3D Stem Cell Segmentations
Bajcsy, Peter; Simon, Mylene; Florczyk, Stephen; Simon, Carl G.; Juba, Derek; Brady, Mary
2016-01-01
There is no segmentation method that performs perfectly with any data set in comparison to human segmentation. Evaluation procedures for segmentation algorithms become critical for their selection. The problems associated with segmentation performance evaluations and visual verification of segmentation results are exaggerated when dealing with thousands of 3D image volumes because of the amount of computation and manual inputs needed. We address the problem of evaluating 3D segmentation performance when segmentation is applied to thousands of confocal microscopy images (z-stacks). Our approach is to incorporate experimental imaging and geometrical criteria, and map them into computationally efficient segmentation algorithms that can be applied to a very large number of z-stacks. This is an alternative approach to considering existing segmentation methods and evaluating most state-of-the-art algorithms. We designed a methodology for 3D segmentation performance characterization that consists of design, evaluation and verification steps. The characterization integrates manual inputs from projected surrogate “ground truth” of statistically representative samples and from visual inspection into the evaluation. The novelty of the methodology lies in (1) designing candidate segmentation algorithms by mapping imaging and geometrical criteria into algorithmic steps, and constructing plausible segmentation algorithms with respect to the order of algorithmic steps and their parameters, (2) evaluating segmentation accuracy using samples drawn from probability distribution estimates of candidate segmentations, and (3) minimizing human labor needed to create surrogate “truth” by approximating z-stack segmentations with 2D contours from three orthogonal z-stack projections and by developing visual verification tools. We demonstrate the methodology by applying it to a dataset of 1253 mesenchymal stem cells. The cells reside on 10 different types of biomaterial scaffolds, and are stained for actin and nucleus yielding 128 460 image frames (on average 125 cells/scaffold × 10 scaffold types × 2 stains × 51 frames/cell). After constructing and evaluating six candidates of 3D segmentation algorithms, the most accurate 3D segmentation algorithm achieved an average precision of 0.82 and an accuracy of 0.84 as measured by the Dice similarity index where values greater than 0.7 indicate a good spatial overlap. A probability of segmentation success was 0.85 based on visual verification, and a computation time was 42.3 h to process all z-stacks. While the most accurate segmentation technique was 4.2 times slower than the second most accurate algorithm, it consumed on average 9.65 times less memory per z-stack segmentation. PMID:26268699
Automated renal histopathology: digital extraction and quantification of renal pathology
NASA Astrophysics Data System (ADS)
Sarder, Pinaki; Ginley, Brandon; Tomaszewski, John E.
2016-03-01
The branch of pathology concerned with excess blood serum proteins being excreted in the urine pays particular attention to the glomerulus, a small intertwined bunch of capillaries located at the beginning of the nephron. Normal glomeruli allow a moderate amount of blood proteins to be filtered; proteinuric glomeruli allow large amounts of blood proteins to be filtered. Diagnosis of proteinuric diseases requires time-intensive manual examination of the structural compartments of the glomerulus from renal biopsies. Pathological examination includes cellularity of individual compartments, Bowman's and luminal space segmentation, cellular morphology, glomerular volume, capillary morphology, and more. Long examination times may lead to increased diagnosis time and/or reduced precision of the diagnostic process. Automatic quantification holds strong potential to reduce renal diagnostic time. We have developed a computational pipeline capable of automatically segmenting relevant features from renal biopsies. Our method first segments glomerular compartments from renal biopsies by isolating regions with high nuclear density. Gabor texture segmentation is used to accurately define glomerular boundaries. Bowman's and luminal spaces are segmented using morphological operators. Nuclei structures are segmented using color deconvolution, morphological processing, and bottleneck detection. Average computation time of feature extraction for a typical biopsy, comprising ~12 glomeruli, is ~69 s using an Intel(R) Core(TM) i7-4790 CPU, and is ~65X faster than manual processing. Using images from rat renal tissue samples, automatic glomerular structural feature estimation was reproducibly demonstrated for 15 biopsy images, which contained 148 individual glomeruli images. The proposed method holds immense potential to enhance information available while making clinical diagnoses.
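A hedged sketch of the nuclei-segmentation step outlined above (color deconvolution to the hematoxylin channel, thresholding, morphological clean-up), assuming scikit-image; the Gabor boundary refinement and bottleneck detection stages are omitted and the input tile is synthetic.

```python
import numpy as np
from skimage import color, filters, morphology

def nuclei_mask(rgb_image):
    """Rough nuclei segmentation: color deconvolution to hematoxylin,
    Otsu threshold, then morphological clean-up."""
    hed = color.rgb2hed(rgb_image)           # separate H&E stain contributions
    hematoxylin = hed[..., 0]
    mask = hematoxylin > filters.threshold_otsu(hematoxylin)
    mask = morphology.remove_small_objects(mask, min_size=30)
    return morphology.binary_opening(mask, morphology.disk(2))

rng = np.random.default_rng(1)
fake_rgb = rng.random((128, 128, 3))          # stand-in for a biopsy tile
print(int(nuclei_mask(fake_rgb).sum()), "nuclei pixels")
```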
A semi-automated image analysis procedure for in situ plankton imaging systems.
Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M
2015-01-01
Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images under laboratory controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large amount of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike the existing approaches for images acquired from laboratory, controlled conditions or clear waters, the target objects are often the majority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (< 5%), and remove the non-target objects (> 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was passed to a group-specific classifier to remove most non-target objects. After the object was classified, an expert or non-expert then manually removed the non-target objects that could not be removed by the procedure. The procedure was tested on 89,419 images collected in Chesapeake Bay, and results were consistent with visual counts with >80% accuracy for all three groups.
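The classification stage described above (HOG descriptors fed to support vector machines) can be sketched with scikit-image and scikit-learn as below; the patches and labels are random placeholders, and the two-level hierarchy and the MSER/adaptive-threshold segmentation are not reproduced.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def hog_descriptor(gray_patch):
    """Histogram-of-oriented-gradients feature vector for one extracted object."""
    return hog(gray_patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Hypothetical 64x64 grayscale crops and group labels
# (0 = arrow-like, 1 = copepod-like, 2 = gelatinous zooplankton).
patches = rng.random((40, 64, 64))
labels = rng.integers(0, 3, size=40)
X = np.array([hog_descriptor(p) for p in patches])

clf = SVC(kernel="rbf").fit(X, labels)        # first-level group classifier
print(clf.predict(X[:5]))
```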
Field lens multiplexing in holographic 3D displays by using Bragg diffraction based volume gratings
NASA Astrophysics Data System (ADS)
Fütterer, G.
2016-11-01
Applications that can profit from holographic 3D displays include the visualization of 3D data, computer-integrated manufacturing, 3D teleconferencing and mobile infotainment. However, one problem of holographic 3D displays, which are e.g. based on space-bandwidth-limited reconstruction of wave segments, is to realize a small form factor. Another problem is to provide a reasonably large volume for user placement, that is, an acceptable freedom of movement. Both problems should be solved without decreasing the image quality of virtual and real object points generated within the 3D display volume. A diffractive optical design using thick hologram gratings, which can be referred to as Bragg diffraction based volume gratings, can provide a small form factor and a high-definition, natural viewing experience of 3D objects. A large collimated wave can be provided by an anamorphic backlight unit. The complex-valued spatial light modulator adds local curvatures to the wave field it is illuminated with. The modulated wave field is focused onto the user plane by using a volume grating based field lens. Active-type liquid crystal gratings provide 1D fine tracking of approximately ±8°. Diffractive multiplexing has to be implemented for each color and for a set of focus functions providing coarse tracking. Boundary conditions of the diffractive multiplexing are explained with regard to the display layout and by using coupled wave theory (CWT). Aspects of diffractive cross talk and its suppression are discussed, including longitudinally apodized volume gratings.
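For orientation, a worked example of the coupled wave theory result commonly quoted for such gratings: Kogelnik's diffraction efficiency for a lossless phase transmission volume grating at Bragg incidence, eta = sin^2(pi * delta_n * d / (lambda * cos(theta))). The index modulation, thickness, wavelength and angle below are illustrative values only, not parameters from the paper.

```python
import numpy as np

def kogelnik_efficiency(delta_n, thickness_m, wavelength_m, bragg_angle_rad):
    """Diffraction efficiency of a lossless phase transmission volume grating
    at Bragg incidence (Kogelnik coupled-wave theory)."""
    nu = np.pi * delta_n * thickness_m / (wavelength_m * np.cos(bragg_angle_rad))
    return np.sin(nu) ** 2

# Illustrative values: index modulation, grating thickness, green wavelength, 15 deg in the medium.
eta = kogelnik_efficiency(0.015, 16e-6, 532e-9, np.deg2rad(15.0))
print(f"efficiency ~ {eta:.2f}")
```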
Michigan urban trunkline segments safety performance functions (SPFs) : final report.
DOT National Transportation Integrated Search
2016-07-01
This study involves the development of safety performance functions (SPFs) for urban and suburban trunkline segments in the : state of Michigan. Extensive databases were developed through the integration of traffic crash information, traffic volumes,...
Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.
Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto
2016-04-01
MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, and combination of the individual segmentations by label-fusion. We have compared Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and the Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisition of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR. © 2015 Wiley Periodicals, Inc.
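Of the label-fusion rules compared above, majority voting is the simplest; a minimal NumPy sketch over hypothetical co-registered atlas segmentations is given below (STAPLE, SBA and SIMPLE are not reproduced).

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse co-registered binary skull segmentations by per-voxel majority voting."""
    stack = np.stack(label_maps).astype(np.uint8)      # (n_atlases, z, y, x)
    return stack.sum(axis=0) > (stack.shape[0] / 2.0)

# Hypothetical propagated atlas segmentations (already registered to the target MRI).
rng = np.random.default_rng(3)
atlases = [rng.random((32, 32, 32)) > 0.5 for _ in range(5)]
fused = majority_vote(atlases)
print(fused.shape, round(float(fused.mean()), 3))
```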
Multi-stage learning for robust lung segmentation in challenging CT volumes.
Sofka, Michal; Wetzl, Jens; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin
2011-01-01
Simple algorithms for segmenting healthy lung parenchyma in CT are unable to deal with high density tissue common in pulmonary diseases. To overcome this problem, we propose a multi-stage learning-based approach that combines anatomical information to predict an initialization of a statistical shape model of the lungs. The initialization first detects the carina of the trachea, and uses this to detect a set of automatically selected stable landmarks on regions near the lung (e.g., ribs, spine). These landmarks are used to align the shape model, which is then refined through boundary detection to obtain fine-grained segmentation. Robustness is obtained through hierarchical use of discriminative classifiers that are trained on a range of manually annotated data of diseased and healthy lungs. We demonstrate fast detection (35s per volume on average) and segmentation of 2 mm accuracy on challenging data.
External stent for repair of secondary tracheomalacia.
Johnston, M R; Loeber, N; Hillyer, P; Stephenson, L W; Edmunds, L H
1980-09-01
Tracheomalacia was created in anesthetized piglets by submucosal resection of 3 to 5 tracheal cartilages. Measurements of airway pressure and flow showed that expiratory airway resistance is maximal at low lung volumes and is significantly increased by creation of the malacic segment. Cervical flexion increases expiratory airway resistance, whereas hyperextension of the neck reduces resistance toward normal. External stenting of the malacic segment reduces expiratory airway resistance, and the combination of external stenting and hyperextension restores airway resistance to normal except at low lung volume. Two patients with secondary tracheomalacia required tracheostomy and could not be decannulated after the indication for the tracheostomy was corrected. Both were successfully decannulated after external stenting of the malacic segment with rib grafts. Postoperative measurements of expiratory pulmonary resistance show a marked decrease from preoperative measurements. External stenting of symptomatic tracheomalacia reduces expiratory airway resistance by supporting and stretching the malacic segment and is preferable to prolonged internal stenting or tracheal resection.
Automated bone segmentation from large field of view 3D MR images of the hip joint
NASA Astrophysics Data System (ADS)
Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S.; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart
2013-10-01
Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head-neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is more likely suitable for future clinical applications such as extracting bone-cartilage interfaces for potential cartilage segmentation.
NASA Astrophysics Data System (ADS)
Hatze, Herbert; Baca, Arnold
1993-01-01
The development of noninvasive techniques for the determination of biomechanical body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers, etc.) receives increasing attention from the medical sciences (e.g., orthopaedic gait analysis), bioengineering, sport biomechanics, and the various space programs. In the present paper, a novel method is presented for determining body segment parameters rapidly and accurately. It is based on the video-image processing of four different body configurations and a finite mass-element human body model. The four video images of the subject in question are recorded against a black background, thus permitting the application of shape recognition procedures incorporating edge detection and calibration algorithms. In this way, a total of 181 object space dimensions of the subject's body segments can be reconstructed and used as anthropometric input data for the mathematical finite mass-element body model. The latter comprises 17 segments (abdomino-thoracic, head-neck, shoulders, upper arms, forearms, hands, abdomino-pelvic, thighs, lower legs, feet) and enables the user to compute all the required segment parameters for each of the 17 segments by means of the associated computer program. The hardware requirements are an IBM-compatible PC (1 MB memory) operating under MS-DOS or PC-DOS (Version 3.1 onwards) and incorporating a VGA-board with a feature connector for connecting it to a super video windows framegrabber board, for which a free 16-bit slot must be available. In addition, a VGA-monitor (50 - 70 Hz, horizontal scan rate at least 31.5 kHz), a common video camera and recorder, and a simple rectangular calibration frame are required. The advantage of the new method lies in its ease of application, its comparatively high accuracy, and in the rapid availability of the body segment parameters, which is particularly useful in clinical practice. An example of its practical application illustrates the technique.
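As a toy illustration of what such body segment parameters comprise, the sketch below computes volume, mass, mass-centre location and principal moments of inertia for a uniform circular-cylinder segment model; the formulas are standard rigid-body results, not the paper's 17-segment finite mass-element model, and the dimensions and density are illustrative.

```python
import numpy as np

def cylinder_segment_parameters(radius_m, length_m, density_kg_m3=1050.0):
    """Mass, mass-centre offset and principal moments of inertia for a
    uniform circular-cylinder segment model."""
    volume = np.pi * radius_m ** 2 * length_m
    mass = density_kg_m3 * volume
    i_long = 0.5 * mass * radius_m ** 2                           # about the long axis
    i_trans = mass * (3 * radius_m ** 2 + length_m ** 2) / 12.0   # about a transverse axis through the centre
    return {"volume_m3": volume, "mass_kg": mass,
            "com_from_proximal_m": length_m / 2.0,
            "I_long": i_long, "I_trans": i_trans}

print(cylinder_segment_parameters(0.05, 0.40))   # e.g., a forearm-like segment
```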
Lymph node segmentation on CT images by a shape model guided deformable surface method
NASA Astrophysics Data System (ADS)
Maleike, Daniel; Fabel, Michael; Tetzlaff, Ralf; von Tengg-Kobligk, Hendrik; Heimann, Tobias; Meinzer, Hans-Peter; Wolf, Ivo
2008-03-01
With many tumor entities, quantitative assessment of lymph node growth over time is important to make therapy choices or to evaluate new therapies. The clinical standard is to document diameters on transversal slices, which is not the best measure for a volume. We present a new algorithm to segment (metastatic) lymph nodes and evaluate the algorithm with 29 lymph nodes in clinical CT images. The algorithm is based on a deformable surface search, which uses statistical shape models to restrict free deformation. To model lymph nodes, we construct an ellipsoid shape model, which strives for a surface with strong gradients and user-defined gray values. The algorithm is integrated into an application, which also allows interactive correction of the segmentation results. The evaluation shows that the algorithm gives good results in the majority of cases and is comparable to time-consuming manual segmentation. The median volume error was 10.1% of the reference volume before and 6.1% after manual correction. Integrated into an application, it is possible to perform lymph node volumetry for a whole patient within the 10 to 15 minutes time limit imposed by clinical routine.
NASA Astrophysics Data System (ADS)
Jia, F.; Lichti, D.
2017-09-01
The optimal network design problem has been well addressed in geodesy and photogrammetry but has not received the same attention for terrestrial laser scanner (TLS) networks. The goal of this research is to develop a complete design system that can automatically provide an optimal plan for high-accuracy, large-volume scanning networks. The aim in this paper is to use three heuristic optimization methods, simulated annealing (SA), genetic algorithm (GA) and particle swarm optimization (PSO), to solve the first-order design (FOD) problem for a small-volume indoor network and make a comparison of their performances. The room is simplified as discretized wall segments and possible viewpoints. Each possible viewpoint is evaluated with a score table representing the wall segments visible from each viewpoint based on scanning geometry constraints. The goal is to find a minimum number of viewpoints that can obtain complete coverage of all wall segments with a minimal sum of incidence angles. The different methods have been implemented and compared in terms of the quality of the solutions, runtime and repeatability. The experiment environment was simulated from a room located on University of Calgary campus where multiple scans are required due to occlusions from interior walls. The results obtained in this research show that PSO and GA provide similar solutions while SA doesn't guarantee an optimal solution within limited iterations. Overall, GA is considered as the best choice for this problem based on its capability of providing an optimal solution and fewer parameters to tune.
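A simplified simulated-annealing sketch for the viewpoint-selection problem described above: binary inclusion of candidate viewpoints, full coverage enforced first, then the viewpoint count and summed incidence angles minimised. The score table, penalty weights and cooling schedule are hypothetical and much cruder than the paper's first-order design formulation.

```python
import numpy as np

rng = np.random.default_rng(4)
n_viewpoints, n_walls = 12, 30
# Hypothetical score table: incidence-angle cost per (viewpoint, wall segment); inf = not visible.
cost = rng.uniform(5, 60, size=(n_viewpoints, n_walls))
cost[rng.random(cost.shape) < 0.4] = np.inf

def objective(selected):
    """Penalise incomplete coverage, then minimise viewpoint count plus summed incidence angles."""
    if not selected.any():
        return 1e9
    best = cost[selected].min(axis=0)          # best incidence angle per wall segment
    if np.isinf(best).any():
        return 1e6 + np.isinf(best).sum()      # uncovered walls dominate the cost
    return 1e3 * selected.sum() + best.sum()

state = rng.random(n_viewpoints) < 0.5
temp = 100.0
for _ in range(2000):                          # simulated annealing over viewpoint subsets
    cand = state.copy()
    cand[rng.integers(n_viewpoints)] ^= True   # flip one viewpoint in or out
    delta = objective(cand) - objective(state)
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        state = cand
    temp *= 0.995

print("viewpoints used:", int(state.sum()), "objective:", round(objective(state), 1))
```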
Dimension of ring polymers in bulk studied by Monte-Carlo simulation and self-consistent theory.
Suzuki, Jiro; Takano, Atsushi; Deguchi, Tetsuo; Matsushita, Yushu
2009-10-14
We studied equilibrium conformations of ring polymers in the melt over a wide range of segment numbers N up to 4096 with Monte-Carlo simulation and obtained the N dependence of the radius of gyration of the chains, R_g. The simulation model used is the bond fluctuation model (BFM), in which polymer segments bear excluded volume; however, the excluded-volume effect vanishes as N → ∞, and a linear polymer can then be regarded as an ideal chain. For ring polymers in the melt, the exponent ν in the relationship R_g ∝ N^ν decreases gradually with increasing N and finally reaches the limiting value 1/3 for N ≥ 1536, i.e., R_g ∝ N^(1/3). We confirmed that the simulation result is consistent with that of the self-consistent theory including the topological effect and the osmotic pressure of ring polymers. Moreover, the averaged chain conformation of ring polymers in the equilibrium state was obtained in the BFM. In the small-N region, the segment density of each molecule near its center of mass decreases with increasing N. In the large-N region the decrease is suppressed, and the density is found to remain constant without showing any N dependence. This means that ring polymer molecules do not segregate from the other molecules even though ring polymers in the melt obey ν = 1/3. The considerably smaller dimensions of ring polymers at high molecular weight are due to their inherent lack of chain ends, and hence their less-entangled conformations.
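The scaling exponent ν in R_g ∝ N^ν can be estimated from simulation output by a log-log fit; a minimal sketch with invented R_g values, chosen only to be consistent with the reported ν → 1/3 limit, is shown below.

```python
import numpy as np

# Hypothetical R_g data for ring polymers in the melt (lattice units).
N = np.array([256, 512, 1024, 2048, 4096], dtype=float)
Rg = np.array([9.8, 12.4, 15.6, 19.6, 24.7])

# Fit R_g ∝ N^nu on a log-log scale; the slope estimates nu.
nu, log_prefactor = np.polyfit(np.log(N), np.log(Rg), 1)
print(f"estimated nu ~ {nu:.3f}")   # the abstract reports nu -> 1/3 at large N
```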
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labine, Alexandre; Carrier, Jean-François; Bedwani, Stéphane
2014-08-15
Purpose: To investigate an automatic bronchial and vessel bifurcations detection algorithm for deformable image registration (DIR) assessment to improve lung cancer radiation treatment. Methods: 4DCT datasets were acquired and exported to Varian treatment planning system (TPS) EclipseTM for contouring. The lungs TPS contour was used as the prior shape for a segmentation algorithm based on hierarchical surface deformation that identifies the deformed lungs volumes of the 10 breathing phases. Hounsfield unit (HU) threshold filter was applied within the segmented lung volumes to identify blood vessels and airways. Segmented blood vessels and airways were skeletonised using a hierarchical curve-skeleton algorithm based on a generalized potential field approach. A graph representation of the computed skeleton was generated to assign one of three labels to each node: the termination node, the continuation node or the branching node. Results: 320 ± 51 bifurcations were detected in the right lung of a patient for the 10 breathing phases. The bifurcations were visually analyzed. 92 ± 10 bifurcations were found in the upper half of the lung and 228 ± 45 bifurcations were found in the lower half of the lung. Discrepancies between ten vessel trees were mainly ascribed to large deformation and in regions where the HU varies. Conclusions: We established an automatic method for DIR assessment using the morphological information of the patient anatomy. This approach allows a description of the lung's internal structure movement, which is needed to validate the DIR deformation fields for accurate 4D cancer treatment planning.
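A hedged sketch of the skeleton-labelling idea above (termination, continuation and branching nodes assigned from the number of skeleton neighbours), assuming scikit-image's skeletonize for 3D input (older releases expose this as skeletonize_3d) and SciPy; the hierarchical curve-skeleton algorithm of the abstract is replaced by a simple stand-in on a toy vessel mask.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

# Hypothetical binary vessel/airway mask (already HU-thresholded inside the lung).
vessels = np.zeros((40, 40, 40), dtype=bool)
vessels[20, 5:35, 20] = True          # a simple tube
vessels[5:20, 20, 20] = True          # a side branch -> one bifurcation

skeleton = skeletonize(vessels)        # 3D curve skeleton (simplified stand-in)

# Count 26-connected skeleton neighbours of every skeleton voxel.
kernel = np.ones((3, 3, 3), dtype=int); kernel[1, 1, 1] = 0
neighbours = ndimage.convolve(skeleton.astype(int), kernel, mode="constant")
degree = np.where(skeleton, neighbours, 0)

termination = int((degree == 1).sum())
continuation = int((degree == 2).sum())
branching = int((degree >= 3).sum())   # candidate bifurcation nodes
print(termination, continuation, branching)
```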
Rhythmic contractility in the hepatic portal "corkscrew" vein of the rat snake.
Conklin, Daniel J; Lillywhite, Harvey B; Bishop, Barbara; Hargens, Alan R; Olson, Kenneth R
2009-03-01
Terrestrial, but not aquatic, species of snakes have hepatic portal veins with a corkscrew morphology immediately posterior of the liver. Relatively large volumes of venous blood are associated with this region, and the corkscrew vein has been proposed to function as a bidirectional valve that impedes gravitational shifts of intravascular volume. To better understand the functional significance of the corkscrew anatomy, we investigated the histology and contractile mechanisms in isolated corkscrew segments of the hepatic portal vein of a yellow rat snake (Pantherophis obsoletus). Morphologically, the corkscrew portal vein is here shown to have two distinct layers of smooth muscle--an inner circular layer, and an outer longitudinal layer, separated by a layer of collagen--whereas only a single circular layer of smooth muscle is present in the adjacent posterior caval vein. Low frequency (approximately 0.3 cycles*min(-1)) spontaneous and catecholamine-induced rhythms were observed in 11% and 89% of portal vein segments, respectively, but neither spontaneous nor agonist-induced cycling was observed in adjacent posterior (non-corkscrew) caval veins. Catecholamines, angiotensin II, or stretch increased the amplitude and/or frequency of contractile cycles. Ouabain, verapamil or indomethacin, but not tetrodotoxin, alpha-, or beta-adrenergic receptor antagonists, inhibited cyclical contractions indicating a dependence of these cycles on Na+/K+ ATPase, extracellular Ca2+ and prostanoid(s). These data suggest that the rhythmic contractility of the corkscrew segment of the ophidian portal vein may act in conjunction with its morphological features to improve venous return and to prevent retrograde shifts of blood that might otherwise pool in posterior veins.
Müller-Eschner, Matthias; Müller, Tobias; Biesdorf, Andreas; Wörz, Stefan; Rengier, Fabian; Böckler, Dittmar; Kauczor, Hans-Ulrich; Rohr, Karl; von Tengg-Kobligk, Hendrik
2014-04-01
Native-MR angiography (N-MRA) is considered an imaging alternative to contrast enhanced MR angiography (CE-MRA) for patients with renal insufficiency. Lower intraluminal contrast in N-MRA often leads to failure of the segmentation process in commercial algorithms. This study introduces an in-house 3D model-based segmentation approach used to compare both sequences by automatic 3D lumen segmentation, allowing for evaluation of differences of aortic lumen diameters as well as differences in length comparing both acquisition techniques at every possible location. Sixteen healthy volunteers underwent 1.5-T-MR Angiography (MRA). For each volunteer, two different MR sequences were performed, CE-MRA: gradient echo Turbo FLASH sequence and N-MRA: respiratory-and-cardiac-gated, T2-weighted 3D SSFP. Datasets were segmented using a 3D model-based ellipse-fitting approach with a single seed point placed manually above the celiac trunk. The segmented volumes were manually cropped from left subclavian artery to celiac trunk to avoid error due to side branches. Diameters, volumes and centerline length were computed for intraindividual comparison. For statistical analysis the Wilcoxon-Signed-Ranked-Test was used. Average centerline length obtained based on N-MRA was 239.0±23.4 mm compared to 238.6±23.5 mm for CE-MRA without significant difference (P=0.877). Average maximum diameter obtained based on N-MRA was 25.7±3.3 mm compared to 24.1±3.2 mm for CE-MRA (P<0.001). In agreement with the difference in diameters, volumes obtained based on N-MRA (100.1±35.4 cm(3)) were consistently and significantly larger compared to CE-MRA (89.2±30.0 cm(3)) (P<0.001). 3D morphometry shows highly similar centerline lengths for N-MRA and CE-MRA, but systematically higher diameters and volumes for N-MRA.
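The paired comparison reported above uses the Wilcoxon signed-rank test; a minimal SciPy sketch on invented paired diameters is shown for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical paired maximum aortic diameters (mm) per volunteer.
n_mra = np.array([25.1, 26.3, 24.8, 27.0, 25.9, 26.5, 24.2, 25.5])
ce_mra = np.array([23.6, 24.9, 23.1, 25.2, 24.4, 25.0, 22.8, 24.1])

stat, p = stats.wilcoxon(n_mra, ce_mra)   # paired, non-parametric comparison
print(f"W = {stat:.1f}, p = {p:.4f}")
```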
Burghard, Philipp; Plank, Fabian; Beyer, Christoph; Müller, Silvana; Dörler, Jakob; Zaruba, Marc-Michael; Pölzl, Leo; Pölzl, Gerhard; Klauser, Andrea; Rauch, Stefan; Barbieri, Fabian; Langer, Christian-Ekkehardt; Schgoer, Wilfried; Williamson, Eric E; Feuchtner, Gudrun
2018-06-04
To evaluate right ventricle (RV) function by coronary computed tomography angiography (CTA) using a novel automated three-dimensional (3D) RV volume segmentation tool in comparison with clinical reference modalities. Twenty-six patients with severe end-stage heart failure [left ventricle (LV) ejection fraction (EF) <35%] referred to CTA were enrolled. A specific, individually tailored biphasic contrast agent injection protocol (80%/20% high/low flow) was designed. Measurement of RV function [EF, end-diastolic volume (EDV), end-systolic volume (ESV)] by CTA was compared with tricuspid annular plane systolic excursion (TAPSE) by transthoracic echocardiography (TTE) and right heart invasive catheterisation (IC). Automated 3D RV volume segmentation was successful in 26 (100%) patients. Read-out time was 3 min 33 s (range, 1 min 50 s-4 min 33 s). RV EF by CTA correlated more strongly with right atrial pressure (RAP) by IC (r = -0.595; p = 0.006) than with TAPSE (r = 0.366, p = 0.94). When comparing TAPSE with RAP by IC (r = -0.317, p = 0.231), a weak-to-moderate non-significant inverse correlation was found. Interobserver correlation was high with r = 0.96 (p < 0.001), r = 0.86 (p < 0.001) and r = 0.72 (p = 0.001) for RV EDV, ESV and EF, respectively. CT attenuation of the right atrium (RA) and right ventricle (RV) was 196.9 ± 75.3 and 217.5 ± 76.1 HU, respectively. Measurement of RV function by CTA using a novel 3D volumetric segmentation tool is fast and reliable when a dedicated biphasic injection protocol is applied. The RV EF from CTA is a closer surrogate of RAP than TAPSE by TTE. • Evaluation of RV function by cardiac CTA by using a novel 3D volume segmentation tool is fast and reliable. • A biphasic contrast agent injection protocol ensures homogeneous RV contrast attenuation. • Cardiac CT is a valuable alternative modality to CMR for the evaluation of RV function.
Reduce volume of head-up display by image stitching
NASA Astrophysics Data System (ADS)
Chiu, Yi-Feng; Su, Guo-Dung J.
2016-09-01
A head-up display (HUD) is a safety feature for automobile drivers. Although some HUD systems are already commercially available, their images are too small to show assistance information, and the volume of the HUD optics is too large. We propose a HUD comprising micro-projectors, a rear-projection screen and a microlens array (MLA); from a 28 mm x 14 mm light source it realizes a 200 mm x 100 mm image located 3 meters from the driver. The MLA is used to reduce the system volume by virtual image stitching. The HUD package measures 12 cm x 12 cm x 9 cm and is able to show speed, map-navigation and night-vision information. A liquid crystal display (LCD) was used as the image source because of its bright output and small volume. The MLA is a multi-aperture system consisting of many optical channels, each transmitting a segment of the whole field of view. The design of the system provides stitching of the partial images, so that the whole virtual image is seen.
Ogris, Kathrin; Petrovic, Andreas; Scheicher, Sylvia; Sprenger, Hanna; Urschler, Martin; Hassler, Eva Maria; Yen, Kathrin; Scheurer, Eva
2017-06-01
In legal medicine, reliable localization and analysis of hematomas in subcutaneous fatty tissue is required for forensic reconstruction. Due to the absence of ionizing radiation, magnetic resonance imaging (MRI) is particularly suited to examining living persons with forensically relevant injuries. However, there is limited experience regarding MRI signal properties of hemorrhage in soft tissue. The aim of this study was to evaluate MR sequences with respect to their ability to show high contrast between hematomas and subcutaneous fatty tissue as well as to reliably determine the volume of artificial hematomas. Porcine tissue models were prepared by injecting blood into the subcutaneous fatty tissue to create artificial hematomas. MR images were acquired at 3T and four blinded observers conducted manual segmentation of the hematomas. To assess segmentability, the agreement of measured volume with the known volume of injected blood was statistically analyzed. A physically motivated normalization taking into account partial volume effect was applied to the data to ensure comparable results among differently sized hematomas. The inversion recovery sequence exhibited the best segmentability rate, whereas the T1T2w turbo spin echo sequence showed the most accurate results regarding volume estimation. Both sequences led to reproducible volume estimations. This study demonstrates that MRI is a promising forensic tool to assess and visualize even very small amounts of blood in soft tissue. The presented results enable the improvement of protocols for detection and volume determination of hemorrhage in forensically relevant cases and also provide fundamental knowledge for future in-vivo examinations.
Twelve automated thresholding methods for segmentation of PET images: a phantom study.
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M
2012-06-21
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
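A small sketch contrasting the classical 42%-of-maximum threshold with two automated thresholds (isodata, i.e. the Ridler iterative clustering method, and Otsu) on a synthetic PET-like volume, assuming scikit-image; the sphere phantom and the other ten algorithms of the study are not reproduced.

```python
import numpy as np
from skimage.filters import threshold_isodata, threshold_otsu

rng = np.random.default_rng(6)
# Hypothetical PET sub-volume: warm background with a hot 'sphere'.
pet = rng.normal(1.0, 0.1, size=(40, 40, 40))
zz, yy, xx = np.ogrid[:40, :40, :40]
pet[(zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 < 8 ** 2] += 4.0

fixed_42 = 0.42 * pet.max()                # classical 42%-of-max reference
t_isodata = threshold_isodata(pet)         # Ridler-style iterative clustering threshold
t_otsu = threshold_otsu(pet)

for name, t in [("42% max", fixed_42), ("isodata", t_isodata), ("otsu", t_otsu)]:
    print(f"{name:8s} threshold={t:.2f} volume={(pet > t).sum()} voxels")
```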
Wicke, Jason; Dumas, Genevieve A
2010-02-01
The geometric method combines a volume and a density function to estimate body segment parameters and has the best opportunity for developing the most accurate models. In the trunk, there are many different tissues that greatly differ in density (e.g., bone versus lung). Thus, the density function for the trunk must be particularly sensitive to capture this diversity, such that accurate inertial estimates are possible. Three different models were used to test this hypothesis by estimating trunk inertial parameters of 25 female and 24 male college-aged participants. The outcome of this study indicates that the inertial estimates for the upper and lower trunk are most sensitive to the volume function and not very sensitive to the density function. Although it appears that the uniform density function has a greater influence on inertial estimates in the lower trunk region than in the upper trunk region, this is likely due to the (overestimated) density value used. When geometric models are used to estimate body segment parameters, care must be taken in choosing a model that can accurately estimate segment volumes. Researchers wanting to develop accurate geometric models should focus on the volume function, especially in unique populations (e.g., pregnant or obese individuals).
1994-07-01
[Garbled excerpt from DoD 4100.39-M, Volume 8, Chapter 5 ("Alphabetic Index of DIC"), concerning the required mix of segments or individual data elements to be extracted in Segment R of an interrogation transaction (LTI), the Data Record Number (DRN 0950) and the Continuation Indicator Code (DRN 8555); the remainder of the text is not recoverable.]
Unintended consequences of eliminating Medicare payments for consultations
Song, Zirui; Ayanian, John Z.; Wallace, Jacob; He, Yulei; Gibson, Teresa B.; Chernew, Michael E.
2013-01-01
Background Prior to 2010, Medicare payments for consultations (commonly billed by specialists) were substantially higher than for office visits of similar complexity (commonly billed by primary care physicians). In January 2010, Medicare eliminated consultation payments from the Part B Physician Fee Schedule and increased fees for office visits. This change was intended to be budget neutral and to decrease payments to specialists while increasing payments to primary care physicians. We assessed the impact of this policy on spending, volume, and complexity for outpatient office encounters in 2010. Methods We examined 2007–2010 outpatient claims for 2,247,810 Medicare beneficiaries with Medicare Supplemental (Medigap) coverage through large employers in the Thomson Reuters MarketScan Database. We used segmented regression analysis to study changes in spending, volume, and complexity of office encounters adjusted for age, sex, health status, secular trends, seasonality, and hospital referral region. Results “New” office visits largely replaced consultations in 2010. An average of $10.20 (6.5 percent) more was spent per beneficiary per quarter on physician encounters after the policy. The total volume of physician encounters did not change significantly. The increase in spending was largely explained by higher office visit fees from the policy and a shift toward higher complexity visits to both specialists and primary care physicians. Conclusions The elimination of consultations led to a net increase in spending on visits to both primary care physicians and specialists. Higher prices, partially due to the subjectivity of codes in the physician fee schedule, explained the spending increase, rather than higher volumes. PMID:23336095
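A minimal sketch of a segmented (interrupted time-series) regression of the kind described, with terms for the pre-period trend, the level change at the policy date and the post-period slope change, assuming statsmodels; the quarterly spending series is simulated, not MarketScan data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
quarters = np.arange(16)                   # 2007-2010, quarterly
post = (quarters >= 12).astype(int)        # policy takes effect January 2010
time_after = np.where(post == 1, quarters - 11, 0)

# Hypothetical per-beneficiary quarterly spending with a level jump after the policy.
spend = 150 + 1.0 * quarters + 10.0 * post + rng.normal(0, 2, size=16)
df = pd.DataFrame({"spend": spend, "t": quarters, "post": post, "t_after": time_after})

# Segmented regression: baseline trend, level change at the policy, slope change after it.
model = smf.ols("spend ~ t + post + t_after", data=df).fit()
print(model.params.round(2))
```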
Kim, Hyungjin; Lee, Sang Min; Lee, Hyun-Ju; Goo, Jin Mo
2013-01-01
Objective To compare the segmentation capability of the 2 currently available commercial volumetry software programs with specific segmentation algorithms for pulmonary ground-glass nodules (GGNs) and to assess their measurement accuracy. Materials and Methods In this study, 55 patients with 66 GGNs underwent unenhanced low-dose CT. GGN segmentation was performed by using 2 volumetry software programs (LungCARE, Siemens Healthcare; LungVCAR, GE Healthcare). Successful nodule segmentation was assessed visually and morphologic features of GGNs were evaluated to determine factors affecting segmentation by both types of software. In addition, the measurement accuracy of the software programs was investigated by using an anthropomorphic chest phantom containing simulated GGNs. Results The successful nodule segmentation rate was significantly higher in LungCARE (90.9%) than in LungVCAR (72.7%) (p = 0.012). Vascular attachment was a negatively influencing morphologic feature of nodule segmentation for both software programs. As for measurement accuracy, mean relative volume measurement errors in nodules ≥ 10 mm were 14.89% with LungCARE and 19.96% with LungVCAR. The mean relative attenuation measurement errors in nodules ≥ 10 mm were 3.03% with LungCARE and 5.12% with LungVCAR. Conclusion LungCARE shows significantly higher segmentation success rates than LungVCAR. Measurement accuracy of volume and attenuation of GGNs is acceptable in GGNs ≥ 10 mm by both software programs. PMID:23901328
2D/3D fetal cardiac dataset segmentation using a deformable model.
Dindoyal, Irving; Lambrou, Tryphon; Deng, Jing; Todd-Pokropek, Andrew
2011-07-01
The aim is to segment the fetal heart in order to facilitate 3D assessment of cardiac function and structure. Ultrasound acquisition typically results in drop-out artifacts of the chamber walls. The authors outline a level set deformable model to automatically delineate the small fetal cardiac chambers. The level set is prevented from growing into an adjacent cardiac compartment using a novel collision detection term. The region-based model allows simultaneous segmentation of all four cardiac chambers from a user-defined seed point placed in each chamber. The segmented boundaries are thereby automatically penalized against intersecting at walls with signal dropout. Root mean square errors of the perpendicular distances between the algorithm's delineation and manual tracings are within 2 mm, which is less than 10% of the length of a typical fetal heart. The ejection fractions were determined from the 3D datasets. We validate the algorithm against a physical phantom and obtain volumes comparable to the physically determined ones, with a volume error within 13%. Our original work in fetal cardiac segmentation compares automatic and manual tracings against a physical phantom and also measures inter-observer variation.
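One of the validation measures above, the root mean square of the perpendicular distances between the algorithm's delineation and the manual tracings, can be approximated with nearest-point distances between two sampled contours. This is a hedged sketch with toy circular contours standing in for the tracings; it is not the authors' implementation.

```python
import numpy as np

def rms_surface_distance(contour_a, contour_b):
    """Approximate the RMS of perpendicular distances between two contours by
    the nearest-neighbour distance from each point of A to contour B."""
    a = np.asarray(contour_a, dtype=float)   # (N, 2) points
    b = np.asarray(contour_b, dtype=float)   # (M, 2) points
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    nearest = d.min(axis=1)
    return np.sqrt(np.mean(nearest ** 2))

# Toy example: two slightly shifted circles standing in for automatic/manual tracings
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
auto = np.c_[10.0 * np.cos(t), 10.0 * np.sin(t)]
manual = np.c_[10.5 * np.cos(t) + 0.3, 10.5 * np.sin(t)]
print(f"RMS distance: {rms_surface_distance(auto, manual):.2f} mm")
```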
NASA Astrophysics Data System (ADS)
Wierts, R.; Jentzen, W.; Quick, H. H.; Wisselink, H. J.; Pooters, I. N. A.; Wildberger, J. E.; Herrmann, K.; Kemerink, G. J.; Backes, W. H.; Mottaghy, F. M.
2018-01-01
The aim was to investigate the quantitative performance of 124I PET/MRI for pre-therapy lesion dosimetry in differentiated thyroid cancer (DTC). Phantom measurements were performed on a PET/MRI system (Biograph mMR, Siemens Healthcare) using 124I and 18F. The PET calibration factor and the influence of radiofrequency coil attenuation were determined using a cylindrical phantom homogeneously filled with radioactivity. The calibration factor was 1.00 ± 0.02 for 18F and 0.88 ± 0.02 for 124I. Near the radiofrequency surface coil an underestimation of less than 5% in radioactivity concentration was observed. Soft-tissue sphere recovery coefficients were determined using the NEMA IEC body phantom. Recovery coefficients were systematically higher for 18F than for 124I. In addition, the six spheres of the phantom were segmented using a PET-based iterative segmentation algorithm. For all 124I measurements, the deviations in segmented lesion volume and mean radioactivity concentration relative to the actual values were smaller than 15% and 25%, respectively. The effect of MR-based attenuation correction (three- and four-segment µ-maps) on bone lesion quantification was assessed using radioactive spheres filled with a K2HPO4 solution mimicking bone lesions. The four-segment µ-map resulted in an underestimation of the imaged radioactivity concentration of up to 15%, whereas the three-segment µ-map resulted in an overestimation of up to 10%. For twenty lesions identified in six patients, a comparison of 124I PET/MRI to PET/CT was performed with respect to segmented lesion volume and radioactivity concentration. The interclass correlation coefficients showed excellent agreement in segmented lesion volume and radioactivity concentration (0.999 and 0.95, respectively). In conclusion, it is feasible that accurate quantitative 124I PET/MRI could be used to perform radioiodine pre-therapy lesion dosimetry in DTC.
Method of manufacturing a large-area segmented photovoltaic module
Lenox, Carl
2013-11-05
One embodiment of the invention relates to a segmented photovoltaic (PV) module which is manufactured from laminate segments. The segmented PV module includes rectangular-shaped laminate segments formed from rectangular-shaped PV laminates and further includes non-rectangular-shaped laminate segments formed from rectangular-shaped and approximately-triangular-shaped PV laminates. The laminate segments are mechanically joined and electrically interconnected to form the segmented module. Another embodiment relates to a method of manufacturing a large-area segmented photovoltaic module from laminate segments of various shapes. Other embodiments relate to processes for providing a photovoltaic array for installation at a site. Other embodiments and features are also disclosed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, C; Hrycushko, B; Jiang, S
2014-06-01
Purpose: To compare the radiobiological effect on large tumors and surrounding normal tissues from single fraction SRS, multi-fractionated SRT, and multi-staged SRS treatment. Methods: An anthropomorphic head phantom with a centrally located large volume target (18.2 cm³) was scanned using a 16 slice large bore CT simulator. Scans were imported to the Multiplan treatment planning system, where a total prescription dose of 20 Gy was used for a single, a three-staged, and a three-fractionated treatment. CyberKnife treatment plans were inversely optimized for the target volume to achieve at least 95% coverage by the prescription dose. For the multistage plan, the target was segmented into three subtargets of similar volume and shape. Staged plans for the individual subtargets were generated based on a planning technique in which the beam MUs of the original plan on the total target volume are reweighted according to the projected beam lengths within each subtarget. Dose matrices for each plan were exported in DICOM format and used to calculate equivalent dose distributions in 2 Gy fractions using an alpha/beta ratio of 10 for the target and 3 for normal tissue. Results: The single fraction SRS, multi-stage, and multi-fractionated SRT plans had an average 2 Gy dose equivalent to the target of 62.89 Gy, 37.91 Gy, and 33.68 Gy, respectively. The normal tissue within the 12 Gy physical dose region had an average 2 Gy dose equivalent of 29.55 Gy, 16.08 Gy, and 13.93 Gy, respectively. Conclusion: The single fraction SRS plan had the largest predicted biological effect for the target and the surrounding normal tissue. The multi-stage treatment provided a more potent biological effect on the target than the multi-fractionated SRT treatment, with a smaller biological effect on normal tissue than single-fraction SRS.
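The 2 Gy dose equivalents above follow the standard linear-quadratic conversion EQD2 = D·(d + α/β)/(2 + α/β), with d the dose per fraction. A small sketch follows; the per-plan averages reported in the abstract come from full dose distributions, so the point values below are only illustrative.

```python
def eqd2(total_dose, n_fractions, alpha_beta):
    """Equivalent dose in 2 Gy fractions under the linear-quadratic model."""
    d = total_dose / n_fractions                     # dose per fraction (Gy)
    return total_dose * (d + alpha_beta) / (2.0 + alpha_beta)

# 20 Gy delivered in 1 or 3 fractions (illustrative prescription-point values)
for n in (1, 3):
    print(f"{n} fraction(s): target EQD2 = {eqd2(20.0, n, alpha_beta=10):.1f} Gy, "
          f"normal tissue EQD2 = {eqd2(20.0, n, alpha_beta=3):.1f} Gy")
```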
Atalay, Hasan Anıl; Canat, Lutfi; Bayraktarlı, Recep; Alkan, Ilter; Can, Osman; Altunrende, Fatih
2017-06-23
We analyzed our stone-free rates of PNL with regard to stone burden and its ratio to the renal collecting system volume. Data of 164 patients who underwent PNL were analyzed retrospectively. Volume segmentation of the renal collecting system and stones was done using 3D segmentation software with the images obtained from CT data. Analyzed stone volume (ASV) and renal collecting system volume (RCSV) were measured, and the ASV-to-RCSV ratio was calculated after the creation of a 3D surface volume rendering of the renal stones and the collecting system. Univariate and multivariate statistical analyses were performed to determine factors affecting stone-free rates; we also assessed the predictive accuracy of the ASV-to-RCSV ratio using the receiver operating characteristic (ROC) curve and AUC. The stone-free rate of PNL monotherapy was 53% (164 procedures). The ASV-to-RCSV ratio and the number of calyces containing stones were the most influential predictors of stone-free status (OR 4.15, 95% CI 2.24-7.24, p < 0.001; OR 2.62, 95% CI 1.38-4.97, p < 0.001, respectively). Other factors associated with the stone-free rate were maximum stone size (p < 0.029), stone surface area (p < 0.010), and stone burden volume (p < 0.001). The predictive accuracy of the ASV-to-RCSV ratio was AUC 0.76. Stone burden volume distribution in the renal collecting system, calculated using the 3D volume segmentation method, is a significant determinant of the stone-free rate before PNL surgery. It could be used as a single guide variable by the clinician before renal stone surgery to predict extra requirements for stone clearance.
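The predictive accuracy of the ASV-to-RCSV ratio was summarized above with a ROC AUC. A hedged sketch of that kind of evaluation follows, using scikit-learn on synthetic values; the data, group means, and sign convention are assumptions for illustration only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
# Synthetic illustration: lower ASV-to-RCSV ratios tend to accompany stone-free status
stone_free = rng.integers(0, 2, size=164)
asv_rcsv = np.where(stone_free == 1,
                    rng.normal(0.05, 0.03, size=164),
                    rng.normal(0.12, 0.05, size=164)).clip(min=0.001)

# A higher ratio predicts residual stones, so score the negated ratio
# against the stone-free outcome.
auc = roc_auc_score(stone_free, -asv_rcsv)
fpr, tpr, thresholds = roc_curve(stone_free, -asv_rcsv)
print(f"AUC for the ASV-to-RCSV ratio: {auc:.2f}")
```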
Influence of stapling the intersegmental planes on lung volume and function after segmentectomy.
Tao, Hiroyuki; Tanaka, Toshiki; Hayashi, Tatsuro; Yoshida, Kumiko; Furukawa, Masashi; Yoshiyama, Koichi; Okabe, Kazunori
2016-10-01
Dividing the intersegmental planes with a stapler during pulmonary segmentectomy leads to volume loss in the remnant segment. The aim of this study was to assess the influence of segment division methods on preserved lung volume and pulmonary function after segmentectomy. Using image analysis software on computed tomography (CT) images of 41 patients, the ratio of remnant segment and ipsilateral lung volume to their preoperative values (R-seg and R-ips) was calculated. The ratio of postoperative actual forced expiratory volume in 1 s (FEV1) and forced vital capacity (FVC) per those predicted values based on three-dimensional volumetry (R-FEV1 and R-FVC) was also calculated. Differences in actual/predicted ratios of lung volume and pulmonary function for each of the division methods were analysed. We also investigated the correlations of the actual/predicted ratio of remnant lung volume with that of postoperative pulmonary function. The intersegmental planes were divided by either electrocautery or with a stapler in 22 patients and with a stapler alone in 19 patients. Mean values of R-seg and R-ips were 82.7 (37.9-140.2) and 104.9 (77.5-129.2)%, respectively. The mean values of R-FEV1 and R-FVC were 103.9 (83.7-135.1) and 103.4 (82.2-125.1)%, respectively. There were no correlations between the actual/predicted ratio of remnant lung volume and pulmonary function based on the division method. Both R-FEV1 and R-FVC were correlated not with R-seg, but with R-ips. Stapling does not lead to less preserved volume or function than electrocautery in the division of the intersegmental planes. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.
2013-12-15
Purpose: Breast magnetic resonance imaging (MRI) plays an important role in the clinical management of breast cancer. Studies suggest that the relative amount of fibroglandular (i.e., dense) tissue in the breast as quantified in MR images can be predictive of the risk for developing breast cancer, especially for high-risk women. Automated segmentation of the fibroglandular tissue and volumetric density estimation in breast MRI could therefore be useful for breast cancer risk assessment. Methods: In this work the authors develop and validate a fully automated segmentation algorithm, namely, an atlas-aided fuzzy C-means (FCM-Atlas) method, to estimate the volumetric amount of fibroglandular tissue in breast MRI. The FCM-Atlas is a 2D segmentation method working on a slice-by-slice basis. FCM clustering is first applied to the intensity space of each 2D MR slice to produce an initial voxelwise likelihood map of fibroglandular tissue. Then a prior learned fibroglandular tissue likelihood atlas is incorporated to refine the initial FCM likelihood map to achieve enhanced segmentation, from which the absolute volume of the fibroglandular tissue (|FGT|) and the relative amount (i.e., percentage) of the |FGT| relative to the whole breast volume (FGT%) are computed. The authors' method is evaluated on a representative dataset of 60 3D bilateral breast MRI scans (120 breasts) that span the full breast density range of the American College of Radiology Breast Imaging Reporting and Data System. The automated segmentation is compared to manual segmentation obtained by two experienced breast imaging radiologists. Segmentation performance is assessed by linear regression, Pearson's correlation coefficients, Student's paired t-test, and Dice's similarity coefficient (DSC). Results: The inter-reader correlation is 0.97 for FGT% and 0.95 for |FGT|. When compared to the average of the two readers' manual segmentation, the proposed FCM-Atlas method achieves a correlation of r = 0.92 for FGT% and r = 0.93 for |FGT|, and the automated segmentation is not statistically significantly different (p = 0.46 for FGT% and p = 0.55 for |FGT|). The bilateral correlation between left breasts and right breasts for the FGT% is 0.94, 0.92, and 0.95 for reader 1, reader 2, and the FCM-Atlas, respectively; likewise, for the |FGT|, it is 0.92, 0.92, and 0.93, respectively. For the spatial segmentation agreement, the automated algorithm achieves a DSC of 0.69 ± 0.1 when compared to reader 1 and 0.61 ± 0.1 for reader 2, while the DSC between the two readers' manual segmentations is 0.67 ± 0.15. Additional robustness analysis shows that the segmentation performance of the authors' method is stable both with respect to selecting different cases and to varying the number of cases used to construct the prior probability atlas. The authors' results also show that the proposed FCM-Atlas method outperforms the commonly used two-cluster FCM-alone method. The authors' method runs at ∼5 min for each 3D bilateral MR scan (56 slices) for computing the FGT% and |FGT|, compared to ∼55 min needed for manual segmentation for the same purpose. Conclusions: The authors' method achieves robust segmentation and can serve as an efficient tool for processing large clinical datasets for quantifying the fibroglandular tissue content in breast MRI. It holds great potential to support clinical applications in the future, including breast cancer risk assessment.
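A minimal sketch of the fuzzy C-means step at the core of the FCM-Atlas method follows: plain FCM clustering of slice intensities into two classes, with the membership of the brighter cluster taken as a voxelwise fibroglandular likelihood map. The atlas refinement, 3D breast masking, and all parameter values are omitted or assumed; this is not the authors' implementation.

```python
import numpy as np

def fuzzy_c_means(intensities, n_clusters=2, m=2.0, n_iter=50):
    """Plain fuzzy C-means on a 1D intensity vector (one MR slice, flattened).
    Returns membership likelihoods (n_voxels, n_clusters) and cluster centres."""
    x = np.asarray(intensities, dtype=float).ravel()
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(n_clusters), size=x.size)   # initial memberships
    for _ in range(n_iter):
        um = u ** m
        centres = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12  # voxel-to-centre distances
        u = 1.0 / d ** (2.0 / (m - 1))                     # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return u, centres

# Toy slice: mixture of darker fatty and brighter fibroglandular intensities
slice_intensities = np.concatenate([np.random.normal(200, 20, 5000),
                                    np.random.normal(600, 40, 1500)])
likelihood, centres = fuzzy_c_means(slice_intensities)
fgt_cluster = int(np.argmax(centres))           # brighter cluster ~ fibroglandular tissue
fgt_likelihood_map = likelihood[:, fgt_cluster]
```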
Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki
2014-12-01
As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.
Analyzing structural variations along strike in a deep-water thrust belt
NASA Astrophysics Data System (ADS)
Totake, Yukitsugu; Butler, Robert W. H.; Bond, Clare E.; Aziz, Aznan
2018-03-01
We characterize a deep-water fold-thrust array imaged by a high-resolution 3D seismic dataset offshore NW Borneo, Malaysia, to understand the kinematics behind the spatial arrangement of structural variations throughout the fold-thrust system. The seismic volume used covers two sub-parallel fold trains associated with a series of fore-thrusts and back-thrusts. We measured fault heave, shortening, and fold geometries (forelimb dip, interlimb angle and crest depth) along strike in the individual fold trains. Plotting heave on a strike projection allows individual thrust segments, showing semi-elliptical to triangular to bimodal patterns, and the linkages between these segments to be identified. The linkage sites are marked by local minima in cumulative heave. These local heave minima are compensated by additional structures, such as small imbricate thrusts and tight folds indicated by large forelimb dip and small interlimb angle. Complementary profiles of the shortening amount for the two fold trains result in a smoother gradient of total shortening across the structures, which we interpret as reflecting kinematic interaction between the two fold-thrust trains. This type of along-strike variation analysis provides a comprehensive understanding of a fold-thrust system and may provide an interpretative strategy for inferring the presence of complex multiple faults in less well-imaged parts of seismic volumes.
Renal cortex segmentation using optimal surface search with novel graph construction.
Li, Xiuli; Chen, Xinjian; Yao, Jianhua; Zhang, Xing; Tian, Jie
2011-01-01
In this paper, we propose a novel approach to solve the renal cortex segmentation problem, which has rarely been studied. In this study, the renal cortex segmentation problem is handled as a multiple-surfaces extraction problem, which is solved using the optimal surface search method. We propose a novel graph construction scheme in the optimal surface search to better accommodate multiple surfaces. Different surface sub-graphs are constructed according to their properties, and inter-surface relationships are also modeled in the graph. The proposed method was tested on 17 clinical CT datasets. The true positive volume fraction (TPVF) and false positive volume fraction (FPVF) are 74.10% and 0.08%, respectively. The experimental results demonstrate the effectiveness of the proposed method.
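The TPVF and FPVF reported above can be computed from binary masks as follows; one common convention (false positives expressed as a fraction of the reference background) is assumed here, since definitions vary between papers.

```python
import numpy as np

def tpvf_fpvf(segmentation, reference):
    """True/false positive volume fractions (percent) for binary 3D masks.
    TPVF: fraction of the reference volume recovered by the segmentation.
    FPVF: falsely labelled volume as a fraction of the reference background
          (one common convention; definitions vary between papers)."""
    seg = np.asarray(segmentation, dtype=bool)
    ref = np.asarray(reference, dtype=bool)
    tpvf = (seg & ref).sum() / ref.sum()
    fpvf = (seg & ~ref).sum() / (~ref).sum()
    return 100.0 * tpvf, 100.0 * fpvf

# Toy volumes standing in for a renal cortex segmentation and its reference
ref = np.zeros((50, 50, 50), dtype=bool); ref[10:30, 10:30, 10:30] = True
seg = np.zeros_like(ref); seg[12:30, 10:30, 10:30] = True
print(tpvf_fpvf(seg, ref))
```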
Volume estimation of brain abnormalities in MRI data
NASA Astrophysics Data System (ADS)
Suprijadi, Pratama, S. H.; Haryanto, F.
2014-02-01
Abnormality of brain tissue is always a crucial issue in the medical field. Such conditions can be recognized through segmentation of certain regions from medical images obtained from an MRI dataset. Image processing is one of the computational methods that is very helpful for analyzing MRI data. In this study, a combination of segmentation and image rendering was used to isolate tumor and stroke regions. Two thresholding methods were employed to segment the abnormal regions, followed by filtering to reduce non-abnormal areas. Each MRI image is labeled and then used for volume estimation of the tumor- and stroke-affected areas. The algorithms are shown to be successful in isolating tumor and stroke in MRI images, with detection accuracy depending on the thresholding parameters.
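A hedged sketch of the threshold-segment-filter-measure pipeline described above, using scipy.ndimage for connected-component labelling; the threshold values, minimum region size, and voxel dimensions are illustrative assumptions rather than values from the study.

```python
import numpy as np
from scipy import ndimage

def estimate_abnormality_volume(mri_volume, low, high, voxel_volume_mm3, min_voxels=50):
    """Threshold an MRI volume, discard small non-abnormality islands,
    and return the estimated abnormal-tissue volume in mm^3 plus the mask."""
    mask = (mri_volume >= low) & (mri_volume <= high)        # intensity thresholding
    labels, n = ndimage.label(mask)                          # connected-component labelling
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep_labels = np.flatnonzero(sizes >= min_voxels) + 1    # filter small regions
    keep = np.isin(labels, keep_labels)
    return keep.sum() * voxel_volume_mm3, keep

# Illustrative call (thresholds and voxel size are assumptions, not from the study)
volume = np.random.default_rng(0).normal(100, 20, size=(40, 256, 256))
vol_mm3, mask = estimate_abnormality_volume(volume, low=150, high=400,
                                            voxel_volume_mm3=0.5 * 0.5 * 5.0)
```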
GPU accelerated fuzzy connected image segmentation by using CUDA.
Zhuge, Ying; Cao, Yong; Miller, Robert W
2009-01-01
Image segmentation techniques using fuzzy connectedness principles have shown their effectiveness in segmenting a variety of objects in several large applications in recent years. However, one problem of these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides high parallel computing power. In this paper, we present a parallel fuzzy connected image segmentation algorithm on Nvidia's Compute Unified Device Architecture (CUDA) platform for segmenting large medical image datasets. Our experiments based on three datasets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 7.2x, 7.3x, and 14.4x, respectively, for the three datasets over the sequential CPU implementation of the fuzzy connected image segmentation algorithm.
Xiang, T X
1993-01-01
A novel combined approach of molecular dynamics (MD) and Monte Carlo simulations is developed to calculate various free-volume distributions as a function of position in a lipid bilayer membrane at 323 K. The model bilayer consists of 2 x 100 chain molecules, with each chain molecule having 15 carbon segments and one head group and subject to forces restricting bond stretching, bending, and torsional motions. At a surface density of 30 Å²/chain molecule, the probability density of finding effective free volume available to spherical permeants displays a distribution with two exponential components. Both pre-exponential factors, p1 and p2, remain roughly constant in the highly ordered chain region, with average values of 0.012 and 0.00039 Å⁻³, respectively, and increase to 0.049 and 0.0067 Å⁻³ at the mid-plane. The first characteristic cavity size V1 is only weakly dependent on position in the bilayer interior, with an average value of 3.4 Å³, while the second characteristic cavity size V2 varies more dramatically, from a plateau value of 12.9 Å³ in the highly ordered chain region to 9.0 Å³ in the center of the bilayer. The mean cavity shape is described in terms of a probability distribution for the angle at which the test permeant is in contact with one of the chain segments in the bilayer without overlapping any of them. The results show that (a) free volume is elongated in the highly ordered chain region, with its long axis normal to the bilayer interface, approaching spherical symmetry in the center of the bilayer, and (b) small free volume is more elongated than large free volume. The order and conformational structures relevant to the free-volume distributions are also examined. It is found that overall and internal motions contribute comparably to local disorder and couple strongly with each other, and that kink defects occur with higher probability than predicted from an independent-transition model. PMID:8241390
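If the two-component distribution above is read as p(V) = p1·exp(−V/V1) + p2·exp(−V/V2), the pre-exponential factors and characteristic cavity sizes can be recovered by a biexponential fit. The sketch below does this with scipy on synthetic data; the functional form and all numbers are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(v, p1, v1, p2, v2):
    """Two-exponential free-volume distribution: p(V) = p1*exp(-V/V1) + p2*exp(-V/V2)."""
    return p1 * np.exp(-v / v1) + p2 * np.exp(-v / v2)

# Synthetic "measured" density of free volume vs. cavity size (A^3), illustration only
v = np.linspace(0.0, 40.0, 200)
true = biexp(v, 0.012, 3.4, 0.00039, 12.9)
noisy = true * (1 + 0.05 * np.random.default_rng(2).normal(size=v.size))

popt, pcov = curve_fit(biexp, v, noisy, p0=[0.01, 3.0, 0.001, 10.0])
p1, v1, p2, v2 = popt
print(f"p1={p1:.4f} A^-3, V1={v1:.1f} A^3, p2={p2:.5f} A^-3, V2={v2:.1f} A^3")
```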
Breast reduction with short L scar.
Bozola, A R
1990-05-01
I didactically compare the breast to a glandular cone with an envelope of skin and subcutaneous tissue. The aesthetic alterations of the breast are classified into four groups according to form, volume (in grams), and ptosis (in centimeters). An imaginary plane that passes through the mammary sulcus (plane A) determines the area of the breast that is ptotic. The projection of this plane onto the anterior part of the breast is called point A. The distance between point A and the nipple gives the amount of ptosis in centimeters. I use this distance to draw geometrically on the breast the excess skin to be removed to correct the ptosis. In group I, the volume is normal and part of the mammary gland lies below plane A. In this type of breast, the skin is resected, and since there is no excess breast tissue, the tissue below plane A is used as an inferior pedicle flap to give a better volume to the new breast. In group II, the base of the breast is large, the height is normal, and the volume is increased by the enlargement of the base. In this type of breast, the excess breast tissue below plane A and a wedge under the nipple are resected to reach a normal volume at the end of the surgery. In group III, the base is normal and the volume of the breast is increased by its height. For treatment, I resect the excess breast tissue below plane A as well as a segment at the base to reduce its height. In group IV, the volume of the breast is increased by both the size of the base and the height of the cone, which I treat by resecting the excess tissue below the ptotic area, a wedge under the areola, and a transverse segment at the base to reduce all the dimensions. With this technique the final result in the majority of patients is a short scar. This technique was used in 1083 patients from January 1979 to May 1988.
NASA Astrophysics Data System (ADS)
Bolan, B. A.; Soles, C. L.; Hristov, H. A.; Gidley, D. W.; Yee, A. F.
1996-03-01
A new method is proposed for the evaluation of the hole volume in amorphous polymers based upon PALS data measured over a temperature range of 110 to 480 K. Extrapolation of the "open hole" volume to 0 K allows its separation into a part attributed to the segmental motions of the polymer chains (dynamic) and a part due to inefficient packing (static). The dynamic hole volume is correlated to thermodynamic volume/density fluctuations, and its temperature dependence is in good agreement with SAXS data. Several thermosetting epoxy materials are also studied over a similar temperature range, with the "open hole" volume again separated into its dynamic and static components. How these two components affect the diffusional properties of these systems is examined in detail. It is also shown that o-Ps can localize in a nearly 100% crystalline material (PET); we therefore conclude that PALS measures more than the "free volume" necessary for segmental motion. Work supported by the Air Force Office of Scientific Research (AFOSR) grant # F49620-95-1-0037.
Automated tissue segmentation of MR brain images in the presence of white matter lesions.
Valverde, Sergi; Oliver, Arnau; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Lladó, Xavier
2017-01-01
Over the last few years, the increasing interest in brain tissue volume measurements in clinical settings has led to the development of a wide number of automated tissue segmentation methods. However, white matter (WM) lesions are known to reduce the performance of automated tissue segmentation methods, requiring the lesions to be manually annotated and refilled before segmentation, which is tedious and time-consuming. Here, we propose a new, fully automated T1-w/FLAIR tissue segmentation approach designed to deal with images in the presence of WM lesions. This approach integrates a robust partial volume tissue segmentation with WM outlier rejection and filling, combining intensity with probabilistic and morphological prior maps. We evaluate the performance of this method on the MRBrainS13 tissue segmentation challenge database, which contains images with vascular WM lesions, and also on a set of multiple sclerosis (MS) patient images. On both databases, we validate the performance of our method against other state-of-the-art techniques. On the MRBrainS13 data, the presented approach was, at the time of submission, the best ranked unsupervised intensity model method of the challenge (7th position overall) and clearly outperformed the other unsupervised pipelines such as FAST and SPM12. On MS data, the differences in tissue segmentation between the images segmented with our method and the same images where manual expert annotations were used to refill lesions on T1-w images before segmentation were lower than or similar to those of the best state-of-the-art pipeline incorporating automated lesion segmentation and filling. Our results show that the proposed pipeline achieves very competitive results on both vascular and MS lesions. A public version of this approach is available to download for the neuro-imaging community. Copyright © 2016 Elsevier B.V. All rights reserved.
Concurrent Tumor Segmentation and Registration with Uncertainty-based Sparse non-Uniform Graphs
Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos
2014-01-01
In this paper, we present a graph-based framework for concurrent brain tumor segmentation and atlas-to-diseased-patient registration. Both the segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed on the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as their important memory requirements, content-driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered from the min-marginal energies. State-of-the-art results on a substantial low-grade glioma database demonstrate the potential of our method, which maintains performance while strongly reducing the complexity of the model. PMID:24717540
Brain segmentation and the generation of cortical surfaces
NASA Technical Reports Server (NTRS)
Joshi, M.; Cui, J.; Doolittle, K.; Joshi, S.; Van Essen, D.; Wang, L.; Miller, M. I.
1999-01-01
This paper describes methods for white matter segmentation in brain images and the generation of cortical surfaces from the segmentations. We have developed a system that allows a user to start with a brain volume, obtained by modalities such as MRI or cryosection, and construct a complete digital representation of the cortical surface. The methodology consists of three basic components: local parametric modeling and Bayesian segmentation; surface generation and local quadratic coordinate fitting; and surface editing. Segmentations are computed by parametrically fitting known density functions to the histogram of the image using the expectation maximization algorithm [DLR77]. The parametric fits are obtained locally rather than globally over the whole volume to overcome local variations in gray levels. To represent the boundary of the gray and white matter, we use triangulated meshes generated using isosurface generation algorithms [GH95]. A complete system of local parametric quadratic charts [JWM+95] is superimposed on the triangulated graph to facilitate smoothing and geodesic curve tracking. Algorithms for surface editing include extraction of the largest closed surface. Results for several macaque brains are presented, comparing automated and manual surface generation. Copyright 1999 Academic Press.
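A minimal sketch of the histogram-based EM idea underlying the Bayesian segmentation step above: fit a Gaussian mixture to voxel intensities and take the posterior of the brighter class as white matter. The local (rather than global) fitting, the cryosection case, and all numbers are omitted or assumed; this is not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy intensity data standing in for gray- and white-matter voxels
rng = np.random.default_rng(3)
intensities = np.concatenate([rng.normal(90, 10, 20000),    # gray matter
                              rng.normal(140, 8, 15000)])   # white matter

# EM fit of a two-class Gaussian mixture to the intensity distribution
gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities.reshape(-1, 1))
white_class = int(np.argmax(gmm.means_.ravel()))            # brighter class ~ white matter

# Voxelwise posterior probability of white matter; threshold at 0.5 for a hard label
posterior = gmm.predict_proba(intensities.reshape(-1, 1))[:, white_class]
white_mask = posterior > 0.5
```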
Engine Hydraulic Stability. [injector model for analyzing combustion instability
NASA Technical Reports Server (NTRS)
Kesselring, R. C.; Sprouse, K. M.
1977-01-01
An analytical injector model was developed specifically to analyze combustion instability coupling between the injector hydraulics and the combustion process. This digital computer dynamic injector model will, for any imposed chamber or inlet pressure profile with a frequency ranging from 100 to 3000 Hz (minimum), accurately predict/calculate the instantaneous injector flowrates. The injector system is described in terms of which flow segments enter and leave each pressure node. For each flow segment, a resistance, line length, and area are required as inputs (the line length and area are used in determining inertance). For each pressure node, volume and acoustic velocity are required as inputs (volume and acoustic velocity determine capacitance). The geometric criteria for determining the inertances of flow segments and the capacitances of pressure nodes were established. Also, a technique was developed for analytically determining time-averaged steady-state pressure drops and flowrates for every flow segment in an injector when such data are not known. These pressure drops and flowrates are then used in determining the linearized flow resistance for each line segment of flow.
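The lumped-parameter quantities the model assembles can be written down directly: the fluid inertance of a flow segment from its line length and area, and the hydraulic capacitance of a pressure node from its volume and acoustic velocity. The formulas below are the standard lumped definitions (I = ρL/A, C = V/(ρa²)); the numerical values are illustrative assumptions, not values from the report.

```python
def inertance(density, line_length, area):
    """Lumped fluid inertance of a flow segment, I = rho * L / A  [kg/m^4]."""
    return density * line_length / area

def capacitance(volume, density, acoustic_velocity):
    """Lumped hydraulic capacitance of a pressure node, C = V / (rho * a^2)  [m^4*s^2/kg]."""
    return volume / (density * acoustic_velocity ** 2)

# Illustrative oxidizer-like segment and manifold node (not values from the report)
I = inertance(density=1140.0, line_length=0.15, area=3.0e-5)
C = capacitance(volume=2.0e-4, density=1140.0, acoustic_velocity=900.0)
print(f"inertance = {I:.3e} kg/m^4, capacitance = {C:.3e} m^4*s^2/kg")
```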
Cerebellar Volume in Children With Attention-Deficit Hyperactivity Disorder (ADHD).
Wyciszkiewicz, Aleksandra; Pawlak, Mikolaj A; Krawiec, Krzysztof
2017-02-01
Attention Deficit Hyperactivity Disorder (ADHD) is associated with altered cerebellar volume, and the cerebellum is associated with cognitive performance. However, there are mixed results regarding cerebellar volume in young patients with ADHD. To clarify the size and direction of this effect, we conducted the analysis on a large public database of brain images. The aim of this study was to confirm that cerebellar volume in ADHD is smaller than in control subjects in what is currently the largest publicly available cohort of ADHD subjects. We applied a cross-sectional case-control study design, comparing 286 ADHD patients (61 female) with age- and gender-matched control subjects. Volumetric measurements of the cerebellum were obtained using automated segmentation with FreeSurfer 5.1. Statistical analysis was performed in the R (CRAN) statistical environment. Patients with ADHD had significantly smaller total cerebellar volumes (134.5 ± 17.11 cm³ vs. 138.90 ± 15.32 cm³). The effect was present in both females and males (males 136.9 ± 14.37 cm³ vs. 141.20 ± 14.75 cm³; females 125.7 ± 12.34 cm³ vs. 131.20 ± 15.03 cm³). Age was positively and significantly associated with cerebellar volume. These results indicate either delayed or disrupted cerebellar development, possibly contributing to ADHD pathophysiology.
Souto Bayarri, M; Masip Capdevila, L; Remuiñan Pereira, C; Suárez-Cuenca, J J; Martínez Monzonís, A; Couto Pérez, M I; Carreira Villamor, J M
2015-01-01
To compare the methods of right ventricle segmentation in the short-axis and 4-chamber planes in cardiac magnetic resonance imaging and to correlate the findings with those of the tricuspid annular plane systolic excursion (TAPSE) method in echocardiography. We used a 1.5 T MRI scanner to study 26 patients with diverse cardiovascular diseases. In all MRI studies, we obtained cine-mode images from the base to the apex in both the short-axis and 4-chamber planes using steady-state free precession sequences and 6 mm thick slices. In all patients, we quantified the end-diastolic volume, end-systolic volume, and ejection fraction of the right ventricle. On the same day as the cardiac magnetic resonance imaging study, 14 patients also underwent echocardiography with TAPSE calculation of right ventricular function. No statistically significant differences were found in the volumes and function of the right ventricle calculated using the two segmentation methods. The correlation between the volume estimations of the two segmentation methods was excellent (r = 0.95); the correlation for the ejection fraction was slightly lower (r = 0.8). The correlation between the cardiac magnetic resonance imaging estimate of right ventricular ejection fraction and TAPSE was very low (r = 0.2, p < 0.01). Both ventricular segmentation methods quantify right ventricular function adequately. The correlation with the echocardiographic method is low. Copyright © 2012 SERAM. Published by Elsevier España, S.L.U. All rights reserved.
de Hoop, Bartjan; Gietema, Hester; van Ginneken, Bram; Zanen, Pieter; Groenewegen, Gerard; Prokop, Mathias
2009-04-01
We compared interexamination variability of CT lung nodule volumetry with six currently available semi-automated software packages to determine the minimum change needed to detect the growth of solid lung nodules. We had ethics committee approval. To simulate a follow-up examination with zero growth, we performed two low-dose unenhanced CT scans in 20 patients referred for pulmonary metastases. Between examinations, patients got off and on the table. Volumes of all pulmonary nodules were determined on both examinations using six nodule evaluation software packages. Variability (upper limit of the 95% confidence interval of the Bland-Altman plot) was calculated for nodules for which segmentation was visually rated as adequate. We evaluated 214 nodules (mean diameter 10.9 mm, range 3.3 mm-30.0 mm). Software packages provided adequate segmentation in 71% to 86% of nodules (p < 0.001). In case of adequate segmentation, variability in volumetry between scans ranged from 16.4% to 22.3% for the various software packages. Variability with five to six software packages was significantly less for nodules ≥ 8 mm in diameter (range 12.9%-17.1%) than for nodules < 8 mm (range 18.5%-25.6%). Segmented volumes of each package were compared to each of the other packages. Systematic volume differences were detected in 11/15 comparisons. This hampers comparison of nodule volumes between software packages.
NASA Astrophysics Data System (ADS)
Kobayashi, Yusuke; Watanabe, Teiji
2017-04-01
This study has three objectives: (1) to estimate changes in the eroded volume of mountain trails from 2014 to 2016 by making DSMs, (2) to understand the relationship between trail erosion and micro-topography, and (3) to predict the volume of soil that can be eroded in the future. Trail erosion has been investigated near Mt. Hokkai-dake in Daisetsuzan National Park, Hokkaido, northern Japan, with a drone (UAV) from 2014 to 2016. Seven segments with soil erosion, from their starting sites to their ending sites, were selected to make DSMs and orthophotographs with Agisoft, one of the Structure from Motion (SfM) software packages. Then, fourteen points in each of the seven segments were selected to estimate the volume of soil that can be eroded in the future with PANDA2, a soil compaction penetrometer. The eroded volume in the most strongly eroded segment reached 274.67 m³ over the two-year period, although extremely heavy rain hit this area in the summer of 2016. The results obtained with PANDA2 show that soil to a depth of more than 100 cm will potentially be eroded at four points within three to one hundred years.
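Eroded volumes from repeat DSMs of this kind are typically obtained by differencing the co-registered surfaces and summing the surface lowering over each trail segment times the cell area. A hedged numpy sketch follows; the grid size, detection threshold, and synthetic surfaces are assumptions for illustration.

```python
import numpy as np

def eroded_volume(dsm_old, dsm_new, cell_area_m2, threshold_m=0.0):
    """Volume lost between two co-registered DSMs (m^3): sum of surface lowering
    (old minus new) over cells that dropped by more than the detection threshold."""
    dz = np.asarray(dsm_old, dtype=float) - np.asarray(dsm_new, dtype=float)
    lowering = np.where(dz > threshold_m, dz, 0.0)
    return lowering.sum() * cell_area_m2

# Synthetic 2014/2016 surfaces for one trail segment (5 cm grid assumed)
rng = np.random.default_rng(4)
dsm_2014 = rng.normal(1650.0, 0.02, size=(400, 400))
dsm_2016 = dsm_2014 - np.clip(rng.normal(0.05, 0.05, size=(400, 400)), 0, None)
print(f"eroded volume: {eroded_volume(dsm_2014, dsm_2016, cell_area_m2=0.05**2):.1f} m^3")
```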
Liu, Ting; Maurovich-Horvat, Pál; Mayrhofer, Thomas; Puchner, Stefan B; Lu, Michael T; Ghemigian, Khristine; Kitslaar, Pieter H; Broersen, Alexander; Pursnani, Amit; Hoffmann, Udo; Ferencik, Maros
2018-02-01
Semi-automated software can provide quantitative assessment of atherosclerotic plaques on coronary CT angiography (CTA). The relationship between established qualitative high-risk plaque features and quantitative plaque measurements has not been studied. We analyzed the association between quantitative plaque measurements and qualitative high-risk plaque features on coronary CTA. We included 260 patients with plaque who underwent coronary CTA in the Rule Out Myocardial Infarction/Ischemia Using Computer Assisted Tomography (ROMICAT) II trial. Quantitative plaque assessment and qualitative plaque characterization were performed on a per coronary segment basis. Quantitative coronary plaque measurements included plaque volume, plaque burden, remodeling index, and diameter stenosis. In qualitative analysis, high-risk plaque was present if positive remodeling, low CT attenuation plaque, napkin-ring sign or spotty calcium were detected. Univariable and multivariable logistic regression analyses were performed to assess the association between quantitative and qualitative high-risk plaque assessment. Among 888 segments with coronary plaque, high-risk plaque was present in 391 (44.0%) segments by qualitative analysis. In quantitative analysis, segments with high-risk plaque had higher total plaque volume, low CT attenuation plaque volume, plaque burden and remodeling index. Quantitatively assessed low CT attenuation plaque volume (odds ratio 1.12 per 1 mm³, 95% CI 1.04-1.21), positive remodeling (odds ratio 1.25 per 0.1, 95% CI 1.10-1.41) and plaque burden (odds ratio 1.53 per 0.1, 95% CI 1.08-2.16) were associated with high-risk plaque. Quantitative coronary plaque characteristics (low CT attenuation plaque volume, positive remodeling and plaque burden) measured by semi-automated software correlated with qualitative assessment of high-risk plaque features.
Differential effects of lower body negative pressure and upright tilt on splanchnic blood volume
Taneja, Indu; Moran, Christopher; Medow, Marvin S.; Glover, June L.; Montgomery, Leslie D.; Stewart, Julian M.
2015-01-01
Upright posture and lower body negative pressure (LBNP) both induce reductions in central blood volume. However, regional circulatory responses to postural changes and LBNP may differ. Therefore, we studied regional blood flow and blood volume changes in 10 healthy subjects undergoing graded lower-body negative pressure (−10 to −50 mmHg) and 8 subjects undergoing incremental head-up tilt (HUT; 20°, 40°, and 70°) on separate days. We continuously measured blood pressure (BP), heart rate, and regional blood volumes and blood flows in the thoracic, splanchnic, pelvic, and leg segments by impedance plethysmography and calculated regional arterial resistances. Neither LBNP nor HUT altered systolic BP, whereas pulse pressure decreased significantly. Blood flow decreased in all segments, whereas peripheral resistances uniformly and significantly increased with both HUT and LBNP. Thoracic volume decreased while pelvic and leg volumes increased with HUT and LBNP. However, splanchnic volume changes were directionally opposite with stepwise decreases in splanchnic volume with LBNP and stepwise increases in splanchnic volume during HUT. Splanchnic emptying in LBNP models regional vascular changes during hemorrhage. Splanchnic filling may limit the ability of the splanchnic bed to respond to thoracic hypovolemia during upright posture. PMID:17085534
Moving metal artifact reduction in cone-beam CT scans with implanted cylindrical gold markers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toftegaard, Jakob, E-mail: jaktofte@rm.dk; Fledelius, Walther; Worm, Esben S.
2014-12-15
Purpose: Implanted gold markers for image-guided radiotherapy lead to streaking artifacts in cone-beam CT (CBCT) scans. Several methods for metal artifact reduction (MAR) have been published, but they all fail in scans with large motion. Here the authors propose and investigate a method for automatic moving metal artifact reduction (MMAR) in CBCT scans with cylindrical gold markers. Methods: The MMAR CBCT reconstruction method has six steps. (1) Automatic segmentation of the cylindrical markers in the CBCT projections. (2) Removal of each marker in the projections by replacing the pixels within a masked area with interpolated values. (3) Reconstruction of a marker-free CBCT volume from the manipulated CBCT projections. (4) Reconstruction of a standard CBCT volume with metal artifacts from the original CBCT projections. (5) Estimation of the three-dimensional (3D) trajectory during CBCT acquisition for each marker based on the segmentation in Step 1, and identification of the smallest ellipsoidal volume that encompasses 95% of the visited 3D positions. (6) Generation of the final MMAR CBCT reconstruction from the marker-free CBCT volume of Step 3 by replacing the voxels in the 95% ellipsoid with the corresponding voxels of the standard CBCT volume of Step 4. The MMAR reconstruction was performed retrospectively using a half-fan CBCT scan for 29 consecutive stereotactic body radiation therapy patients with 2–3 gold markers implanted in the liver. The metal artifacts of the MMAR reconstructions were scored and compared with a standard MAR reconstruction by counting the streaks and by calculating the standard deviation of the Hounsfield units in a region around each marker. Results: The markers were found with the same autosegmentation settings in 27 CBCT scans, while two scans needed slightly changed settings to find all markers automatically in Step 1 of the MMAR method. MMAR resulted in 15 scans with no streaking artifacts, 11 scans with 1–4 streaks, and 3 scans with severe streaking artifacts. The corresponding numbers for MAR were 8 (no streaks), 1 (1–4 streaks), and 20 (severe streaking artifacts). The MMAR method was superior to MAR in scans with more than 8 mm 3D marker motion and comparable to MAR for scans with less than 8 mm motion. In addition, the MMAR method was tested on a 4D CBCT reconstruction, for which it worked as well as in the 3D case. The markers in the 4D case had very low motion blur. Conclusions: An automatic method for MMAR in CBCT scans was proposed and shown to effectively remove almost all streaking artifacts in a large set of clinical CBCT scans with implanted gold markers in the liver. Residual streaking artifacts observed in three CBCT scans may be removed with better marker segmentation.
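A sketch of the masking idea in Step 6 follows: inside each marker's 95% motion ellipsoid the standard reconstruction is kept, elsewhere the marker-free reconstruction is used. An axis-aligned ellipsoid and toy volumes are assumed for simplicity; the paper's ellipsoid is fit to the segmented marker trajectory and need not be axis-aligned.

```python
import numpy as np

def apply_motion_ellipsoid(marker_free_vol, standard_vol, centre_vox, semi_axes_vox):
    """Step-6 idea of the MMAR method: inside the ellipsoid that encloses 95% of the
    marker's 3D positions, keep the standard (marker-visible) reconstruction;
    elsewhere keep the marker-free reconstruction."""
    zz, yy, xx = np.indices(marker_free_vol.shape)
    cz, cy, cx = centre_vox
    az, ay, ax = semi_axes_vox
    inside = (((zz - cz) / az) ** 2 + ((yy - cy) / ay) ** 2 + ((xx - cx) / ax) ** 2) <= 1.0
    out = marker_free_vol.copy()
    out[inside] = standard_vol[inside]
    return out

# Illustrative volumes and ellipsoid parameters (voxel units), not clinical data
mf = np.zeros((64, 128, 128), dtype=np.float32)
std = np.ones_like(mf)
fused = apply_motion_ellipsoid(mf, std, centre_vox=(32, 64, 64), semi_axes_vox=(6, 4, 4))
```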
Afshar, Yaser; Sbalzarini, Ivo F.
2016-01-01
Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
SLS launched missions concept studies for LUVOIR mission
NASA Astrophysics Data System (ADS)
Stahl, H. Philip; Hopkins, Randall C.
2015-09-01
NASA's "Enduring Quests Daring Visions" report calls for an 8- to 16-m Large UV-Optical-IR (LUVOIR) Surveyor mission to enable ultra-high-contrast spectroscopy and coronagraphy. AURA's "From Cosmic Birth to Living Earth" report calls for a 12-m class High-Definition Space Telescope to pursue transformational scientific discoveries. The multi-center ATLAST Team is working to meet these needs. The MSFC Team is examining potential concepts that leverage the advantages of the SLS (Space Launch System). A key challenge is how to affordably get a large telescope into space. The JWST design was severely constrained by the mass and volume capacities of its launch vehicle. This problem is solved by using an SLS Block II-B rocket with its 10-m diameter x 30-m tall fairing and estimated 45 mt payload to SE-L2. Previously, two development study cycles produced a detailed concept called ATLAST-8. Using ATLAST-8 as a point of departure, this paper reports on a new ATLAST-12 concept. ATLAST-12 is a 12-m class segmented aperture LUVOIR with an 8-m class center segment. Thus, ATLAST-8 is now a de-scope option.
SLS Launched Missions Concept Studies for LUVOIR Mission
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Hopkins, Randall C.
2015-01-01
NASA's "Enduring Quests Daring Visions" report calls for an 8- to 16-meter Large UV-Optical-IR (LUVOIR) Surveyor mission to enable ultra-high-contrast spectroscopy and coronagraphy. AURA's "From Cosmic Birth to Living Earth" report calls for a 12-meter class High-Definition Space Telescope to pursue transformational scientific discoveries. The multi-center ATLAST Team is working to meet these needs. The MSFC Team is examining potential concepts that leverage the advantages of the SLS (Space Launch System). A key challenge is how to affordably get a large telescope into space. The JWST design was severely constrained by the mass and volume capacities of its launch vehicle. This problem is solved by using an SLS Block II-B rocket with its 10-m diameter x 30-m tall fairing and 45 mt payload to SE-L2. Previously, two development study cycles produced a detailed concept called ATLAST-8. Using ATLAST-8 as a point of departure, this paper reports on a new ATLAST-12 concept. ATLAST-12 is a 12-meter class segmented aperture LUVOIR with an 8-m class center segment. Thus, ATLAST-8 is now a de-scope option.
Volumetric Assessment of Swallowing Muscles: A Comparison of CT and MRI Segmentation.
Sporns, Kim Barbara; Hanning, Uta; Schmidt, Rene; Muhle, Paul; Wirth, Rainer; Zimmer, Sebastian; Dziewas, Rainer; Suntrup-Krueger, Sonja; Sporns, Peter Bernhard; Heindel, Walter; Schwindt, Wolfram
2018-05-01
Recent retrospective studies have proposed a high correlation between atrophy of swallowing muscles, age, severity of dysphagia and aspiration status based on computed tomography (CT). However, ionizing radiation poses an ethical barrier to research in prospective non-patient populations. Hence, there is a need to prove the efficacy of techniques that rely on noninvasive methods and produce high-resolution soft tissue images, such as magnetic resonance imaging (MRI). The objective of this study was therefore to compare the segmentation results of swallowing muscles using CT and MRI. This was a retrospective study of 21 patients (median age: 46.6; 11 female) who underwent contrast-enhanced CT and MRI of the head and neck region, within a time frame of less than 50 days, because of suspected head and neck cancer. CT and MR images were segmented by two blinded readers using the Medical Imaging Toolkit (MITK), and the two modalities were tested (with the equivalence test) regarding the segmented muscle volumes. Adjustment for multiple testing was performed using the Bonferroni correction, and the potential effect of the time interval between the modalities on the muscle volumes was assessed with a Spearman correlation. The study was approved by the local ethics committee. The median volumes for each belly of the digastric muscle derived from CT were 3051 mm³ (left) and 2969 mm³ (right), and from MRI they were 3218 mm³ (left) and 3027 mm³ (right). The median volume of the geniohyoid muscle was 6580 mm³ on CT and 6648 mm³ on MRI. The interrater reliability was high for all segmented muscles. The mean time interval between the CT and MRI examinations was 34 days (IQR 25; 41). The volume differences for each muscle between the two modalities did not show a significant correlation with the time interval between the examinations (digastric left r = 0.003, digastric right r = -0.008; geniohyoid muscle r = 0.075). CT-based segmentation and MRI-based segmentation of the digastric and geniohyoid muscles are equally feasible. The potential advantage of MRI for prospective studies is the absence of ionizing radiation. · CT-based segmentation and MRI-based segmentation of the swallowing muscles are equally feasible. · The advantage of MRI is the absence of ionizing radiation. · MRI should therefore be deployed for future prospective studies. · Sporns KB, Hanning U, Schmidt R et al. Volumetric Assessment of Swallowing Muscles: A Comparison of CT and MRI Segmentation. Fortschr Röntgenstr 2018; 190: 441-446. © Georg Thieme Verlag KG Stuttgart · New York.
Ciernik, I Frank; Brown, Derek W; Schmid, Daniel; Hany, Thomas; Egli, Peter; Davis, J Bernard
2007-02-01
Volumetric assessment of PET signals is becoming increasingly relevant for radiotherapy (RT) planning. Here, we investigate the utility of 18F-choline PET signals as a structure for semi-automatic segmentation in forward treatment planning of prostate cancer. 18F-choline PET and CT scans of ten patients with histologically proven prostate cancer without extracapsular growth were acquired using a combined PET/CT scanner. Target volumes were manually delineated on CT images using standard software. Volumes were also obtained from 18F-choline PET images using an asymmetrical segmentation algorithm. PTVs were derived from CT- and 18F-choline PET-based clinical target volumes (CTVs) by automatic expansion, and comparative planning was performed. As a read-out for dose given to non-target structures, dose to the rectal wall was assessed. Planning target volumes (PTVs) derived from CT and 18F-choline PET yielded comparable results. Optimal matching of CT- and 18F-choline PET-derived volumes in the lateral and cranial-caudal directions was obtained using a background-subtracted signal threshold of 23.0 ± 2.6%. In the antero-posterior direction, where adaptation compensating for rectal signal overflow was required, optimal matching was achieved with a threshold of 49.5 ± 4.6%. 3D-conformal planning with CT or 18F-choline PET resulted in comparable doses to the rectal wall. Choline PET signals of the prostate provide adequate spatial information amenable to standardized asymmetrical region-growing algorithms for PET-based target volume definition for external beam RT.
Feasibility and Acute Toxicity of Hypofractionated Radiation in Large-breasted Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorn, Paige L., E-mail: pdorn@radonc.uchicago.edu; Corbin, Kimberly S.; Al-Hallaq, Hania
Purpose: To determine the feasibility of and acute toxicity associated with hypofractionated whole breast radiation (HypoRT) after breast-conserving surgery in patients excluded from or underrepresented in randomized trials comparing HypoRT with conventional fractionation schedules. Methods and Materials: A review was conducted of all patients consecutively treated with HypoRT at University of Chicago. All patients were treated to 42.56 Gy in 2.66 Gy daily fractions in either the prone or supine position. Planning was performed in most cases using wedges and large segments or a 'field-in-field' technique. Breast volume was estimated using volumetric measurements of the planning target volume (PTV). Dosimetric parameters of heterogeneity (V105, V107, V110, and maximum dose) were recorded for each treatment plan. Acute toxicity was scored for each treated breast. Results: Between 2006 and 2010, 78 patients were treated to 80 breasts using HypoRT. Most women were overweight or obese (78.7%), with a median body mass index of 29.2 kg/m². Median breast volume was 1,351 mL. Of the 80 treated breasts, the maximum acute skin toxicity was mild erythema or hyperpigmentation in 70.0% (56/80), dry desquamation in 21.25% (17/80), and focal moist desquamation in 8.75% (7/80). Maximum acute toxicity occurred after the completion of radiation in 31.9% of patients. Separation >25 cm was not associated with increased toxicity. Breast volume was the only patient factor significantly associated with moist desquamation on multivariable analysis (p = 0.01). Patients with breast volume >2,500 mL experienced focal moist desquamation in 27.2% of cases compared with 6.34% in patients with breast volume <2,500 mL (p = 0.03). Conclusions: HypoRT is feasible and safe in patients with separation >25 cm and in patients with large breast volume when employing modern planning and positioning techniques. We recommend counseling regarding expected increases in skin toxicity in women with a PTV volume >2,500 mL.
Novel multimodality segmentation using level sets and Jensen-Rényi divergence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markel, Daniel, E-mail: daniel.markel@mail.mcgill.ca; Zaidi, Habib; Geneva Neuroscience Center, Geneva University, CH-1205 Geneva
2013-12-15
Purpose: Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. Methods: A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. Results: The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with an R² value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. Conclusions: The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.
Automated aortic calcification detection in low-dose chest CT images
NASA Astrophysics Data System (ADS)
Xie, Yiting; Htwe, Yu Maw; Padgett, Jennifer; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.
2014-03-01
The extent of aortic calcification has been shown to be a risk indicator for vascular events including cardiac events. We have developed a fully automated computer algorithm to segment and measure aortic calcification in low-dose noncontrast, non-ECG gated, chest CT scans. The algorithm first segments the aorta using a pre-computed Anatomy Label Map (ALM). Then based on the segmented aorta, aortic calcification is detected and measured in terms of the Agatston score, mass score, and volume score. The automated scores are compared with reference scores obtained from manual markings. For aorta segmentation, the aorta is modeled as a series of discrete overlapping cylinders and the aortic centerline is determined using a cylinder-tracking algorithm. Then the aortic surface location is detected using the centerline and a triangular mesh model. The segmented aorta is used as a mask for the detection of aortic calcification. For calcification detection, the image is first filtered, then an elevated threshold of 160 Hounsfield units (HU) is used within the aorta mask region to reduce the effect of noise in low-dose scans, and finally non-aortic calcification voxels (bony structures, calcification in other organs) are eliminated. The remaining candidates are considered as true aortic calcification. The computer algorithm was evaluated on 45 low-dose non-contrast CT scans. Using linear regression, the automated Agatston score is 98.42% correlated with the reference Agatston score. The automated mass and volume score is respectively 98.46% and 98.28% correlated with the reference mass and volume score.
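A simplified sketch of the calcification scoring step may help: candidate voxels above the elevated 160 HU threshold inside the aorta mask are grouped into lesions and scored with Agatston-style density weights. The function below is an assumption-laden illustration (the clinical Agatston protocol uses a 130 HU threshold and additional per-lesion rules), not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def agatston_like_score(ct_slice, aorta_mask, pixel_area_mm2, detect_hu=160):
    """Simplified, per-slice Agatston-style score for aortic calcification.

    Candidate voxels are those above `detect_hu` inside the aorta mask
    (160 HU here, as described above, to suppress noise in low-dose scans).
    Each connected lesion contributes area * density weight, with standard
    Agatston density weights based on its peak HU.
    """
    candidates = (ct_slice >= detect_hu) & aorta_mask
    labels, n_lesions = ndimage.label(candidates)
    score = 0.0
    for lesion in range(1, n_lesions + 1):
        lesion_mask = labels == lesion
        peak_hu = ct_slice[lesion_mask].max()
        weight = 1 if peak_hu < 200 else 2 if peak_hu < 300 else 3 if peak_hu < 400 else 4
        score += lesion_mask.sum() * pixel_area_mm2 * weight
    return score
```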
Schinagl, Dominic A X; Vogel, Wouter V; Hoffmann, Aswin L; van Dalen, Jorn A; Oyen, Wim J; Kaanders, Johannes H A M
2007-11-15
Target-volume delineation for radiation treatment to the head and neck area traditionally is based on physical examination, computed tomography (CT), and magnetic resonance imaging. Additional molecular imaging with (18)F-fluoro-deoxy-glucose (FDG)-positron emission tomography (PET) may improve definition of the gross tumor volume (GTV). In this study, five methods for tumor delineation on FDG-PET are compared with CT-based delineation. Seventy-eight patients with Stages II-IV squamous cell carcinoma of the head and neck area underwent coregistered CT and FDG-PET. The primary tumor was delineated on CT, and five PET-based GTVs were obtained: visual interpretation, applying an isocontour of a standardized uptake value of 2.5, using a fixed threshold of 40% and 50% of the maximum signal intensity, and applying an adaptive threshold based on the signal-to-background ratio. Absolute GTV volumes were compared, and overlap analyses were performed. The GTV method of applying an isocontour of a standardized uptake value of 2.5 failed to provide successful delineation in 45% of cases. For the other PET delineation methods, volume and shape of the GTV were influenced heavily by the choice of segmentation tool. On average, all threshold-based PET-GTVs were smaller than on CT. Nevertheless, PET frequently detected significant tumor extension outside the GTV delineated on CT (15-34% of PET volume). The choice of segmentation tool for target-volume definition of head and neck cancer based on FDG-PET images is not trivial because it influences both volume and shape of the resulting GTV. With adequate delineation, PET may add significantly to CT- and physical examination-based GTV definition.
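Three of the five PET delineation methods above are simple intensity thresholds, and the overlap analysis reduces to mask arithmetic. The sketch below is illustrative only: the array names and the ROI restriction are assumptions, and the visual and adaptive signal-to-background methods are omitted.

```python
import numpy as np

def pet_gtvs(suv, roi):
    """Threshold-based PET GTVs: an SUV 2.5 isocontour and fixed 40% / 50%
    of the maximum signal intensity, restricted to a tumour region of interest."""
    peak = suv[roi].max()
    return {
        "SUV2.5": roi & (suv >= 2.5),
        "40%max": roi & (suv >= 0.40 * peak),
        "50%max": roi & (suv >= 0.50 * peak),
    }

def overlap_fractions(pet_gtv, ct_gtv, voxel_volume_ml):
    """Volumes of both GTVs and the fraction of the PET GTV lying outside the CT GTV."""
    outside = pet_gtv & ~ct_gtv
    return {
        "pet_volume_ml": pet_gtv.sum() * voxel_volume_ml,
        "ct_volume_ml": ct_gtv.sum() * voxel_volume_ml,
        "pet_outside_ct_fraction": outside.sum() / max(pet_gtv.sum(), 1),
    }
```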
NASA Technical Reports Server (NTRS)
1980-01-01
A rolltrusion process was developed for forming a hybrid, single-ply woven graphite and glass fiber cloth, impregnated with a polysulfone resin and coated with TiO2-pigmented P-1700 resin, into strips for the on-orbit fabrication of triangular truss segments. Ultrasonic welding in vacuum showed no identifiable effects on weld strength or resin flow characteristics. An existing bench-model cap roll forming machine was modified and used to roll form caps for the prototype test truss and for column test specimens in order to test local buckling and torsional instability characteristics.
Increasing the speed of medical image processing in MatLab®
Bister, M; Yap, CS; Ng, KH; Tok, CH
2007-01-01
MatLab® has often been considered an excellent environment for fast algorithm development but is generally perceived as slow and hence not fit for routine medical image processing, where large data sets are now available, e.g., high-resolution CT image sets with typically hundreds of 512x512 slices. Yet, with proper programming practices – vectorization, pre-allocation and specialization – applications in MatLab® can run as fast as in the C language. In this article, this point is illustrated with fast implementations of bilinear interpolation, watershed segmentation and volume rendering. PMID:21614269
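The practices named above (vectorization, pre-allocation, specialization) carry over directly to other array languages. As an illustration only, and not the authors' MATLAB code, here is a NumPy sketch of a fully vectorized bilinear interpolation with no per-pixel loops.

```python
import numpy as np

def bilinear_interpolate(image, x, y):
    """Vectorised bilinear interpolation: all samples are computed with array
    operations, mirroring the vectorization/pre-allocation practices above.

    image : 2D array; x, y : arrays of fractional sample coordinates.
    """
    x0 = np.clip(np.floor(x).astype(int), 0, image.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, image.shape[0] - 2)
    dx, dy = x - x0, y - y0
    top = image[y0, x0] * (1 - dx) + image[y0, x0 + 1] * dx
    bottom = image[y0 + 1, x0] * (1 - dx) + image[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bottom * dy
```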
Hierarchical imaging: a new concept for targeted imaging of large volumes from cells to tissues.
Wacker, Irene; Spomer, Waldemar; Hofmann, Andreas; Thaler, Marlene; Hillmer, Stefan; Gengenbach, Ulrich; Schröder, Rasmus R
2016-12-12
Imaging large volumes such as entire cells or small model organisms at nanoscale resolution has so far seemed an unrealistic, rather tedious task. Now, technical advances have led to several electron microscopy (EM) large-volume imaging techniques. One is array tomography, where ribbons of ultrathin serial sections are deposited on solid substrates like silicon wafers or glass coverslips. To ensure reliable retrieval of multiple ribbons from the boat of a diamond knife we introduce a substrate holder with 7 axes of translation or rotation specifically designed for that purpose. With this device we are able to deposit hundreds of sections in an ordered way in an area of 22 × 22 mm, the size of a coverslip. Imaging such arrays in a standard wide-field fluorescence microscope produces reconstructions with 200 nm lateral resolution and 100 nm (the section thickness) resolution in z. By hierarchical imaging cascades in the scanning electron microscope (SEM), using a new software platform, we can address volumes from single cells to complete organs. In our first example, a cell population isolated from zebrafish spleen, we characterize different cell types according to their organelle inventory by segmenting 3D reconstructions of complete cells imaged with nanoscale resolution. In addition, by screening large numbers of cells at decreased resolution we can define the percentage at which different cell types are present in our preparation. With the second example, the root tip of cress, we illustrate how combining information from intermediate-resolution data with high-resolution data from selected regions of interest can drastically reduce the amount of data that has to be recorded. By imaging only the interesting parts of a sample, considerably less data need to be stored, handled and eventually analysed. Our custom-designed substrate holder allows reproducible generation of section libraries, which can then be imaged in a hierarchical way. We demonstrate that EM volume data at different levels of resolution can yield comprehensive information, including statistics, morphology and organization of cells and tissue. We predict that hierarchical imaging will be a first step in tackling the big data issue inevitably connected with volume EM.
Wang, Li; Shi, Feng; Gao, Yaozong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2014-01-01
Segmentation of infant brain MR images is challenging due to poor spatial resolution, severe partial volume effect, and the ongoing maturation and myelination process. During the first year of life, the image contrast between white and gray matter undergoes dramatic changes. In particular, the image contrast inverts around 6–8 months of age, when white and gray matter tissues are isointense in T1- and T2-weighted images and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a general framework that adopts sparse representation to fuse multi-modality image information and further incorporates anatomical constraints for brain tissue segmentation. Specifically, we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion for the multi-modality T1, T2 and FA images. The segmentation result is then iteratively refined by integration of the anatomical constraint. The proposed method was evaluated on 22 infant brain MR images acquired at around 6 months of age using leave-one-out cross-validation, as well as on 10 unseen testing subjects. Our method achieved high accuracy in terms of the Dice ratios that measure the volume overlap between automated and manual segmentations, i.e., 0.889±0.008 for white matter and 0.870±0.006 for gray matter. PMID:24291615
Evaluation metrics for bone segmentation in ultrasound
NASA Astrophysics Data System (ADS)
Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas
2015-03-01
Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging because ultrasound has no intensity characteristic specific to bone. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that aids in the development and comparison of such algorithms by quantitatively measuring segmentation performance in ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground-truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slice and standard deviation are provided. The metrics provide a means of evaluating the accuracy of frames along the length of a volume. This aids in assessing the accuracy of the volume itself and of the approach to image acquisition (positioning and frame frequency). The framework was implemented as an open-source module of the 3D Slicer platform. The ground-truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits into a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit from monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
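The per-frame metrics described above reduce to operations on binary masks. The following sketch is illustrative only: the function names are not the actual 3D Slicer module's API, and it assumes binary masks for the automatic segmentation, the ground-truth bone, and the annotated boneless region.

```python
import numpy as np

def frame_metrics(auto_mask, gt_bone, gt_boneless):
    """Per-frame metrics in the spirit of the framework above: the fraction of
    ground-truth bone correctly segmented, the fraction missed, and the fraction
    of the annotated boneless region correctly left unsegmented."""
    tp = (auto_mask & gt_bone).sum() / max(gt_bone.sum(), 1)
    fn = ((~auto_mask) & gt_bone).sum() / max(gt_bone.sum(), 1)
    tn = ((~auto_mask) & gt_boneless).sum() / max(gt_boneless.sum(), 1)
    return tp, fn, tn

def volume_summary(per_frame_values):
    """Average performance per frame and its standard deviation along the volume."""
    values = np.asarray(per_frame_values, dtype=float)
    return values.mean(), values.std()
```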
Combining watershed and graph cuts methods to segment organs at risk in radiotherapy
NASA Astrophysics Data System (ADS)
Dolz, Jose; Kirisli, Hortense A.; Viard, Romain; Massoptier, Laurent
2014-03-01
Computer-aided segmentation of anatomical structures in medical images is a valuable tool for efficient radiation therapy planning (RTP). As delineation errors strongly affect the radiation oncology treatment, it is crucial to delineate geometric structures accurately. In this paper, a semi-automatic segmentation approach for computed tomography (CT) images, based on watershed and graph-cuts methods, is presented. The watershed pre-segmentation groups small areas of similar intensities into homogeneous labels, which are subsequently used as input for the graph-cuts algorithm. This methodology does not require prior knowledge of the structure to be segmented; even so, it performs well with complex shapes and low intensity contrast. The presented method also allows the user to add foreground and background strokes in any of the three standard orthogonal views - axial, sagittal or coronal - making the interaction with the algorithm easy and fast. The segmentation information is then propagated within the whole volume, providing a spatially coherent result. The proposed algorithm has been evaluated using 9 CT volumes, by comparing its segmentation performance over several organs - lungs, liver, spleen, heart and aorta - to manual delineations from experts. A Dice coefficient higher than 0.89 was achieved in every case, demonstrating that the proposed approach works well for all the anatomical structures analyzed. Given the quality of the results, the introduction of the proposed approach into the RTP process will provide a helpful tool for organ-at-risk (OAR) segmentation.
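The watershed pre-segmentation stage can be sketched with standard tools; the subsequent graph-cuts optimization, which would consume these labels as graph nodes, is omitted here. This is a hedged illustration using scikit-image rather than the authors' implementation.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_presegmentation(ct_slice, markers=None):
    """Group small areas of similar intensity into homogeneous labels.

    The watershed is run on the gradient magnitude so that regions stop at
    intensity edges. The returned labels and their mean intensities would form
    the nodes and regional terms of a subsequent graph-cuts problem (solved
    with a max-flow library, not shown here)."""
    gradient = sobel(ct_slice.astype(float))
    labels = watershed(gradient, markers=markers)
    region_means = {int(r): float(ct_slice[labels == r].mean())
                    for r in np.unique(labels)}
    return labels, region_means
```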
Wang, Jieqiong; Miao, Wen; Li, Jing; Li, Meng; Zhen, Zonglei; Sabel, Bernhard; Xian, Junfang; He, Huiguang
2015-11-30
The lateral geniculate nucleus (LGN) is a key relay center of the visual system. Because LGN morphology is affected by different diseases, it is of interest to analyze it by segmentation. However, existing LGN segmentation methods are non-automatic, inefficient and prone to experimenter bias. To address these problems, we propose an automatic LGN segmentation algorithm based on T1-weighted imaging. First, prior information about the LGN is used to create a prior mask. Then region growing is applied to delineate the LGN. We evaluated this automatic LGN segmentation method by (1) comparison with manually segmented LGN, (2) anatomically locating the LGN in the visual system via LGN-based tractography, and (3) application to controls and glaucoma patients. The similarity coefficients between the automatically and manually segmented LGN are 0.72 (0.06) for the left LGN and 0.77 (0.07) for the right LGN. LGN-based tractography shows that the subcortical pathway seeded from the LGN passes through the optic tract and also reaches V1 through the optic radiation, which is consistent with the LGN's location in the visual system. In addition, LGN asymmetry as well as LGN atrophy with age is observed in normal controls. The investigation of glaucoma effects on LGN volumes demonstrates that the bilateral LGN volumes shrink in patients. The automatic LGN segmentation is objective, efficient, valid and applicable. Experimental results proved the validity and applicability of the algorithm. Our method will speed up research on the visual system and greatly enhance studies of different vision-related diseases. Copyright © 2015 Elsevier B.V. All rights reserved.
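A minimal region-growing sketch, constrained by a prior mask as described above, is given below. The intensity-similarity criterion and 6-connectivity are assumptions; the abstract does not specify the published algorithm's exact growing rule.

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, prior_mask, tol):
    """Grow a region from `seed` (z, y, x) to 6-connected neighbours that lie
    inside the prior mask and whose intensity is within `tol` of the running
    region mean. Illustrative only."""
    grown = np.zeros_like(prior_mask, dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    total, count = float(volume[seed]), 1
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and prior_mask[nz, ny, nx] and not grown[nz, ny, nx]
                    and abs(volume[nz, ny, nx] - total / count) <= tol):
                grown[nz, ny, nx] = True
                queue.append((nz, ny, nx))
                total += float(volume[nz, ny, nx])
                count += 1
    return grown
```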
Line segment extraction for large scale unorganized point clouds
NASA Astrophysics Data System (ADS)
Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan
2015-04-01
Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
[Automated detection and volumetric segmentation of the spleen in CT scans].
Hammon, M; Dankerl, P; Kramer, M; Seifert, S; Tsymbal, A; Costa, M J; Janka, R; Uder, M; Cavallaro, A
2012-08-01
To introduce automated detection and volumetric segmentation of the spleen in spiral CT scans with the THESEUS-MEDICO software, and to evaluate the consistency between automated volumetry (aV), estimated volume determination (eV) and manual volume segmentation (mV). Retrospective evaluation of a CAD system based on methods such as "marginal space learning" and boosting algorithms. Three consecutive spiral CT scans (thoraco-abdominal; portal-venous contrast agent phase; 1 or 5 mm slice thickness) of 15 consecutive lymphoma patients were included. The eV, computed as 30 cm³ + 0.58 × (width × length × thickness of the spleen), and the mV, serving as the reference standard, were determined by an experienced radiologist. The aV could be performed in all CT scans within 15.2 (± 2.4) seconds. The average splenic volume measured by aV was 268.21 ± 114.67 cm³, compared to 281.58 ± 130.21 cm³ for mV and 268.93 ± 104.60 cm³ for eV. The correlation coefficient was 0.99 (coefficient of determination (R²) = 0.98) for aV and mV, 0.91 (R² = 0.83) for mV and eV, and 0.91 (R² = 0.82) for aV and eV. There was an almost perfect correlation of the changes in splenic volume between two time points measured with the new aV and mV (0.92; R² = 0.84), mV and eV (0.95; R² = 0.91), and aV and eV (0.83; R² = 0.69). The automated detection and volumetric segmentation software rapidly provides an accurate measurement of the splenic volume in CT scans. Knowledge of the splenic volume and its change between two examinations provides valuable clinical information without additional effort for the radiologist. © Georg Thieme Verlag KG Stuttgart · New York.
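The eV estimation formula is simple enough to state as code; the sketch below assumes all linear measurements are in centimetres.

```python
def estimated_spleen_volume(width_cm, length_cm, thickness_cm):
    """Estimated splenic volume (eV) as used in the study above:
    eV = 30 cm³ + 0.58 · (width × length × thickness)."""
    return 30.0 + 0.58 * width_cm * length_cm * thickness_cm

# e.g. estimated_spleen_volume(10, 11, 4) ≈ 285 cm³, in the range of the
# reported mean volumes.
```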
Liu, Hon-Man; Chen, Shan-Kai; Chen, Ya-Fang; Lee, Chung-Wei; Yeh, Lee-Ren
2016-01-01
Purpose: To assess the inter-session reproducibility of automatically segmented MRI-derived measures produced by FreeSurfer in a group of subjects with normal-appearing MR images. Materials and Methods: After retrospectively reviewing a brain MRI database from our institute consisting of 14,758 adults, subjects who had repeat scans and no history of neurodegenerative disorders were selected for morphometry analysis using FreeSurfer. A total of 34 subjects were grouped by MRI scanner model. After automatic segmentation using FreeSurfer, label-wise comparison (involving area, thickness, and volume) was performed on all segmented results. An intraclass correlation coefficient was used to estimate the agreement between sessions. The Wilcoxon signed rank test was used to assess population mean rank differences across sessions. Mean-difference analysis was used to evaluate the difference intervals across scanners. Absolute percent difference was used to estimate the reproducibility errors across the MRI models. The Kruskal-Wallis test was used to determine the across-scanner effect. Results: The agreement in segmentation results for area, volume, and thickness measurements of all segmented anatomical labels was generally higher for the Signa Excite and Verio models than for the Sonata and TrioTim models. Significant rank differences were found across sessions for some labels of different measures. Smaller difference intervals in global volume measurements were noted on images acquired with the Signa Excite and Verio models. For some brain regions, significant MRI model effects were observed on certain segmentation results. Conclusions: Short-term scan-rescan reliability of automatic brain MRI morphometry is feasible in the clinical setting. However, since repeatability of software performance is contingent on the reproducibility of scanner performance, the scanner must be calibrated before conducting such studies or before using such software for retrospective review. PMID:26812647
Automatic liver segmentation from abdominal CT volumes using graph cuts and border marching.
Liao, Miao; Zhao, Yu-Qian; Liu, Xi-Yao; Zeng, Ye-Zhan; Zou, Bei-Ji; Wang, Xiao-Fang; Shih, Frank Y
2017-05-01
Identifying liver regions from abdominal computed tomography (CT) volumes is an important task for computer-aided liver disease diagnosis and surgical planning. This paper presents a fully automatic method for liver segmentation from CT volumes based on graph cuts and border marching. An initial slice is segmented by density peak clustering. Based on pixel- and patch-wise features, an intensity model and a PCA-based regional appearance model are developed to enhance the contrast between liver and background. Then, these models, as well as a location constraint estimated iteratively, are integrated into graph cuts in order to segment the liver in each slice automatically. Finally, a vessel compensation method based on border marching is used to increase the segmentation accuracy. Experiments are conducted on a clinical data set we created and also on the MICCAI2007 Grand Challenge liver data. The results show that the proposed intensity and appearance models and the location constraint are significantly effective for liver recognition, and that undersegmented vessels can be compensated by the border-marching-based method. The segmentation performances in terms of VOE, RVD, ASD, RMSD, and MSD as well as the average running time achieved by our method on the SLIVER07 public database are 5.8 ± 3.2%, -0.1 ± 4.1%, 1.0 ± 0.5 mm, 2.0 ± 1.2 mm, 21.2 ± 9.3 mm, and 4.7 minutes, respectively, which are superior to those of existing methods. The proposed method does not require a time-consuming training process or statistical model construction, and is capable of dealing with complicated shapes and intensity variations successfully. Copyright © 2017 Elsevier B.V. All rights reserved.
Lee, Hyuk; Lee, Sang Kil; Park, Jun Chul; Shin, Sung Kwan; Lee, Yong Chan
2013-01-01
There are heterogeneous subgroups among those with heartburn, and data on these individuals are relatively scant. We aimed to evaluate the effect of acid challenge on the segmental contractions of esophageal smooth muscle in endoscopy-negative patients with normal esophageal acid exposure. High-resolution esophageal manometry (HRM) was performed on 30 endoscopy-negative patients with heartburn accompanied by normal esophageal acid exposure using 10 water swallows followed by 10 acidic pomegranate juice swallows. Patients were classified into functional heartburn (FH) and hypersensitive esophagus (HE) groups based on the results of 24-hr impedance pH testing. HRM topographic plots were analyzed and maximal wave amplitude and pressure volumes were measured for proximal and distal smooth muscle segments. The pressure volume of the distal smooth muscle segment in the HE group measured during acidic swallows was higher than during water swallows (2224.1 ± 68.2 mmHg/cm per s versus 2105.6 ± 66.4 mmHg/cm per s, P = 0.027). A prominent shift in the pressure volume to the distal smooth muscle segment was observed in the HE group compared with the FH group (segmental ratio: 2.72 ± 0.08 versus 2.39 ± 0.07, P = 0.005). Manometric measurements during acidic swallows revealed that this shift was augmented in the HE group. The optimal ratio of pomegranate juice swallowing for discrimination of FH from HE was 2.82, with a sensitivity of 88.9% and a specificity of 100%. Hypercontractile response of distal smooth muscle segment to acid swallowing was more prominent in the HE group than the FH group. © 2012 Journal of Gastroenterology and Hepatology Foundation and Wiley Publishing Asia Pty Ltd.
3D prostate TRUS segmentation using globally optimized volume-preserving prior.
Qiu, Wu; Rajchl, Martin; Guo, Fumin; Sun, Yue; Ukwatta, Eranga; Fenster, Aaron; Yuan, Jing
2014-01-01
An efficient and accurate segmentation of 3D transrectal ultrasound (TRUS) images plays an important role in the planning and treatment of practical 3D TRUS guided prostate biopsy. However, a meaningful segmentation of 3D TRUS images tends to suffer from US speckle, shadowing, missing edges, etc., which makes it a challenging task to delineate the correct prostate boundaries. In this paper, we propose a novel convex optimization based approach to extracting the prostate surface from a given 3D TRUS image while preserving a new global volume-size prior. In particular, we study the proposed combinatorial optimization problem by convex relaxation and introduce its dual continuous max-flow formulation with the new bounded flow conservation constraint, which results in an efficient numerical solver implemented on GPUs. Experimental results using 12 patient 3D TRUS images show that the proposed approach, while preserving the volume-size prior, yielded a mean DSC of 89.5% ± 2.4%, a MAD of 1.4 ± 0.6 mm, a MAXD of 5.2 ± 3.2 mm, and a VD of 7.5% ± 6.2% in about 1 minute, demonstrating advantages in both accuracy and efficiency. In addition, the low standard deviation of the segmentation accuracy shows good reliability of the proposed approach.
Automated segmentation of cardiac visceral fat in low-dose non-contrast chest CT images
NASA Astrophysics Data System (ADS)
Xie, Yiting; Liang, Mingzhu; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.
2015-03-01
Cardiac visceral fat was segmented from low-dose non-contrast chest CT images using a fully automated method. Cardiac visceral fat is defined as the fatty tissues surrounding the heart region, enclosed by the lungs and posterior to the sternum. It is measured by constraining the heart region with an Anatomy Label Map that contains robust segmentations of the lungs and other major organs and estimating the fatty tissue within this region. The algorithm was evaluated on 124 low-dose and 223 standard-dose non-contrast chest CT scans from two public datasets. Based on visual inspection, 343 cases had good cardiac visceral fat segmentation. For quantitative evaluation, manual markings of cardiac visceral fat regions were made in 3 image slices for 45 low-dose scans and the Dice similarity coefficient (DSC) was computed. The automated algorithm achieved an average DSC of 0.93. Cardiac visceral fat volume (CVFV), heart region volume (HRV) and their ratio were computed for each case. The correlation between cardiac visceral fat measurement and coronary artery and aortic calcification was also evaluated. Results indicated the automated algorithm for measuring cardiac visceral fat volume may be an alternative method to the traditional manual assessment of thoracic region fat content in the assessment of cardiovascular disease risk.
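The quantitative evaluation above uses the Dice similarity coefficient and the CVFV/HRV ratio, both of which reduce to voxel counting on binary masks, as in this illustrative sketch (array and function names are assumptions, not the authors' code).

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks, the overlap
    measure used for the quantitative evaluation above."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

def fat_heart_ratio(fat_mask, heart_mask, voxel_volume_ml):
    """Cardiac visceral fat volume (CVFV), heart region volume (HRV), and their ratio."""
    cvfv = fat_mask.sum() * voxel_volume_ml
    hrv = heart_mask.sum() * voxel_volume_ml
    return cvfv, hrv, cvfv / hrv
```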
Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico
Knutilla, R.L.; Veenhuis, J.E.
1994-01-01
Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.
Deeley, MA; Chen, A; Datteri, R; Noble, J; Cmelak, A; Donnelly, EF; Malcolm, A; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Dawant, BM
2013-01-01
Image segmentation has become a vital and often rate limiting step in modern radiotherapy treatment planning. In recent years the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumors in the brain. In this work we tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes and optic nerves using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: STAPLE and a novel implementation of probability maps. The experts were presented with automatic, their own, and their peers’ segmentations from our previous study to edit. We found, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy and improving efficiency with at least 60% reduction in contouring time. In areas where raters performed poorly contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building such as in developing delineation standards, and that both automated methods and even perhaps less sophisticated atlases could improve efficiency, inter-rater variance, and accuracy. PMID:23685866
Lopera, Jorge E; Katabathina, Venkata; Bosworth, Brian; Garg, Deepak; Kroma, Ghazwan; Garza-Berlanga, Andres; Suri, Rajeev; Wholey, Michael
2015-06-01
To determine the clinical significance and potential mechanisms of segmental liver ischemia and infarction following elective creation of a transjugular intrahepatic portosystemic shunt (TIPS). A retrospective review of 374 elective TIPS creations between March 2006 and September 2014 was performed, yielding 77 contrast-enhanced scans for review. Patients with imaging evidence of segmental perfusion defects were identified. Model for End-stage Liver Disease scores, liver volume, and percentage of liver ischemia/infarct were calculated. Clinical outcomes after TIPS creation were reviewed. Ten patients showed segmental liver ischemia/infarction on contrast-enhanced imaging after elective TIPS creation. Associated imaging findings included thrombosis of the posterior division (n = 7) and anterior division (n = 3) of the right portal vein (PV). The right hepatic vein was thrombosed in 5 patients, as was the middle hepatic vein in 3 and the left hepatic vein in 1. One patient had acute thrombosis of the shunt and main PV. Three patients developed acute liver failure: 2 died within 30 days and 1 required emergent liver transplantation. One patient died of acute renal failure 20 days after TIPS creation. A large infarct in a transplant recipient resulted in biloma formation. Five patients survived without additional interventions with follow-up times ranging from 3 months to 5 years. Segmental perfusion defects are not an uncommon imaging finding after elective TIPS creation. Segmental ischemia was associated with thrombosis of major branches of the PVs and often of the hepatic veins. Clinical outcomes varied significantly, from transient problems to acute liver failure with high mortality rates. Copyright © 2015 SIR. Published by Elsevier Inc. All rights reserved.
Valverde, Sergi; Cabezas, Mariano; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Oliver, Arnau; Lladó, Xavier
2017-07-15
In this paper, we present a novel automated method for White Matter (WM) lesion segmentation of Multiple Sclerosis (MS) patient images. Our approach is based on a cascade of two 3D patch-wise convolutional neural networks (CNN). The first network is trained to be more sensitive, revealing possible candidate lesion voxels, while the second network is trained to reduce the number of misclassified voxels coming from the first network. This cascaded CNN architecture tends to learn well from a small (n≤35) set of labeled data of the same MRI contrast, which can be very interesting in practice, given the difficulty of obtaining manual label annotations and the large amount of available unlabeled Magnetic Resonance Imaging (MRI) data. We evaluate the accuracy of the proposed method on the public MS lesion segmentation challenge MICCAI2008 dataset, comparing it with other state-of-the-art MS lesion segmentation tools. Furthermore, the proposed method is also evaluated on two private MS clinical datasets, where its performance is compared with different recent publicly available state-of-the-art MS lesion segmentation methods. At the time of writing this paper, our method is the best ranked approach on the MICCAI2008 challenge, outperforming the other 60 participant methods when using all the available input modalities (T1-w, T2-w and FLAIR), while still in the top rank (3rd position) when using only the T1-w and FLAIR modalities. On clinical MS data, our approach exhibits a significant increase in the accuracy of WM lesion segmentation when compared with the rest of the evaluated methods, also correlating highly (r≥0.97) with the expected lesion volume. Copyright © 2017 Elsevier Inc. All rights reserved.
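The cascade can be pictured with a small sketch. The network below is a deliberately minimal stand-in, not the authors' architecture, and the two-stage inference simply thresholds the first network's output before re-scoring candidates with the second.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Tiny 3D patch classifier; an illustrative stand-in for each cascade stage."""
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def cascade_inference(patches, net1, net2, t1=0.5, t2=0.5):
    """Two-stage inference: stage 1 flags candidate lesion patches with a
    sensitive threshold; stage 2 re-scores only those candidates to prune
    false positives."""
    with torch.no_grad():
        p1 = torch.sigmoid(net1(patches)).squeeze(1)
        candidates = p1 >= t1
        final = torch.zeros_like(p1, dtype=torch.bool)
        if candidates.any():
            p2 = torch.sigmoid(net2(patches[candidates])).squeeze(1)
            final[candidates] = p2 >= t2
    return final
```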
Dense volumetric detection and segmentation of mediastinal lymph nodes in chest CT images
NASA Astrophysics Data System (ADS)
Oda, Hirohisa; Roth, Holger R.; Bhatia, Kanwal K.; Oda, Masahiro; Kitasaka, Takayuki; Iwano, Shingo; Homma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Schnabel, Julia A.; Mori, Kensaku
2018-02-01
We propose a novel mediastinal lymph node detection and segmentation method from chest CT volumes based on fully convolutional networks (FCNs). Most lymph node detection methods are based on filters for blob-like structures, which are not specific to lymph nodes. The 3D U-Net is a recent example of state-of-the-art 3D FCNs. The 3D U-Net can be trained to learn appearances of lymph nodes in order to output lymph node likelihood maps on input CT volumes. However, it is prone to oversegmentation of each lymph node due to the strong data imbalance between lymph nodes and the remaining part of the CT volumes. To moderate the balance of sizes between the target classes, we train the 3D U-Net using not only lymph node annotations but also other anatomical structures (lungs, airways, aortic arches, and pulmonary arteries) that can be extracted robustly in an automated fashion. We applied the proposed method to 45 cases of contrast-enhanced chest CT volumes. Experimental results showed that 95.5% of lymph nodes were detected with 16.3 false positives per CT volume. The segmentation results showed that the proposed method can prevent oversegmentation, achieving an average Dice score of 52.3 +/- 23.1%, compared with 49.2 +/- 23.8% for the baseline method.
Measurement of complex joint trajectories using slice-to-volume 2D/3D registration and cine MR
NASA Astrophysics Data System (ADS)
Bloch, C.; Figl, M.; Gendrin, C.; Weber, C.; Unger, E.; Aldrian, S.; Birkfellner, W.
2010-02-01
A method for studying the in vivo kinematics of complex joints is presented. It is based on automatic fusion of single slice cine MR images capturing the dynamics and a static MR volume. With the joint at rest the 3D scan is taken. In the data the anatomical compartments are identified and segmented resulting in a 3D volume of each individual part. In each of the cine MR images the joint parts are segmented and their pose and position are derived using a 2D/3D slice-to-volume registration to the volumes. The method is tested on the carpal joint because of its complexity and the small but complex motion of its compartments. For a first study a human cadaver hand was scanned and the method was evaluated with artificially generated slice images. Starting from random initial positions of about 5 mm translational and 12° rotational deviation, 70 to 90 % of the registrations converged successfully to a deviation better than 0.5 mm and 5°. First evaluations using real data from a cine MR were promising. The feasibility of the method was demonstrated. However we experienced difficulties with the segmentation of the cine MR images. We therefore plan to examine different parameters for the image acquisition in future studies.
Altazi, Baderaldeen A; Zhang, Geoffrey G; Fernandez, Daniel C; Montejo, Michael E; Hunt, Dylan; Werner, Joan; Biagioli, Matthew C; Moros, Eduardo G
2017-11-01
Site-specific investigations of the role of radiomics in cancer diagnosis and therapy are emerging. We evaluated the reproducibility of radiomic features extracted from 18-Fluorine-fluorodeoxyglucose (18F-FDG) PET images for three parameters: manual versus computer-aided segmentation methods, gray-level discretization, and PET image reconstruction algorithms. Our cohort consisted of pretreatment PET/CT scans from 88 cervical cancer patients. Two board-certified radiation oncologists manually segmented the metabolic tumor volume (MTV1 and MTV2) for each patient. For comparison, we used a graphical-based method to generate semiautomated segmented volumes (GBSV). To address any perturbations in radiomic feature values, we down-sampled the tumor volumes into three gray-levels: 32, 64, and 128, from the original gray-level of 256. Finally, we analyzed the effect on radiomic features of four PET 3D-reconstruction algorithms on images of eight patients: the maximum likelihood-ordered subset expectation maximization (OSEM) iterative reconstruction (IR) method, Fourier rebinning ML-OSEM (FOREIR), FORE-filtered back projection (FOREFBP), and the 3D-Reprojection (3DRP) analytical method. We extracted 79 features for all segmentation methods, gray-levels of down-sampled volumes, and PET reconstruction algorithms. The features were extracted using gray-level co-occurrence matrices (GLCM), gray-level size zone matrices (GLSZM), gray-level run-length matrices (GLRLM), neighborhood gray-tone difference matrices (NGTDM), shape-based features (SF), and intensity histogram features (IHF). We computed the Dice coefficient between each MTV and GBSV to measure segmentation accuracy. Coefficient values close to one indicate high agreement, and values close to zero indicate low agreement. We evaluated the effect on radiomic features by calculating the mean percentage differences (d̄) between feature values measured from each pair of parameter elements (i.e., segmentation methods: MTV1-MTV2, MTV1-GBSV, MTV2-GBSV; gray-levels: 64-32, 64-128, and 64-256; reconstruction algorithms: OSEM-FOREIR, OSEM-FOREFBP, and OSEM-3DRP). We used |d̄| as a measure of radiomic feature reproducibility level, where any feature that scored |d̄| ± SD ≤ |25|% ± 35% was considered reproducible. We used Bland-Altman analysis to evaluate the mean, standard deviation (SD), and upper/lower reproducibility limits (U/LRL) for radiomic features in response to variation in each testing parameter. Furthermore, we proposed U/LRL as a method to classify the level of reproducibility: High: ±1% ≤ U/LRL ≤ ±30%; Intermediate: ±30% < U/LRL ≤ ±45%; Low: ±45% < U/LRL ≤ ±50%. We considered any feature below the low level as nonreproducible (NR). Finally, we calculated the interclass correlation coefficient (ICC) to evaluate the reliability of radiomic feature measurements for each parameter. The segmented volumes of 65 patients (81.3%) scored a Dice coefficient >0.75 for all three volumes. The results revealed a tendency toward higher radiomic feature reproducibility for the segmentation pair MTV1-GBSV than MTV2-GBSV, for the gray-level pairs 64-32 and 64-128 than 64-256, and for the reconstruction algorithm pairs OSEM-FOREIR and OSEM-FOREFBP than OSEM-3DRP. Although the choice of cervical tumor segmentation method, gray-level value, and reconstruction algorithm may affect radiomic features, some features were characterized by high reproducibility through all testing parameters.
The number of radiomic features that showed insensitivity to variations in segmentation methods, gray-level discretization, and reconstruction algorithms was 10 (13%), 4 (5%), and 1 (1%), respectively. These results suggest that a careful analysis of the effects of these parameters is essential prior to any radiomics clinical application. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
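The reproducibility bookkeeping described above (mean percentage differences and the U/LRL-based classification) can be sketched as follows; the normalisation used for d̄ and the handling of the class boundaries are assumptions, since the abstract does not fully specify them.

```python
import numpy as np

def mean_percentage_difference(values_a, values_b):
    """Mean percentage difference d̄ between paired feature measurements,
    computed here relative to the mean of each pair (an assumption about the
    exact normalisation used in the study)."""
    a, b = np.asarray(values_a, float), np.asarray(values_b, float)
    d = 100.0 * (a - b) / ((a + b) / 2.0)
    return d.mean(), d.std()

def reproducibility_level(upper_lrl, lower_lrl):
    """Classify reproducibility from the Bland-Altman limits, following the
    U/LRL bands quoted above."""
    bound = max(abs(upper_lrl), abs(lower_lrl))
    if bound <= 30:
        return "high"
    if bound <= 45:
        return "intermediate"
    if bound <= 50:
        return "low"
    return "non-reproducible"
```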
Classification of microscopy images of Langerhans islets
NASA Astrophysics Data System (ADS)
Švihlík, Jan; Kybic, Jan; Habart, David; Berková, Zuzana; Girman, Peter; Kříž, Jan; Zacharovová, Klára
2014-03-01
Evaluation of images of Langerhans islets is a crucial procedure in planning an islet transplantation, which is a promising diabetes treatment. This paper deals with segmentation of microscopy images of Langerhans islets and evaluation of islet parameters such as area, diameter, or volume (in islet equivalents, IE). For all the available images, the ground truth and the islet parameters were independently evaluated by four medical experts. We use a pixelwise linear classifier (perceptron algorithm) and an SVM (support vector machine) for image segmentation. The volume is estimated based on circle or ellipse fitting to individual islets. The segmentations were compared with the corresponding ground truth. Quantitative islet parameters were also evaluated and compared with the parameters given by the medical experts. We conclude that the accuracy of the presented fully automatic algorithm is fully comparable with that of the medical experts.
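Converting fitted circle diameters into volumes and islet equivalents is a one-line calculation; the sketch below assumes the conventional IE definition (an islet of 150 µm diameter equals one IE), which the abstract does not spell out.

```python
import math

def islet_volume_um3(diameter_um):
    """Volume of an islet modelled as a sphere with the fitted circle's diameter."""
    return math.pi / 6.0 * diameter_um ** 3

def islet_equivalents(diameters_um):
    """Islet equivalents (IE): total volume normalised to a standard islet of
    150 µm diameter (conventional IE definition, assumed here)."""
    standard = islet_volume_um3(150.0)
    return sum(islet_volume_um3(d) for d in diameters_um) / standard
```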
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubey, P., E-mail: purushd@barc.gov.in; Sharma, V. K.; Mitra, S.
Synthetic hydroxyapatite (HAp) is an important material in biomedical engineering due to its excellent biocompatibility and bioactivity. Here we report the dynamics of cetyltrimethylammonium bromide (CTAB) in a HAp composite, prepared by a co-precipitation method, as studied by the quasielastic neutron scattering (QENS) technique. It is found that the observed dynamics involve two time scales, associated with fast torsional motion and segmental motion of the CTAB monomers. In addition to the segmental motion of the hydrogen atoms, a few undergo torsional motion as well. The torsional dynamics was described by a 2-fold jump diffusion model. The segmental dynamics of CTAB has been described assuming the hydrogen atoms undergo diffusion inside a sphere of confined volume. While the diffusivity is found to increase with temperature, the spherical volumes within which the hydrogen atoms diffuse remain almost unchanged.
Model Uncertainty and Test of a Segmented Mirror Telescope
2014-03-01
Optical Telescope project. EOM: equation of motion; FCA: fine control actuator; FCD: Face-Centered Cubic Design; FEA: finite element analysis; FEM: finite ... housed in a dark tent to isolate the telescope from stray light, air currents, or dust and other debris. However, the closed volume is prone to ... is composed of six hexagonal segments that each have six coarse control actuators (CCA) for segment phasing control, three fine control actuators ...
3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head
NASA Astrophysics Data System (ADS)
Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan
2010-03-01
Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).
van 't Klooster, Ronald; de Koning, Patrick J H; Dehnavi, Reza Alizadeh; Tamsma, Jouke T; de Roos, Albert; Reiber, Johan H C; van der Geest, Rob J
2012-01-01
To develop and validate an automated segmentation technique for the detection of the lumen and outer wall boundaries in MR vessel wall studies of the common carotid artery. A new segmentation method was developed using a three-dimensional (3D) deformable vessel model requiring only one single user interaction by combining 3D MR angiography (MRA) and 2D vessel wall images. This vessel model is a 3D cylindrical Non-Uniform Rational B-Spline (NURBS) surface which can be deformed to fit the underlying image data. Image data of 45 subjects was used to validate the method by comparing manual and automatic segmentations. Vessel wall thickness and volume measurements obtained by both methods were compared. Substantial agreement was observed between manual and automatic segmentation; over 85% of the vessel wall contours were segmented successfully. The interclass correlation was 0.690 for the vessel wall thickness and 0.793 for the vessel wall volume. Compared with manual image analysis, the automated method demonstrated improved interobserver agreement and inter-scan reproducibility. Additionally, the proposed automated image analysis approach was substantially faster. This new automated method can reduce analysis time and enhance reproducibility of the quantification of vessel wall dimensions in clinical studies. Copyright © 2011 Wiley Periodicals, Inc.
Yang, Jinzhong; Beadle, Beth M; Garden, Adam S; Schwartz, David L; Aristophanous, Michalis
2015-09-01
To develop an automatic segmentation algorithm integrating imaging information from computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) to delineate target volume in head and neck cancer radiotherapy. Eleven patients with unresectable disease at the tonsil or base of tongue who underwent MRI, CT, and PET/CT within two months before the start of radiotherapy or chemoradiotherapy were recruited for the study. For each patient, PET/CT and T1-weighted contrast MRI scans were first registered to the planning CT using deformable and rigid registration, respectively, to resample the PET and magnetic resonance (MR) images to the planning CT space. A binary mask was manually defined to identify the tumor area. The resampled PET and MR images, the planning CT image, and the binary mask were fed into the automatic segmentation algorithm for target delineation. The algorithm was based on a multichannel Gaussian mixture model and solved using an expectation-maximization algorithm with Markov random fields. To evaluate the algorithm, we compared the multichannel autosegmentation with an autosegmentation method using only PET images. The physician-defined gross tumor volume (GTV) was used as the "ground truth" for quantitative evaluation. The median multichannel segmented GTV of the primary tumor was 15.7 cm(3) (range, 6.6-44.3 cm(3)), while the PET segmented GTV was 10.2 cm(3) (range, 2.8-45.1 cm(3)). The median physician-defined GTV was 22.1 cm(3) (range, 4.2-38.4 cm(3)). The median difference between the multichannel segmented and physician-defined GTVs was -10.7%, not showing a statistically significant difference (p-value = 0.43). However, the median difference between the PET segmented and physician-defined GTVs was -19.2%, showing a statistically significant difference (p-value =0.0037). The median Dice similarity coefficient between the multichannel segmented and physician-defined GTVs was 0.75 (range, 0.55-0.84), and the median sensitivity and positive predictive value between them were 0.76 and 0.81, respectively. The authors developed an automated multimodality segmentation algorithm for tumor volume delineation and validated this algorithm for head and neck cancer radiotherapy. The multichannel segmented GTV agreed well with the physician-defined GTV. The authors expect that their algorithm will improve the accuracy and consistency in target definition for radiotherapy.
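The core of the method above is a per-voxel Gaussian mixture model over the stacked CT, PET and MR intensities, fitted with EM. The sketch below uses scikit-learn's GaussianMixture as a stand-in and omits the Markov random field component; the rule for picking the tumour component (highest mean PET uptake) is an assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def multichannel_gmm_segment(ct, pet, mr, mask, n_classes=2, random_state=0):
    """Multichannel GMM segmentation sketch: per-voxel feature vectors
    [CT, PET, MR] inside the manually defined binary mask are clustered with EM.
    Illustrative only; the published algorithm additionally uses Markov random
    fields, which are not modelled here."""
    features = np.stack([ct[mask], pet[mask], mr[mask]], axis=1)
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                          random_state=random_state).fit(features)
    labels = gmm.predict(features)
    # assume the component with the highest mean PET uptake is the tumour class
    tumour_class = int(np.argmax(gmm.means_[:, 1]))
    segmentation = np.zeros(ct.shape, dtype=bool)
    segmentation[mask] = labels == tumour_class
    return segmentation
```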
National Geocoding Converter File 1 : Volume 3. Montana to Wyoming.
DOT National Transportation Integrated Search
1974-01-01
This file contains a record for each county, county equivalent (as defined by the Census Bureau), SMSA county segment and SPLC county segment in the U.S. A record identifies for an area all major county codes and the associated county aggregate codes