Image segmentation evaluation for very-large datasets
NASA Astrophysics Data System (ADS)
Reeves, Anthony P.; Liu, Shuang; Xie, Yiting
2016-03-01
With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets with documented segmentations for both computer algorithm training and evaluation. Current approaches based on visual inspection and manual marking do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automate the measurement of several important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.
Liu, Ting; Maurovich-Horvat, Pál; Mayrhofer, Thomas; Puchner, Stefan B; Lu, Michael T; Ghemigian, Khristine; Kitslaar, Pieter H; Broersen, Alexander; Pursnani, Amit; Hoffmann, Udo; Ferencik, Maros
2018-02-01
Semi-automated software can provide quantitative assessment of atherosclerotic plaques on coronary CT angiography (CTA). The relationship between established qualitative high-risk plaque features and quantitative plaque measurements has not been studied. We analyzed the association between quantitative plaque measurements and qualitative high-risk plaque features on coronary CTA. We included 260 patients with plaque who underwent coronary CTA in the Rule Out Myocardial Infarction/Ischemia Using Computer Assisted Tomography (ROMICAT) II trial. Quantitative plaque assessment and qualitative plaque characterization were performed on a per coronary segment basis. Quantitative coronary plaque measurements included plaque volume, plaque burden, remodeling index, and diameter stenosis. In qualitative analysis, high-risk plaque was present if positive remodeling, low CT attenuation plaque, napkin-ring sign or spotty calcium were detected. Univariable and multivariable logistic regression analyses were performed to assess the association between quantitative and qualitative high-risk plaque assessment. Among 888 segments with coronary plaque, high-risk plaque was present in 391 (44.0%) segments by qualitative analysis. In quantitative analysis, segments with high-risk plaque had higher total plaque volume, low CT attenuation plaque volume, plaque burden and remodeling index. Quantitatively assessed low CT attenuation plaque volume (odds ratio 1.12 per 1 mm³, 95% CI 1.04-1.21), positive remodeling (odds ratio 1.25 per 0.1, 95% CI 1.10-1.41) and plaque burden (odds ratio 1.53 per 0.1, 95% CI 1.08-2.16) were associated with high-risk plaque. Quantitative coronary plaque characteristics (low CT attenuation plaque volume, positive remodeling and plaque burden) measured by semi-automated software correlated with qualitative assessment of high-risk plaque features.
General Staining and Segmentation Procedures for High Content Imaging and Analysis.
Chambers, Kevin M; Mandavilli, Bhaskar S; Dolman, Nick J; Janes, Michael S
2018-01-01
Automated quantitative fluorescence microscopy, also known as high content imaging (HCI), is a rapidly growing analytical approach in cell biology. Because automated image analysis relies heavily on robust demarcation of cells and subcellular regions, reliable methods for labeling cells are a critical component of the HCI workflow. Labeling of cells for image segmentation is typically performed with fluorescent probes that bind DNA for nuclear-based cell demarcation or with those which react with proteins for image analysis based on whole cell staining. These reagents, along with instrument and software settings, play an important role in the successful segmentation of cells in a population for automated and quantitative image analysis. In this chapter, we describe standard procedures for labeling and image segmentation in both live and fixed cell samples. The chapter also provides troubleshooting guidelines for some of the common problems associated with these aspects of HCI.
NASA Technical Reports Server (NTRS)
Shekhar, R.; Cothren, R. M.; Vince, D. G.; Chandra, S.; Thomas, J. D.; Cornhill, J. F.
1999-01-01
Intravascular ultrasound (IVUS) provides exact anatomy of arteries, allowing accurate quantitative analysis. Automated segmentation of IVUS images is a prerequisite for routine quantitative analyses. We present a new three-dimensional (3D) segmentation technique, called active surface segmentation, which detects luminal and adventitial borders in IVUS pullback examinations of coronary arteries. The technique was validated against expert tracings by computing correlation coefficients (range 0.83-0.97) and Williams index values (range 0.37-0.66). The technique was statistically accurate, robust to image artifacts, and capable of segmenting a large number of images rapidly. Active surface segmentation enabled geometrically accurate 3D reconstruction and visualization of coronary arteries and volumetric measurements.
Du, Yuncheng; Budman, Hector M; Duever, Thomas A
2017-06-01
Accurate and fast quantitative analysis of living cells from fluorescence microscopy images is useful for evaluating experimental outcomes and cell culture protocols. An algorithm is developed in this work to automatically segment and distinguish apoptotic cells from normal cells. The algorithm involves three steps consisting of two segmentation steps and a classification step. The segmentation steps are: (i) a coarse segmentation, combining a range filter with a marching squares method, used as a prefiltering step to provide the approximate positions of cells within a two-dimensional matrix used to store cells' images and a count of the number of cells in a given image; and (ii) a fine segmentation step using the Active Contours Without Edges method, applied to the boundaries of cells identified in the coarse segmentation step. Although this basic two-step approach provides accurate edges when the cells in a given image are sparsely distributed, the occurrence of clusters of cells in high cell density samples requires further processing. Hence, a novel algorithm for clusters is developed to identify the edges of cells within clusters and to approximate their morphological features. Based on the segmentation results, a support vector machine classifier that uses three morphological features: the mean value of pixel intensities in the cellular regions, the variance of pixel intensities in the vicinity of cell boundaries, and the lengths of the boundaries, is developed for distinguishing apoptotic cells from normal cells. The algorithm is shown to be efficient in terms of computational time, quantitative analysis, and differentiation accuracy, as compared with the use of the active contours method without the proposed preliminary coarse segmentation step.
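A minimal sketch of the kind of morphological feature extraction described in the abstract above (mean intensity in the cellular region, intensity variance near the cell boundary, and boundary length). The grid, the 4-neighbour boundary definition, and all names are illustrative assumptions, not the authors' implementation:

```python
def boundary_pixels(mask):
    """Pixels in the mask with at least one 4-neighbour outside it."""
    h, w = len(mask), len(mask[0])
    edge = []
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < h and 0 <= nj < w) or not mask[ni][nj]:
                    edge.append((i, j))
                    break
    return edge

def cell_features(image, mask):
    """Three features per cell: region mean, boundary variance, boundary length."""
    region = [image[i][j] for i in range(len(mask))
              for j in range(len(mask[0])) if mask[i][j]]
    mean_intensity = sum(region) / len(region)
    edge = boundary_pixels(mask)
    vals = [image[i][j] for i, j in edge]
    mu = sum(vals) / len(vals)
    boundary_variance = sum((v - mu) ** 2 for v in vals) / len(vals)
    return mean_intensity, boundary_variance, len(edge)

# Tiny synthetic 5x5 "cell": bright 3x3 blob on a dark background.
img = [[10, 10, 10, 10, 10],
       [10, 90, 95, 90, 10],
       [10, 95, 99, 95, 10],
       [10, 90, 95, 90, 10],
       [10, 10, 10, 10, 10]]
msk = [[v > 50 for v in row] for row in img]
feats = cell_features(img, msk)
print(feats)  # (region mean, boundary variance, boundary length)
```

In the paper these features feed a support vector machine; here only the feature vector itself is computed.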
Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo
2015-12-01
Microelectrode Arrays (MEA) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity in in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs, by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of segmentation by thresholding, the watershed transform, and object classification. The positions of the microelectrodes are obtained from the transmitted light channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis carried out produced information regarding the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs consider only simple qualitative measures. The proposed framework also aims to standardize the image processing and to compute useful quantitative measures for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, as are the implemented neuron segmentation and classification methods (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant for assisting the integrated signal-image study of recorded electrophysiological signals as well as the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments to the image processing parameter estimation.
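The circular Hough transform used above for microelectrode localization can be sketched in a few lines: each edge pixel votes for every candidate centre lying one known radius away, and the accumulator peak gives the disc centre. This is a generic illustration (angle count, shapes, and names are assumptions), not the paper's implementation:

```python
import math

def hough_circle_center(edge_points, radius, shape, n_angles=72):
    """Each edge point votes for candidate centres one radius away."""
    h, w = shape
    acc = [[0] * w for _ in range(h)]
    for (y, x) in edge_points:
        for k in range(n_angles):
            t = 2 * math.pi * k / n_angles
            cy = int(round(y - radius * math.sin(t)))
            cx = int(round(x - radius * math.cos(t)))
            if 0 <= cy < h and 0 <= cx < w:
                acc[cy][cx] += 1
    # Accumulator maximum = most-voted centre.
    best = max((acc[i][j], (i, j)) for i in range(h) for j in range(w))
    return best[1]

# Synthetic ring of radius 5 centred at (10, 10) in a 21x21 image.
edges = set()
for k in range(360):
    t = math.radians(k)
    edges.add((int(round(10 + 5 * math.sin(t))),
               int(round(10 + 5 * math.cos(t)))))
center = hough_circle_center(sorted(edges), 5, (21, 21))
print(center)  # close to the true centre (10, 10)
```

A production version would use an optimized library routine (e.g. scikit-image's `hough_circle`) rather than explicit loops.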
Gross, Colin A; Reddy, Chandan K; Dazzo, Frank B
2010-02-01
Quantitative microscopy and digital image analysis are underutilized in microbial ecology largely because of the laborious task of segmenting foreground object pixels from background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by utilizing algorithms that address color and spatial relationships of user-selected foreground object pixels. Performance of the color segmentation algorithm evaluated on 26 complex micrographs at single pixel resolution had an overall pixel classification accuracy of 99+%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation in order to produce images from which quantitative information can be accurately extracted, thereby gaining new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at http://cme.msu.edu/cmeias/.
This improved computing technology opens new opportunities of imaging applications where discriminating colors really matter most, thereby strengthening quantitative microscopy-based approaches to advance microbial ecology in situ at individual single-cell resolution.
Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D
NASA Astrophysics Data System (ADS)
Bales, Ben; Pollock, Tresa; Petzold, Linda
2017-06-01
Segmentation-based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and subsequent analysis process. The downside is that computing micrograph segmentations with data from morphologically complex microstructures gathered with error-prone detectors is challenging and, if no special care is taken, the artifacts of the segmentation will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two-phase nickel-base superalloy microstructure as a model system, a new methodology for analysis of precipitate shapes using a segmentation-free approach based on the histogram of oriented gradients feature descriptor, a classic tool in image analysis. The benefits of this methodology for analysis of microstructure in two and three dimensions are demonstrated.
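The histogram-of-oriented-gradients descriptor at the heart of the segmentation-free approach above can be sketched as follows: compute grey-level gradients by central differences, bin their orientations weighted by magnitude, and normalize. No binarization of the micrograph is required. The patch size and bin count are illustrative assumptions:

```python
import math

def hog_patch(img, n_bins=8):
    """Orientation histogram of central-difference gradients, magnitude-weighted."""
    h, w = len(img), len(img[0])
    hist = [0.0] * n_bins
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gy = img[i + 1][j] - img[i - 1][j]
            gx = img[i][j + 1] - img[i][j - 1]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue  # flat region contributes nothing
            ang = math.atan2(gy, gx) % (2 * math.pi)
            hist[int(ang / (2 * math.pi) * n_bins) % n_bins] += mag
    s = sum(hist) or 1.0
    return [v / s for v in hist]

# A pure horizontal ramp: every gradient points along +x, so all the
# histogram mass falls into a single orientation bin.
ramp = [[float(j) for j in range(8)] for _ in range(8)]
hist = hog_patch(ramp)
print(hist)
```

Real HOG implementations (e.g. scikit-image's `feature.hog`) add cell/block grouping and block normalization on top of this core idea.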
Microstructural Organization of Elastomeric Polyurethanes with Siloxane-Containing Soft Segments
NASA Astrophysics Data System (ADS)
Choi, Taeyi; Weklser, Jadwiga; Padsalgikar, Ajay; Runt, James
2011-03-01
In the present study, we investigate the microstructure of two series of segmented polyurethanes (PUs) containing siloxane-based soft segments and the same hard segments, the latter synthesized from diphenylmethane diisocyanate and butanediol. The first series is synthesized using a hydroxy-terminated polydimethylsiloxane macrodiol and varying hard segment contents. The second series is derived from an oligomeric diol containing both siloxane and aliphatic carbonate species. Hard domain morphologies were characterized using tapping mode atomic force microscopy and quantitative analysis of hard/soft segment demixing was conducted using small-angle X-ray scattering. The phase transitions of all materials were investigated using DSC and dynamic mechanical analysis, and hydrogen bonding by FTIR spectroscopy.
Process for structural geologic analysis of topography and point data
Eliason, Jay R.; Eliason, Valerie L. C.
1987-01-01
A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vector pairs define planes that represent underlying geologic structure. Point data such as fracture phenomena which can be related to fracture planes in 3-dimensional space can be analyzed to define common plane orientations and locations. The vectors, points, and planes are displayed in various formats for interpretation.
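The coplanar-analysis step above reduces to elementary vector geometry: two valley vector segments define a candidate structural plane whose normal is their cross product, and a further vector lies in that plane when it is orthogonal to the normal. A minimal sketch (illustrative geometry only, not the patented procedure):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def plane_normal(v1, v2):
    """Normal of the plane spanned by two (non-parallel) vector segments."""
    return cross(v1, v2)

def is_coplanar(v, normal, tol=1e-9):
    """A vector lies in the plane iff it is orthogonal to the normal."""
    return abs(dot(v, normal)) < tol

n = plane_normal((1, 0, 0), (0, 1, 0))   # the x-y plane, normal +z
print(n, is_coplanar((3, -2, 0), n), is_coplanar((0, 0, 1), n))
```

In practice the tolerance would be set from the terrain data's noise level rather than a fixed epsilon.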
Mizumura, Sunao; Kumita, Shin-ichiro; Cho, Keiichi; Ishihara, Makiko; Nakajo, Hidenobu; Toba, Masahiro; Kumazaki, Tatsuo
2003-06-01
With visual assessment based on three-dimensional (3D) brain image analysis methods that use a stereotactic brain coordinate system, such as three-dimensional stereotactic surface projections (3D-SSP) and statistical parametric mapping, it is difficult to quantitatively assess anatomical information and the extent of an abnormal region. In this study, we devised a method to quantitatively assess local abnormal findings by segmenting a brain map according to anatomical structure. Through quantitative local abnormality assessment using this method, we studied the characteristics of the distribution of reduced blood flow in cases with dementia of the Alzheimer type (DAT). Using twenty-five cases with DAT (mean age, 68.9 years), all of whom were diagnosed as probable Alzheimer's disease based on NINCDS-ADRDA criteria, we collected I-123 iodoamphetamine SPECT data. A 3D brain map produced with the 3D-SSP program was compared with data from 20 control cases age-matched to the subjects. To study local abnormalities in the 3D images, we divided the whole brain into 24 segments based on anatomical classification. We assessed the extent of the abnormal region in each segment (the proportion of coordinates with a Z-value exceeding the threshold, among all coordinates within the segment) and its severity (the average Z-value of the coordinates exceeding the threshold). This method clarified the orientation and expansion of reduced accumulation by classifying stereotactic brain coordinates according to anatomical structure, and was considered useful for quantitatively grasping distribution abnormalities in the brain and changes in abnormality distribution.
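The per-segment "extent" and "severity" measures defined above are simple to compute once each segment's Z-scores are available: extent is the fraction of coordinates exceeding the threshold, severity the mean Z of those coordinates. A minimal sketch with hypothetical data (variable names and the Z = 2.0 threshold are assumptions):

```python
def extent_and_severity(z_values, threshold=2.0):
    """Extent: fraction of coordinates with Z > threshold.
    Severity: mean Z over those supra-threshold coordinates."""
    above = [z for z in z_values if z > threshold]
    extent = len(above) / len(z_values)
    severity = sum(above) / len(above) if above else 0.0
    return extent, severity

# A hypothetical anatomical segment with 10 coordinates, 4 exceeding Z = 2.0.
segment_z = [0.5, 1.0, 2.5, 3.0, 0.2, 2.2, 1.8, 4.1, 0.0, 1.1]
ext, sev = extent_and_severity(segment_z)
print(ext, sev)  # extent 0.4, severity = mean of [2.5, 3.0, 2.2, 4.1]
```

Applied to each of the 24 anatomical segments, this yields a compact table of where and how strongly accumulation is reduced.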
Quantitative analysis of multiple sclerosis: a feasibility study
NASA Astrophysics Data System (ADS)
Li, Lihong; Li, Xiang; Wei, Xinzhou; Sturm, Deborah; Lu, Hongbing; Liang, Zhengrong
2006-03-01
Multiple Sclerosis (MS) is an inflammatory and demyelinating disorder of the central nervous system with a presumed immune-mediated etiology. For treatment of MS, the measurements of white matter (WM), gray matter (GM), and cerebral spinal fluid (CSF) are often used in conjunction with clinical evaluation to provide a more objective measure of MS burden. In this paper, we apply a new unifying automatic mixture-based algorithm for segmentation of brain tissues to quantitatively analyze MS. The method takes into account the following effects that commonly appear in MR imaging: 1) The MR data is modeled as a stochastic process with an inherent inhomogeneity effect of smoothly varying intensity; 2) A new partial volume (PV) model is built in establishing the maximum a posterior (MAP) segmentation scheme; 3) Noise artifacts are minimized by a priori Markov random field (MRF) penalty indicating neighborhood correlation from tissue mixture. The volumes of brain tissues (WM, GM) and CSF are extracted from the mixture-based segmentation. Experimental results of feasibility studies on quantitative analysis of MS are presented.
Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions.
Akkus, Zeynettin; Galimzianova, Alfiia; Hoogi, Assaf; Rubin, Daniel L; Erickson, Bradley J
2017-08-01
Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.
NASA Astrophysics Data System (ADS)
Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.
2017-09-01
Digital rock physics carries the dogmatic concept of having to segment volume images for quantitative analysis but segmentation rejects huge amounts of signal information. Information that is essential for the analysis of difficult and marginally resolved samples, such as materials with very small features, is lost during segmentation. In X-ray nanotomography reconstructions of Hod chalk we observed partial volume voxels with an abundance that limits segmentation based analysis. Therefore, we investigated the suitability of greyscale analysis for establishing statistical representative elementary volumes (sREV) for the important petrophysical parameters of this type of chalk, namely porosity, specific surface area and diffusive tortuosity, by using volume images without segmenting the datasets. Instead, grey level intensities were transformed to a voxel level porosity estimate using a Gaussian mixture model. A simple model assumption was made that allowed formulating a two point correlation function for surface area estimates using Bayes' theory. The same assumption enables random walk simulations in the presence of severe partial volume effects. The established sREVs illustrate that in compacted chalk, these simulations cannot be performed in binary representations without increasing the resolution of the imaging system to a point where the spatial restrictions of the represented sample volume render the precision of the measurement unacceptable. We illustrate this by analyzing the origins of variance in the quantitative analysis of volume images, i.e. resolution dependence and intersample and intrasample variance. Although we cannot make any claims on the accuracy of the approach, eliminating the segmentation step from the analysis enables comparative studies with higher precision and repeatability.
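The core of the segmentation-free idea above is mapping each grey value to a voxel-level porosity estimate instead of a hard pore/solid label. A heavily simplified sketch: with a two-component mixture whose pore-phase and solid-phase intensity means are known, a voxel's porosity is its clipped linear position between the two means. The fixed class means stand in for the paper's fitted Gaussian mixture model:

```python
def voxel_porosity(grey, mu_pore, mu_solid):
    """1.0 = pure pore, 0.0 = pure solid; partial volumes fall in between."""
    phi = (mu_solid - grey) / (mu_solid - mu_pore)
    return min(1.0, max(0.0, phi))  # clip values outside the two means

# Hypothetical class means from a mixture fit (assumed, not from the paper).
mu_pore, mu_solid = 20.0, 220.0
voxels = [20, 120, 220, 250, 70]
phis = [voxel_porosity(g, mu_pore, mu_solid) for g in voxels]
total_porosity = sum(phis) / len(phis)
print(phis, total_porosity)
```

Summing these fractional porosities over a subvolume gives a greyscale porosity estimate that retains partial-volume information a binary segmentation would discard.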
An interactive method based on the live wire for segmentation of the breast in mammography images.
Zewei, Zhang; Tianyue, Wang; Li, Guo; Tingting, Wang; Lu, Xu
2014-01-01
In order to improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. Gabor filters and the fuzzy C-means (FCM) clustering algorithm are incorporated into the definition of the Live Wire cost function. Using FCM analysis for image edge enhancement, the method eliminates the interference of weak edges and obtains clear segmentation results for breast lumps; the improved Live Wire was evaluated on two cases of breast segmentation data. Compared with traditional image segmentation methods, experimental results show that the method achieves more accurate segmentation of breast lumps and provides a more objective basis for quantitative and qualitative analysis of breast lumps.
NASA Astrophysics Data System (ADS)
Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.
2009-10-01
Due to inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading, or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on the segmentation result. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes: the background and the cells, where the intensity variation within each class is close to zero if there is no shading. We therefore make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm that minimizes the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection, but also for numerical evaluation. Our proposed method should be useful for further quantitative analysis, especially for comparison of protein expression values.
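The multiplicative shading model above (observed = true × shading) can be illustrated on a single image row: fit a smooth shading profile from background pixels (the class whose true intensity should be constant) and divide it out. This is a one-shot simplification of the paper's iterative, segmentation-coupled estimate; the linear shading fit and all names are assumptions:

```python
def correct_shading(row, background_mask):
    """Divide out a linear shading ramp fitted over background pixels.
    The corrected row is recovered up to a global scale factor."""
    xs = [i for i, b in enumerate(background_mask) if b]
    ys = [row[i] for i in xs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    shade = [my + slope * (i - mx) for i in range(len(row))]
    ref = sum(shade) / len(shade)
    return [v * ref / s for v, s in zip(row, shade)]

# Flat background (value 10) under a linear shading ramp, one bright cell.
true_row = [10, 10, 10, 50, 10, 10]
shading = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]
observed = [t * s for t, s in zip(true_row, shading)]
bg = [True, True, True, False, True, True]
corrected = correct_shading(observed, bg)
print([round(v, 2) for v in corrected])
```

After correction the background pixels share one intensity again (zero intra-class variation), which is exactly the criterion the paper's fast estimator minimizes.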
Clinical significance of quantitative analysis of facial nerve enhancement on MRI in Bell's palsy.
Song, Mee Hyun; Kim, Jinna; Jeon, Ju Hyun; Cho, Chang Il; Yoo, Eun Hye; Lee, Won-Sang; Lee, Ho-Ki
2008-11-01
Quantitative analysis of the facial nerve on the lesion side as well as the normal side, which allowed for more accurate measurement of facial nerve enhancement in patients with facial palsy, showed statistically significant correlation with the initial severity of facial nerve inflammation, although little prognostic significance was shown. This study investigated the clinical significance of quantitative measurement of facial nerve enhancement in patients with Bell's palsy by analyzing the enhancement pattern and correlating MRI findings with initial severity of facial palsy and clinical outcome. Facial nerve enhancement was measured quantitatively by using the region of interest on pre- and postcontrast T1-weighted images in 44 patients diagnosed with Bell's palsy. The signal intensity increase on the lesion side was first compared with that of the contralateral side and then correlated with the initial degree of facial palsy and prognosis. The lesion side showed significantly higher signal intensity increase compared with the normal side in all of the segments except for the mastoid segment. Signal intensity increase at the internal auditory canal and labyrinthine segments showed correlation with the initial degree of facial palsy but no significant difference was found between different prognostic groups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, J; Nishikawa, R; Reiser, I
Purpose: Segmentation quality can affect quantitative image feature analysis. The objective of this study is to examine the relationship between computed tomography (CT) image quality, segmentation performance, and quantitative image feature analysis. Methods: A total of 90 pathology-proven breast lesions in 87 dedicated breast CT images were considered. An iterative image reconstruction (IIR) algorithm was used to obtain CT images of varying quality. With different combinations of 4 variables in the algorithm, this study obtained a total of 28 different qualities of CT images. Two imaging tasks/objectives were considered: 1) segmentation and 2) classification of the lesion as benign or malignant. Twenty-three image features were extracted after segmentation using a semi-automated algorithm and 5 of them were selected via a feature selection technique. Logistic regression was trained and tested using leave-one-out cross-validation and its area under the ROC curve (AUC) was recorded. The standard deviation of a homogeneous portion and the gradient of a parenchymal portion of an example breast were used as estimates of image noise and sharpness. The DICE coefficient was computed using a radiologist's drawing on the lesion. Mean DICE and AUC were used as performance metrics for each of the 28 reconstructions. The relationships between segmentation and classification performance under different reconstructions were compared. Distributions (median, 95% confidence interval) of DICE and AUC for each reconstruction were also compared. Results: Moderate correlation (Pearson's rho = 0.43, p-value = 0.02) between DICE and AUC values was found. However, the variation between DICE and AUC values for each reconstruction increased as the image sharpness increased. There was a combination of IIR parameters that resulted in the best segmentation with the worst classification performance.
Conclusion: There are certain images that yield better segmentation or classification performance. The best segmentation result does not necessarily lead to the best classification result. This work has been supported in part by grants from the NIH R21-EB015053. R. Nishikawa receives royalties from Hologic, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, Adriana L.; Varga, Tamas
Branching structures such as lungs, blood vessels and plant roots play a critical role in life. The growth, structure, and function of these branching structures have an immense effect on our lives. Therefore, quantitative size information on such structures in their native environment is invaluable for studying their growth and the effect of the environment on them. X-ray computed tomography (XCT) has been an effective tool for in situ imaging and analysis of branching structures. We developed a cost-free tool that approximates the surface and volume of branching structures. Our methodology of noninvasive imaging, segmentation and extraction of quantitative information is demonstrated through the analysis of a plant root in its soil medium from 3D tomography data. XCT data collected on a grass specimen were used to visualize its root structure. A suite of open-source software was employed to segment the root from the soil and determine its isosurface, which was used to calculate its volume and surface area. This methodology of processing 3D data is applicable to other branching structures even when the structure of interest is of similar X-ray attenuation to its environment and difficulties arise with sample segmentation.
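Once a branching structure is segmented from tomography data, its volume and surface can be approximated directly from the voxel set: volume is the voxel count times the voxel volume, and surface area can be estimated by counting exposed voxel faces. The isosurface approach in the paper yields a smoother surface; this coarse voxel-face version is an illustrative stand-in:

```python
def volume_and_surface(voxels, voxel_size=1.0):
    """Volume = voxel count x voxel volume; surface = exposed face count x face area."""
    vox = set(voxels)
    volume = len(vox) * voxel_size ** 3
    faces = 0
    for (x, y, z) in vox:
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            if (x + dx, y + dy, z + dz) not in vox:
                faces += 1  # this face borders empty space
    return volume, faces * voxel_size ** 2

# A solid 3x3x3 cube: 27 voxels, 6 faces of 3x3 exposed facets each.
cube = [(x, y, z) for x in range(3) for y in range(3) for z in range(3)]
vol, surf = volume_and_surface(cube)
print(vol, surf)
```

For smoother estimates one would extract an isosurface mesh (e.g. with marching cubes, as open-source tools do) and sum the triangle areas instead.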
Wang, Ying; Chen, Yajuan; Ding, Liping; Zhang, Jiewei; Wei, Jianhua; Wang, Hongzhi
2016-01-01
The vertical segments of Populus stems are an ideal experimental system for analyzing the gene expression patterns involved in primary and secondary growth during wood formation. Suitable internal control genes are indispensable to quantitative real time PCR (qRT-PCR) assays of gene expression. In this study, the expression stability of eight candidate reference genes was evaluated in a series of vertical stem segments of Populus tomentosa. Analysis through software packages geNorm, NormFinder and BestKeeper showed that genes ribosomal protein (RP) and tubulin beta (TUBB) were the most unstable across the developmental stages of P. tomentosa stems, and the combination of the three reference genes, eukaryotic translation initiation factor 5A (eIF5A), Actin (ACT6) and elongation factor 1-beta (EF1-beta) can provide accurate and reliable normalization of qRT-PCR analysis for target gene expression in stem segments undergoing primary and secondary growth in P. tomentosa. These results provide crucial information for transcriptional analysis in the P. tomentosa stem, which may help to improve the quality of gene expression data in these vertical stem segments, which constitute an excellent plant system for the study of wood formation.
NASA Astrophysics Data System (ADS)
Guan, Yihong; Luo, Yatao; Yang, Tao; Qiu, Lei; Li, Junchang
2012-01-01
The spatial information encoded by a Markov random field image model can be used in image segmentation to effectively remove noise and obtain more accurate segmentation results. Based on the fuzziness and clustering of pixel grayscale information, we find the clustering centers of the different tissues and background in a medical image using the fuzzy C-means clustering method. We then find the threshold points for multi-threshold segmentation using a two-dimensional histogram method, and segment the image accordingly. Multivariate information is fused using the Dempster-Shafer evidence theory to obtain the fused segmentation. This paper combines these three theories to propose a new human brain image segmentation method. Experimental results show that the segmentation result is more consistent with human vision, and is of vital significance for accurate analysis and application of tissues.
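The fuzzy C-means step above, finding grey-level cluster centres for tissues and background, can be sketched in one dimension: alternate between soft membership assignment and membership-weighted centre updates (fuzzifier m = 2). This is a generic simplified stand-in, not the paper's implementation:

```python
def fcm_1d(values, iters=50):
    """Two-cluster 1-D fuzzy C-means (m = 2), returning sorted centres."""
    centers = [min(values), max(values)]  # simple k = 2 initialisation
    for _ in range(iters):
        # membership u[i][c] proportional to 1 / distance(i, c)^2 for m = 2
        u = []
        for v in values:
            d = [max(abs(v - c), 1e-12) ** 2 for c in centers]
            inv = [1.0 / x for x in d]
            s = sum(inv)
            u.append([x / s for x in inv])
        # centre update: membership^m weighted mean of the data
        centers = [
            sum(u[i][c] ** 2 * values[i] for i in range(len(values))) /
            sum(u[i][c] ** 2 for i in range(len(values)))
            for c in range(2)
        ]
    return sorted(centers)

# Two well-separated grey-level populations (e.g. background vs tissue).
greys = [10, 11, 9, 12, 200, 198, 202, 201]
centres = fcm_1d(greys)
print(centres)  # one centre near each population mean
```

A threshold placed between the two centres then separates the classes; the paper extends this with a 2D histogram and Dempster-Shafer fusion.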
NASA Astrophysics Data System (ADS)
Danala, Gopichandh; Wang, Yunzhi; Thai, Theresa; Gunderson, Camille C.; Moxley, Katherine M.; Moore, Kathleen; Mannel, Robert S.; Cheng, Samuel; Liu, Hong; Zheng, Bin; Qiu, Yuchen
2017-02-01
Accurate tumor segmentation is a critical step in the development of computer-aided detection (CAD) based quantitative image analysis schemes for early-stage prognostic evaluation of ovarian cancer patients. The purpose of this investigation is to assess the efficacy of several different methods to segment the metastatic tumors occurring in different organs of ovarian cancer patients. In this study, we developed a segmentation scheme consisting of eight different algorithms, which can be divided into three groups: 1) region growth based methods; 2) Canny operator based methods; and 3) partial differential equation (PDE) based methods. A total of 138 tumors acquired from 30 ovarian cancer patients were used to test the performance of these eight segmentation algorithms. The results demonstrate that each of the tested tumors can be successfully segmented by at least one of the eight algorithms without manual boundary correction. Furthermore, the modified region growth, classical Canny detector, fast marching, and threshold level set algorithms are suggested for the future development of ovarian cancer related CAD schemes. This study may provide a meaningful reference for developing novel quantitative image feature analysis schemes to more accurately predict the response of ovarian cancer patients to chemotherapy at an early stage.
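The region-growth family of methods named above can be sketched as a breadth-first search from a seed pixel, absorbing 4-connected neighbours whose intensity stays within a tolerance of the running region mean. This is an illustrative generic version, not the authors' modified algorithm; the tolerance and names are assumptions:

```python
from collections import deque

def region_grow(img, seed, tol=20):
    """Grow a 4-connected region from a seed by intensity similarity."""
    h, w = len(img), len(img[0])
    region = {seed}
    total = img[seed[0]][seed[1]]  # running sum for the region mean
    q = deque([seed])
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < h and 0 <= nj < w and (ni, nj) not in region
                    and abs(img[ni][nj] - total / len(region)) <= tol):
                region.add((ni, nj))
                total += img[ni][nj]
                q.append((ni, nj))
    return region

# Bright 2x2 "tumor" (200) in a dark 4x4 background (10).
img = [[10, 10, 10, 10],
       [10, 200, 200, 10],
       [10, 200, 200, 10],
       [10, 10, 10, 10]]
grown = region_grow(img, (1, 1))
print(sorted(grown))  # exactly the four bright pixels
```

The other two method families (Canny-based and PDE-based) replace this similarity criterion with edge evidence and evolving contours, respectively.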
Boucheron, Laura E
2013-07-16
Quantitative object and spatial arrangement-level analysis of tissue are detailed using expert (pathologist) input to guide the classification process. A two-step method is disclosed for imaging tissue, by classifying one or more biological materials, e.g., nuclei, cytoplasm, and stroma, in the tissue into one or more identified classes on a pixel-by-pixel basis, and segmenting the identified classes to agglomerate one or more sets of identified pixels into segmented regions. Typically, the one or more biological materials comprise nuclear material, cytoplasm material, and stromal material. The method further allows a user to mark up the image subsequent to the classification to re-classify said materials. The markup is performed via a graphical user interface to edit designated regions in the image.
Automatic quantitative analysis of in-stent restenosis using FD-OCT in vivo intra-arterial imaging.
Mandelias, Kostas; Tsantis, Stavros; Spiliopoulos, Stavros; Katsakiori, Paraskevi F; Karnabatidis, Dimitris; Nikiforidis, George C; Kagadis, George C
2013-06-01
A new segmentation technique is implemented for automatic lumen area extraction and stent strut detection in intravascular optical coherence tomography (OCT) images for the purpose of quantitative analysis of in-stent restenosis (ISR). In addition, a user-friendly graphical user interface (GUI) is developed based on the employed algorithm toward clinical use. Four clinical datasets of frequency-domain OCT scans of the human femoral artery were analyzed. First, a segmentation method based on fuzzy C means (FCM) clustering and wavelet transform (WT) was applied toward inner luminal contour extraction. Subsequently, stent strut positions were detected by utilizing metrics derived from the local maxima of the wavelet transform into the FCM membership function. The inner lumen contour and the position of stent strut were extracted with high precision. Compared to manual segmentation by an expert physician, the automatic lumen contour delineation had an average overlap value of 0.917 ± 0.065 for all OCT images included in the study. The strut detection procedure achieved an overall accuracy of 93.80% and successfully identified 9.57 ± 0.5 struts for every OCT image. Processing time was confined to approximately 2.5 s per OCT frame. A new fast and robust automatic segmentation technique combining FCM and WT for lumen border extraction and strut detection in intravascular OCT images was designed and implemented. The proposed algorithm integrated in a GUI represents a step forward toward the employment of automated quantitative analysis of ISR in clinical practice.
Parkinson, Craig; Foley, Kieran; Whybra, Philip; Hills, Robert; Roberts, Ashley; Marshall, Chris; Staffurth, John; Spezi, Emiliano
2018-04-11
Prognosis in oesophageal cancer (OC) is poor. The 5-year overall survival (OS) rate is approximately 15%. Personalised medicine is hoped to increase the 5- and 10-year OS rates. Quantitative analysis of PET is gaining substantial interest in prognostic research but requires the accurate definition of the metabolic tumour volume. This study compares prognostic models developed in the same patient cohort using individual PET segmentation algorithms and assesses the impact on patient risk stratification. Consecutive patients (n = 427) with biopsy-proven OC were included in final analysis. All patients were staged with PET/CT between September 2010 and July 2016. Nine automatic PET segmentation methods were studied. All tumour contours were subjectively analysed for accuracy, and segmentation methods with < 90% accuracy were excluded. Standardised image features were calculated, and a series of prognostic models were developed using identical clinical data. The proportion of patients changing risk classification group were calculated. Out of nine PET segmentation methods studied, clustering means (KM2), general clustering means (GCM3), adaptive thresholding (AT) and watershed thresholding (WT) methods were included for analysis. Known clinical prognostic factors (age, treatment and staging) were significant in all of the developed prognostic models. AT and KM2 segmentation methods developed identical prognostic models. Patient risk stratification was dependent on the segmentation method used to develop the prognostic model with up to 73 patients (17.1%) changing risk stratification group. Prognostic models incorporating quantitative image features are dependent on the method used to delineate the primary tumour. This has a subsequent effect on risk stratification, with patients changing groups depending on the image segmentation method used.
SpArcFiRe: Scalable automated detection of spiral galaxy arm segments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Darren R.; Hayes, Wayne B., E-mail: drdavis@uci.edu, E-mail: whayes@uci.edu
Given an approximately centered image of a spiral galaxy, we describe an entirely automated method that finds, centers, and sizes the galaxy (possibly masking nearby stars and other objects if necessary in order to isolate the galaxy itself) and then automatically extracts structural information about the spiral arms. For each arm segment found, we list the pixels in that segment, allowing image analysis on a per-arm-segment basis. We also perform a least-squares fit of a logarithmic spiral arc to the pixels in that segment, giving per-arc parameters such as the pitch angle, arm segment length, location, etc. The algorithm takes about one minute per galaxy and can easily be scaled using parallelism. We have run it on all ∼644,000 Sloan objects that are larger than 40 pixels across and classified as 'galaxies'. We find a very good correlation between our quantitative description of spiral structure and the qualitative description provided by Galaxy Zoo humans. Our objective, quantitative measures of structure demonstrate the difficulty in defining exactly what constitutes a spiral 'arm', leading us to prefer the term 'arm segment'. We find that pitch angle often varies significantly from segment to segment in a single spiral galaxy, making it difficult to define the pitch angle for a single galaxy. We demonstrate how our new database of arm segments can be queried to find galaxies satisfying specific quantitative visual criteria. For example, even though our code does not explicitly find rings, a good surrogate is to look for galaxies having one long, low-pitch-angle arm, which is how our code views ring galaxies. SpArcFiRe is available at http://sparcfire.ics.uci.edu.
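The per-segment logarithmic-spiral fit reduces to linear least squares after taking the logarithm of the radius: with r = r0·exp(bθ), ln r is linear in θ, and the pitch angle is arctan(b). A minimal sketch of that idea on noise-free synthetic data (not SpArcFiRe's code):

```python
import numpy as np

def fit_log_spiral(theta, r):
    """Least-squares fit of r = r0 * exp(b * theta); pitch angle = arctan(b)."""
    b, ln_r0 = np.polyfit(theta, np.log(r), 1)
    return np.exp(ln_r0), b, np.degrees(np.arctan(b))

# Synthetic arm-segment pixels in polar coordinates.
theta = np.linspace(0.0, 2.0 * np.pi, 200)
r = 5.0 * np.exp(0.3 * theta)
r0, b, pitch = fit_log_spiral(theta, r)
print(round(r0, 3), round(b, 3), round(pitch, 1))  # 5.0 0.3 16.7
```

Fitting each arm segment separately is what makes the segment-to-segment pitch-angle variation noted above directly measurable.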
Ha, Richard; Mema, Eralda; Guo, Xiaotao; Mango, Victoria; Desperito, Elise; Ha, Jason; Wynn, Ralph; Zhao, Binsheng
2016-04-01
The amount of fibroglandular tissue (FGT) has been linked to breast cancer risk based on mammographic density studies. Currently, the qualitative assessment of FGT on mammogram (MG) and magnetic resonance imaging (MRI) is prone to intra- and inter-observer variability. The purpose of this study is to develop an objective quantitative FGT measurement tool for breast MRI that could provide significant clinical value. An IRB-approved study was performed. Sixty breast MRI cases with qualitative assessment of mammographic breast density and MRI FGT were randomly selected for quantitative analysis from routine breast MRIs performed at our institution from 1/2013 to 12/2014. Blinded to the qualitative data, whole breast and FGT contours were delineated on T1-weighted pre-contrast sagittal images using an in-house, proprietary segmentation algorithm which combines region-based active contours and a level set approach. FGT (%) was calculated as [segmented volume of FGT (mm³)/segmented volume of whole breast (mm³)] × 100. Statistical correlation analysis was performed between quantified FGT (%) on MRI and qualitative assessments of mammographic breast density and MRI FGT. There was a significant positive correlation between quantitative MRI FGT assessment and qualitative MRI FGT (r=0.809, n=60, P<0.001) and mammographic density assessment (r=0.805, n=60, P<0.001). There was a significant correlation between qualitative MRI FGT assessment and mammographic density assessment (r=0.725, n=60, P<0.001). The four qualitative assessment categories of FGT correlated with calculated mean quantitative FGT (%) of 4.61% (95% CI, 0-12.3%), 8.74% (7.3-10.2%), 18.1% (15.1-21.1%), and 37.4% (29.5-45.3%). Quantitative measures of FGT (%) were computed with data derived from breast MRI and correlated significantly with conventional qualitative assessments.
This quantitative technique may prove to be a valuable tool in clinical use by providing computer generated standardized measurements with limited intra or inter-observer variability.
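The FGT (%) formula above is a simple ratio of segmented volumes. The helper below computes it from binary masks; the masks, shapes, and voxel size are invented for illustration, not derived from the study's data.

```python
import numpy as np

def fgt_percent(fgt_mask, breast_mask, voxel_mm3=1.0):
    """FGT (%) = segmented FGT volume / whole-breast volume * 100."""
    return 100.0 * fgt_mask.sum() * voxel_mm3 / (breast_mask.sum() * voxel_mm3)

breast = np.zeros((10, 10, 10), dtype=bool)
breast[2:8, 2:8, 2:8] = True   # 216 voxels of whole breast
fgt = np.zeros_like(breast)
fgt[4:6, 4:6, 2:8] = True      # 24 voxels of FGT inside it
print(round(fgt_percent(fgt, breast), 2))  # 11.11
```

Because the ratio is dimensionless, the voxel volume cancels; it is kept as a parameter only to make the volumetric interpretation explicit.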
Rodenacker, K; Aubele, M; Hutzler, P; Adiga, P S
1997-01-01
In molecular pathology, numerical chromosome aberrations have been found to be decisive for the prognosis of malignancy in tumours. The existence of such aberrations can be detected by interphase fluorescence in situ hybridization (FISH). The gain or loss of certain base sequences in the deoxyribonucleic acid (DNA) can be estimated by counting the number of FISH signals per cell nucleus. The quantitative evaluation of such events is a necessary condition for a prospective use in diagnostic pathology. To avoid occlusions of signals, the cell nucleus has to be analyzed in three dimensions. Confocal laser scanning microscopy is the means to obtain series of optical thin sections from fluorescence stained or marked material to fulfill the conditions mentioned above. A graphical user interface (GUI) to a software package for display, inspection, counting, and (semi-)automatic analysis of 3-D images for pathologists is outlined, including the underlying methods of 3-D image interaction and segmentation developed. The preparative methods are briefly described. Main emphasis is given to the methodical questions of computer-aided analysis of large 3-D image data sets for pathologists. Several automated analysis steps can be performed for segmentation and succeeding quantification. However, tumour material, in contrast to isolated or cultured cells, is difficult even for visual inspection. For the present, a fully automated digital image analysis of 3-D data is not in sight. A semi-automatic segmentation method is thus presented here.
van Pelt, Roy; Nguyen, Huy; ter Haar Romeny, Bart; Vilanova, Anna
2012-03-01
Quantitative analysis of vascular blood flow, acquired by phase-contrast MRI, requires accurate segmentation of the vessel lumen. In clinical practice, 2D-cine velocity-encoded slices are inspected, and the lumen is segmented manually. However, segmentation of time-resolved volumetric blood-flow measurements is a tedious and time-consuming task requiring automation. Automated segmentation of large thoracic arteries, based solely on the 3D-cine phase-contrast MRI (PC-MRI) blood-flow data, was performed. An active surface model, which is fast and topologically stable, was used. The active surface model requires an initial surface approximating the desired segmentation. A method to generate this surface was developed based on a voxel-wise temporal maximum of blood-flow velocities. The active surface model balances forces based on the surface structure and image features derived from the blood-flow data. The segmentation results were validated using volunteer studies, including time-resolved 3D and 2D blood-flow data. The segmented surface was intersected with a velocity-encoded PC-MRI slice, resulting in a cross-sectional contour of the lumen. These cross-sections were compared to reference contours that were manually delineated on high-resolution 2D-cine slices. The automated approach closely approximates the manual blood-flow segmentations, with error distances on the order of the voxel size. The initial surface provides a close approximation of the desired luminal geometry. This improves the convergence time of the active surface and facilitates parametrization. An active surface approach for vessel lumen segmentation was developed, suitable for quantitative analysis of 3D-cine PC-MRI blood-flow data. As opposed to prior thresholding and level-set approaches, the active surface model is topologically stable. A method to generate an initial approximate surface was developed, and various features that influence the segmentation model were evaluated.
The active surface segmentation results were shown to closely approximate manual segmentations.
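The initial-surface feature described above, a voxel-wise temporal maximum of blood-flow speed, is straightforward to compute from the velocity components. The array layout below is an assumption for illustration, not the authors' data format:

```python
import numpy as np

def temporal_max_speed(flow):
    """Voxel-wise temporal maximum of flow speed; `flow` is assumed to have
    shape (time, z, y, x, 3) with the three velocity components last."""
    speed = np.linalg.norm(flow, axis=-1)  # (time, z, y, x)
    return speed.max(axis=0)               # (z, y, x)

rng = np.random.default_rng(0)
flow = rng.normal(size=(20, 4, 8, 8, 3))  # invented 4D PC-MRI velocity field
tmax = temporal_max_speed(flow)
print(tmax.shape)  # (4, 8, 8)
```

Voxels inside the lumen carry high peak speeds at some cardiac phase, so thresholding this map yields a rough luminal volume from which the initial surface can be extracted.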
Analysis of objects in binary images. M.S. Thesis - Old Dominion Univ.
NASA Technical Reports Server (NTRS)
Leonard, Desiree M.
1991-01-01
Digital image processing techniques are typically used to produce improved digital images through the application of successive enhancement techniques to a given image or to generate quantitative data about the objects within that image. In support of and to assist researchers in a wide range of disciplines, e.g., interferometry, heavy rain effects on aerodynamics, and structure recognition research, it is often desirable to count objects in an image and compute their geometric properties. Therefore, an image analysis application package, focusing on a subset of image analysis techniques used for object recognition in binary images, was developed. This report describes the techniques and algorithms utilized in three main phases of the application and are categorized as: image segmentation, object recognition, and quantitative analysis. Appendices provide supplemental formulas for the algorithms employed as well as examples and results from the various image segmentation techniques and the object recognition algorithm implemented.
Loarie, Thomas M; Applegate, David; Kuenne, Christopher B; Choi, Lawrence J; Horowitz, Diane P
2003-01-01
Market segmentation analysis identifies discrete segments of the population whose beliefs are consistent with exhibited behaviors such as purchase choice. This study applies market segmentation analysis to low myopes (-1 to -3 D with less than 1 D cylinder) in their consideration and choice of a refractive surgery procedure to discover opportunities within the market. A quantitative survey based on focus group research was sent to a demographically balanced sample of myopes using contact lenses and/or glasses. A variable reduction process followed by a clustering analysis was used to discover discrete belief-based segments. The resulting segments were validated both analytically and through in-market testing. Discontented individuals who wear contact lenses are the primary target for vision correction surgery. However, 81% of the target group is apprehensive about laser in situ keratomileusis (LASIK). They are nervous about the procedure and strongly desire reversibility and exchangeability. There exists a large untapped opportunity for vision correction surgery within the low myope population. Market segmentation analysis helped determine how to best meet this opportunity through repositioning existing procedures or developing new vision correction technology, and could also be applied to identify opportunities in other vision correction populations.
Clinical application of a light-pen computer system for quantitative angiography
NASA Technical Reports Server (NTRS)
Alderman, E. L.
1975-01-01
The paper describes an angiographic analysis system which uses a video disk for recording and playback, a light-pen for data input, minicomputer processing, and an electrostatic printer/plotter for hardcopy output. The method is applied to quantitative analysis of ventricular volumes, sequential ventriculography for assessment of physiologic and pharmacologic interventions, analysis of instantaneous time sequence of ventricular systolic and diastolic events, and quantitation of segmental abnormalities. The system is shown to provide the capability for computation of ventricular volumes and other measurements from operator-defined margins by greatly reducing the tedium and errors associated with manual planimetry.
NASA Astrophysics Data System (ADS)
Varga, T.; McKinney, A. L.; Bingham, E.; Handakumbura, P. P.; Jansson, C.
2017-12-01
Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere, as well as in processes with important implications to farming and thus human food supply. X-ray computed tomography (XCT) has been proven to be an effective tool for non-invasive root imaging and analysis. Selected Brachypodium distachyon phenotypes were grown in both natural and artificial soil mixes. The specimens were imaged by XCT, and the root architectures were extracted from the data using three different software-based methods; RooTrak, ImageJ-based WEKA segmentation, and the segmentation feature in VG Studio MAX. The 3D root image was successfully segmented at 30 µm resolution by all three methods. In this presentation, ease of segmentation and the accuracy of the extracted quantitative information (root volume and surface area) will be compared between soil types and segmentation methods. The best route to easy and accurate segmentation and root analysis will be highlighted.
Klapsing, Philipp; Herrmann, Peter; Quintel, Michael; Moerer, Onnen
2017-12-01
Quantitative lung computed tomographic (CT) analysis yields objective data regarding lung aeration but is currently not used in clinical routine primarily because of the labor-intensive process of manual CT segmentation. Automatic lung segmentation could help to shorten processing times significantly. In this study, we assessed bias and precision of lung CT analysis using automatic segmentation compared with manual segmentation. In this monocentric clinical study, 10 mechanically ventilated patients with mild to moderate acute respiratory distress syndrome were included who had received lung CT scans at 5- and 45-mbar airway pressure during a prior study. Lung segmentations were performed both automatically using a computerized algorithm and manually. Automatic segmentation yielded similar lung volumes compared with manual segmentation with clinically minor differences both at 5 and 45 mbar. At 5 mbar, results were as follows: overdistended lung 49.58 mL (manual, SD 77.37 mL) and 50.41 mL (automatic, SD 77.3 mL), P = .028; normally aerated lung 2142.17 mL (manual, SD 1131.48 mL) and 2156.68 mL (automatic, SD 1134.53 mL), P = .1038; and poorly aerated lung 631.68 mL (manual, SD 196.76 mL) and 646.32 mL (automatic, SD 169.63 mL), P = .3794. At 45 mbar, values were as follows: overdistended lung 612.85 mL (manual, SD 449.55 mL) and 615.49 mL (automatic, SD 451.03 mL), P = .078; normally aerated lung 3890.12 mL (manual, SD 1134.14 mL) and 3907.65 mL (automatic, SD 1133.62 mL), P = .027; and poorly aerated lung 413.35 mL (manual, SD 57.66 mL) and 469.58 mL (automatic, SD 70.14 mL), P = .007. Bland-Altman analyses revealed the following mean biases and limits of agreement at 5 mbar for automatic vs manual segmentation: overdistended lung +0.848 mL (±2.062 mL), normally aerated +14.51 mL (±49.71 mL), and poorly aerated +14.64 mL (±98.16 mL). At 45 mbar, results were as follows: overdistended +2.639 mL (±8.231 mL), normally aerated 17.53 mL (±41.41 mL), and poorly aerated 56.23 mL (±100.67 mL).
Automatic single CT image and whole lung segmentation were faster than manual segmentation (0.17 vs 125.35 seconds [P < .0001] and 10.46 vs 7739.45 seconds [P < .0001]). Automatic lung CT segmentation allows fast analysis of aerated lung regions. A reduction of processing times by more than 99% allows the use of quantitative CT at the bedside. Copyright © 2016 Elsevier Inc. All rights reserved.
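The Bland-Altman quantities reported above are the mean of the automatic-minus-manual differences (the bias) and ±1.96 times their standard deviation (the limits of agreement). A minimal sketch with invented paired volumes:

```python
import numpy as np

def bland_altman(manual, automatic):
    """Mean bias and 95% limits of agreement (1.96 * SD of the differences)."""
    diff = np.asarray(automatic, dtype=float) - np.asarray(manual, dtype=float)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, loa

# Invented paired volumes (mL), automatic slightly above manual.
manual = np.array([49.0, 52.0, 48.5, 50.2, 47.8])
automatic = np.array([49.8, 52.9, 49.1, 51.0, 48.9])
bias, loa = bland_altman(manual, automatic)
print(round(bias, 2))  # 0.84
```

The agreement interval is then reported as bias ± loa, matching the "+0.848 mL (±2.062 mL)" style used in the abstract.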
SuperSegger: robust image segmentation, analysis and lineage tracking of bacterial cells.
Stylianidou, Stella; Brennan, Connor; Nissen, Silas B; Kuwada, Nathan J; Wiggins, Paul A
2016-11-01
Many quantitative cell biology questions require fast yet reliable automated image segmentation to identify and link cells from frame-to-frame, and characterize the cell morphology and fluorescence. We present SuperSegger, an automated MATLAB-based image processing package well-suited to quantitative analysis of high-throughput live-cell fluorescence microscopy of bacterial cells. SuperSegger incorporates machine-learning algorithms to optimize cellular boundaries and automated error resolution to reliably link cells from frame-to-frame. Unlike existing packages, it can reliably segment microcolonies with many cells, facilitating the analysis of cell-cycle dynamics in bacteria as well as cell-contact mediated phenomena. This package has a range of built-in capabilities for characterizing bacterial cells, including the identification of cell division events, mother, daughter and neighbouring cells, and computing statistics on cellular fluorescence, the location and intensity of fluorescent foci. SuperSegger provides a variety of postprocessing data visualization tools for single cell and population level analysis, such as histograms, kymographs, frame mosaics, movies and consensus images. Finally, we demonstrate the power of the package by analyzing lag phase growth with single cell resolution. © 2016 John Wiley & Sons Ltd.
Fuzzy pulmonary vessel segmentation in contrast enhanced CT data
NASA Astrophysics Data System (ADS)
Kaftan, Jens N.; Kiraly, Atilla P.; Bakai, Annemarie; Das, Marco; Novak, Carol L.; Aach, Til
2008-03-01
Pulmonary vascular tree segmentation has numerous applications in medical imaging and computer-aided diagnosis (CAD), including detection and visualization of pulmonary emboli (PE), improved lung nodule detection, and quantitative vessel analysis. We present a novel approach to pulmonary vessel segmentation based on a fuzzy segmentation concept, combining the strengths of both threshold and seed point based methods. The lungs of the original image are first segmented and a threshold-based approach identifies core vessel components with a high specificity. These components are then used to automatically identify reliable seed points for a fuzzy seed point based segmentation method, namely fuzzy connectedness. The output of the method consists of the probability of each voxel belonging to the vascular tree. Hence, our method provides the possibility to adjust the sensitivity/specificity of the segmentation result a posteriori according to application-specific requirements, through definition of a minimum vessel-probability required to classify a voxel as belonging to the vascular tree. The method has been evaluated on contrast-enhanced thoracic CT scans from clinical PE cases and demonstrates overall promising results. For quantitative validation we compare the segmentation results to randomly selected, semi-automatically segmented sub-volumes and present the resulting receiver operating characteristic (ROC) curves. Although we focus on contrast enhanced chest CT data, the method can be generalized to other regions of the body as well as to different imaging modalities.
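The a-posteriori sensitivity/specificity trade-off described above amounts to thresholding the per-voxel vessel probability at different cutoffs, each cutoff yielding one point of the ROC curve. A hedged sketch on synthetic data (the probability model below is invented for illustration, not the paper's output):

```python
import numpy as np

def sens_spec(prob_map, truth, threshold):
    """Sensitivity and specificity after thresholding a vessel-probability map."""
    pred = prob_map >= threshold
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    return tp / (tp + fn), tn / (tn + fp)

# Invented data: ~20% vessel voxels, probabilities centered at 0.8 (vessel)
# and 0.2 (background) with Gaussian spread.
rng = np.random.default_rng(0)
truth = rng.random(1000) < 0.2
prob = np.where(truth, 0.8, 0.2) + rng.normal(0, 0.15, 1000)
for t in (0.3, 0.5, 0.7):
    se, sp = sens_spec(prob, truth, t)
    print(t, round(se, 3), round(sp, 3))
```

Raising the threshold trades sensitivity for specificity, which is exactly the application-specific tuning the fuzzy output enables.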
Segmentation and Quantitative Analysis of Epithelial Tissues.
Aigouy, Benoit; Umetsu, Daiki; Eaton, Suzanne
2016-01-01
Epithelia are tissues that regulate exchanges with the environment. They are very dynamic and can acquire virtually any shape; at the cellular level, they are composed of cells tightly connected by junctions. Most often epithelia are amenable to live imaging; however, the large number of cells composing an epithelium and the absence of informatics tools dedicated to epithelial analysis have largely prevented tissue-scale studies. Here we present Tissue Analyzer, a free tool that can be used to segment and analyze epithelial cells and monitor tissue dynamics.
Takabatake, Reona; Koiwa, Tomohiro; Kasahara, Masaki; Takashima, Kaori; Futo, Satoshi; Minegishi, Yasutaka; Akiyama, Hiroshi; Teshima, Reiko; Oguchi, Taichi; Mano, Junichi; Furui, Satoshi; Kitta, Kazumi
2011-01-01
To reduce the cost and time required to routinely perform the genetically modified organism (GMO) test, we developed a duplex quantitative real-time PCR method for a screening analysis simultaneously targeting an event-specific segment for GA21 and Cauliflower Mosaic Virus 35S promoter (P35S) segment [Oguchi et al., J. Food Hyg. Soc. Japan, 50, 117-125 (2009)]. To confirm the validity of the method, an interlaboratory collaborative study was conducted. In the collaborative study, conversion factors (Cfs), which are required to calculate the GMO amount (%), were first determined for two real-time PCR instruments, the ABI PRISM 7900HT and the ABI PRISM 7500. A blind test was then conducted. The limit of quantitation for both GA21 and P35S was estimated to be 0.5% or less. The trueness and precision were evaluated as the bias and reproducibility of the relative standard deviation (RSD(R)). The determined bias and RSD(R) were each less than 25%. We believe the developed method would be useful for the practical screening analysis of GM maize.
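The GMO amount (%) is conventionally obtained from the ratio of event-specific to endogenous-reference copy numbers, divided by the conversion factor Cf measured in 100% GM material. The helper below is a hedged sketch of that arithmetic with made-up numbers, not the validated protocol:

```python
def gmo_percent(event_copies, reference_copies, cf):
    """GMO amount (%) from real-time PCR copy numbers and a conversion
    factor Cf, defined as the event/reference copy ratio in 100% GM material.
    All numbers below are invented for illustration."""
    return (event_copies / reference_copies) / cf * 100.0

print(round(gmo_percent(50.0, 10000.0, 0.5), 2))  # 1.0
```

Determining Cf separately for each PCR instrument, as in the collaborative study, compensates for platform-specific amplification differences.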
Quantitative analysis of regional myocardial performance in coronary artery disease
NASA Technical Reports Server (NTRS)
Stewart, D. K.; Dodge, H. T.; Frimer, M.
1975-01-01
Findings are presented from a group of subjects with significant coronary artery stenosis and from a group of controls, determined by use of a quantitative method for the study of regional myocardial performance based on the frame-by-frame analysis of biplane left ventricular angiograms. Particular emphasis was placed upon the analysis of wall motion in terms of normalized segment dimensions, timing, and velocity of contraction. The results were compared with the method of subjective assessment used clinically.
Multi-object segmentation framework using deformable models for medical imaging analysis.
Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel
2016-08-01
Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures on medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation, combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows several deformable models to be integrated to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing a suitable combination to be selected under different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted.
Experimental results on both computed tomography and magnetic resonance imaging show that the proposed framework has a wide range of applications especially in the presence of adjacent structures of interest or under intra-structure inhomogeneities giving excellent quantitative results.
Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L
2013-03-13
With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
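The reported accuracy metric, the Dice coefficient, is twice the overlap of two binary masks divided by the sum of their sizes. A minimal sketch comparing an invented "automatic" mask against a "manual" one:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True      # automatic mask: 36 pixels
manual = np.zeros((10, 10), dtype=bool)
manual[3:9, 3:9] = True    # manual mask: 36 pixels, shifted by one
print(round(dice(auto, manual), 3))  # 0.694
```

A Dice value of 1 means perfect agreement; the 0.88 average reported above indicates close but imperfect overlap with the manual delineations.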
Alexander, Nathan S; Palczewska, Grazyna; Palczewski, Krzysztof
2015-08-01
Automated image segmentation is a critical step toward achieving a quantitative evaluation of disease states with imaging techniques. Two-photon fluorescence microscopy (TPM) has been employed to visualize the retinal pigmented epithelium (RPE) and provide images indicating the health of the retina. However, segmentation of RPE cells within TPM images is difficult due to small differences in fluorescence intensity between cell borders and cell bodies. Here we present a semi-automated method for segmenting RPE cells that relies upon multiple weak features that differentiate cell borders from the remaining image. These features were scored by a search optimization procedure that built up the cell border in segments around a nucleus of interest. With six images used as a test, our method correctly identified cell borders for 69% of nuclei on average. Performance was strongly dependent upon increasing retinosome content in the RPE. TPM image analysis has the potential of providing improved early quantitative assessments of diseases affecting the RPE.
Predicting Future Morphological Changes of Lesions from Radiotracer Uptake in 18F-FDG-PET Images
Bagci, Ulas; Yao, Jianhua; Miller-Jaster, Kirsten; Chen, Xinjian; Mollura, Daniel J.
2013-01-01
We introduce a novel computational framework to enable automated identification of texture and shape features of lesions on 18F-FDG-PET images through a graph-based image segmentation method. The proposed framework predicts future morphological changes of lesions with high accuracy. The presented methodology has several benefits over conventional qualitative and semi-quantitative methods, due to its fully quantitative nature and high accuracy in each step of (i) detection, (ii) segmentation, and (iii) feature extraction. To evaluate our proposed computational framework, thirty patients received two 18F-FDG-PET scans (60 scans in total) at two different time points. Metastatic papillary renal cell carcinoma, cerebellar hemangioblastoma, non-small cell lung cancer, neurofibroma, lymphomatoid granulomatosis, lung neoplasm, neuroendocrine tumor, soft tissue thoracic mass, non-necrotizing granulomatous inflammation, renal cell carcinoma with papillary and cystic features, diffuse large B-cell lymphoma, metastatic alveolar soft part sarcoma, and small cell lung cancer were included in this analysis. The radiotracer accumulation in patients' scans was automatically detected and segmented by the proposed segmentation algorithm. Delineated regions were used to extract shape and textural features, with the proposed adaptive feature extraction framework, as well as standardized uptake values (SUV) of uptake regions, to conduct a broad quantitative analysis. Evaluation of segmentation results indicates that our proposed segmentation algorithm has a mean Dice similarity coefficient of 85.75±1.75%. We found that 28 of 68 extracted imaging features correlated well with SUVmax (p<0.05), and some of the textural features (such as entropy and maximum probability) were superior in predicting morphological changes of radiotracer uptake regions longitudinally, compared to a single intensity feature such as SUVmax. 
We also found that integrating textural features with SUV measurements significantly improves the prediction accuracy of morphological changes (Spearman correlation coefficient = 0.8715, p<2e-16). PMID:23431398
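The Dice similarity coefficient reported above is a standard overlap measure between a computed segmentation and a reference mask. A minimal sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    intersection = np.logical_and(seg, ref).sum()
    total = seg.sum() + ref.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy example: two overlapping square "lesions" on a 10x10 grid
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True   # 16 pixels
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True   # 16 pixels
print(dice_coefficient(a, b))  # overlap is 9 pixels -> 2*9/32 = 0.5625
```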
A clustering approach to segmenting users of internet-based risk calculators.
Harle, C A; Downs, J S; Padman, R
2011-01-01
Risk calculators are widely available Internet applications that deliver quantitative health risk estimates to consumers. Although these tools are known to have varying effects on risk perceptions, little is known about who will be more likely to accept objective risk estimates. To identify clusters of online health consumers that help explain variation in individual improvement in risk perceptions from web-based quantitative disease risk information, a secondary analysis was performed on data collected in a field experiment that measured people's pre-diabetes risk perceptions before and after visiting a realistic health promotion website that provided quantitative risk information. K-means clustering was performed on numerous candidate variable sets, and the different segmentations were evaluated based on between-cluster variation in risk perception improvement. Variation in responses to risk information was best explained by clustering on pre-intervention absolute pre-diabetes risk perceptions and an objective estimate of personal risk. Members of a high-risk overestimator cluster showed large improvements in their risk perceptions, but clusters of both moderate-risk and high-risk underestimators were much more muted in improving their optimistically biased perceptions. Cluster analysis provided a unique approach for segmenting health consumers and predicting their acceptance of quantitative disease risk information. These clusters suggest that health consumers were very responsive to good news but tended not to incorporate bad news into their self-perceptions. These findings help to quantify variation among online health consumers and may inform the targeted marketing of, and improvements to, risk communication tools on the Internet.
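K-means clustering, used above to segment consumers, alternates between assigning each observation to its nearest centroid and recomputing the centroids. A minimal sketch on hypothetical two-feature data (perceived versus objective risk; all names and values are illustrative, not the study's variables):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's algorithm with farthest-point initialisation."""
    rng = np.random.default_rng(seed)
    centroids = [X[rng.integers(len(X))]]
    while len(centroids) < k:  # next seed: the point farthest from current seeds
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)          # assign each user to nearest centroid
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Hypothetical features per user: [perceived risk, objective risk], both on 0-1 scales
rng = np.random.default_rng(1)
overestimators  = rng.normal([0.8, 0.7], 0.05, size=(30, 2))  # aware of high risk
underestimators = rng.normal([0.2, 0.7], 0.05, size=(30, 2))  # optimistically biased
X = np.vstack([overestimators, underestimators])
centroids, labels = kmeans(X, k=2)
print(sorted(np.bincount(labels).tolist()))  # [30, 30]: the two groups are recovered
```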
NASA Astrophysics Data System (ADS)
Shim, Hackjoon; Kwoh, C. Kent; Yun, Il Dong; Lee, Sang Uk; Bae, Kyongtae
2009-02-01
Osteoarthritis (OA) is associated with degradation of cartilage and related changes in the underlying bone. Quantitative measurement of those changes from MR images is an important biomarker for studying the progression of OA, and it requires reliable segmentation of knee bone and cartilage. The most popular method, manual segmentation of knee joint structures by boundary delineation, is highly laborious and subject to user variation. To overcome these difficulties, we have developed a semi-automated method for segmentation of knee bones that consists of two steps: placement of seeds and computation of the segmentation. In the first step, seeds were placed by the user on a number of slices and then propagated automatically to neighboring images. Seed placement could be performed on any of the sagittal, coronal, or axial planes. The second step, computation of the segmentation, was based on a graph-cuts algorithm in which the optimal segmentation is the one that minimizes a cost function integrating the user-specified seeds with both the regional and boundary properties of the regions to be segmented. The algorithm also allows simultaneous segmentation of the three compartments of the knee bone (femur, tibia, patella). Our method was tested on the knee MR images of six subjects from the Osteoarthritis Initiative (OAI). The segmentation processing time (mean +/- SD) was 22 +/- 4 min, which is much shorter than that of the manual boundary delineation method (typically several hours). With this improved efficiency, our segmentation method will facilitate the quantitative morphologic analysis of changes in knee bones associated with osteoarthritis.
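The graph-cuts cost function described above combines a regional term (fit of each pixel's intensity to the object or background model), a boundary term (penalizing label changes where no intensity edge exists), and user seeds as hard constraints. A simplified 1D sketch of such an energy, assuming Gaussian-style regional models (illustrative, not the paper's exact formulation):

```python
import numpy as np

def segmentation_energy(intensity, labels, seeds, mu_obj, mu_bkg, lam=1.0):
    """Energy of a binary labeling (1 = object, 0 = background) on a 1D profile.

    Regional term: squared distance of each pixel's intensity to its label's
    model mean.  Boundary term: penalty for a label change between neighbours,
    large when their intensities are similar (i.e. no real edge there).
    Seeds are hard constraints: violating one costs infinity.
    """
    intensity = np.asarray(intensity, float)
    labels = np.asarray(labels)
    for idx, lab in seeds.items():            # user seeds must be respected
        if labels[idx] != lab:
            return np.inf
    region = np.where(labels == 1,
                      (intensity - mu_obj) ** 2,
                      (intensity - mu_bkg) ** 2).sum()
    diff = np.abs(np.diff(intensity))
    cut = labels[:-1] != labels[1:]
    boundary = (lam * np.exp(-diff) * cut).sum()  # cheap to cut at strong edges
    return region + boundary

profile = np.array([0.1, 0.2, 0.1, 0.9, 0.8, 0.9])   # dark background, bright bone
seeds = {0: 0, 5: 1}                                  # one background, one object seed
good = np.array([0, 0, 0, 1, 1, 1])
bad  = np.array([0, 0, 1, 1, 1, 1])
e_good = segmentation_energy(profile, good, seeds, mu_obj=0.9, mu_bkg=0.1)
e_bad  = segmentation_energy(profile, bad,  seeds, mu_obj=0.9, mu_bkg=0.1)
print(e_good < e_bad)  # True: cutting at the true edge has lower energy
```

A max-flow/min-cut solver finds the labeling minimizing this energy globally; the sketch only evaluates candidate labelings.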
Segmentation of the pectoral muscle in breast MR images using structure tensor and deformable model
NASA Astrophysics Data System (ADS)
Lee, Myungeun; Kim, Jong Hyo
2012-02-01
Recently, breast MR images have been used in a wider range of clinical applications, including diagnosis, treatment planning, and treatment response evaluation, which requires quantitative analysis and breast tissue segmentation. Although several methods have been proposed for segmenting MR images, robustly segmenting breast tissues from surrounding structures across a wide range of anatomical diversity remains challenging. Therefore, in this paper we propose a practical and general-purpose approach for segmenting the pectoral muscle boundary based on the structure tensor and a deformable model. The segmentation workflow comprises four key steps: preprocessing, detection of the region of interest (ROI) within the breast region, segmentation of the pectoral muscle, and finally extraction and refinement of the pectoral muscle boundary. Experimental results show that the proposed method can segment the pectoral muscle robustly in diverse patient cases. In addition, the proposed method will facilitate quantitative analysis of various breast images.
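The structure tensor referenced above summarizes local gradient directions by Gaussian smoothing of gradient outer products; its eigenstructure gives the dominant local orientation that a boundary model can follow. A minimal 2D sketch (parameters illustrative):

```python
import numpy as np
from scipy import ndimage

def structure_tensor_orientation(image, sigma=2.0):
    """Per-pixel structure tensor [[Jxx, Jxy], [Jxy, Jyy]] and the dominant
    orientation, from Gaussian-smoothed products of image gradients."""
    gy, gx = np.gradient(image.astype(float))      # gradients along rows, cols
    Jxx = ndimage.gaussian_filter(gx * gx, sigma)
    Jxy = ndimage.gaussian_filter(gx * gy, sigma)
    Jyy = ndimage.gaussian_filter(gy * gy, sigma)
    # orientation of the dominant eigenvector, in radians
    theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)
    return Jxx, Jxy, Jyy, theta

# Toy image: a horizontal boundary (e.g. a muscle edge running left to right)
img = np.zeros((32, 32)); img[16:, :] = 1.0
Jxx, Jxy, Jyy, theta = structure_tensor_orientation(img)
# at the edge centre the gradient is vertical, so Jyy dominates Jxx
print(Jyy[16, 16] > Jxx[16, 16])
```

The eigen-decomposition of the 2x2 tensor at each pixel also yields a coherence measure that distinguishes strongly oriented boundary pixels from homogeneous tissue.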
Segmentation and detection of fluorescent 3D spots.
Ram, Sundaresh; Rodríguez, Jeffrey J; Bosco, Giovanni
2012-03-01
The 3D spatial organization of genes and other genetic elements within the nucleus is important for regulating gene expression. Understanding how this spatial organization is established and maintained throughout the life of a cell is key to elucidating the many layers of gene regulation. Quantitative methods for studying nuclear organization will lead to insights into the molecular mechanisms that maintain gene organization as well as serve as diagnostic tools for pathologies caused by loss of nuclear structure. However, biologists currently lack automated and high throughput methods for quantitative and qualitative global analysis of 3D gene organization. In this study, we use confocal microscopy and fluorescence in-situ hybridization (FISH) as a cytogenetic technique to detect and localize the presence of specific DNA sequences in 3D. FISH uses probes that bind to specific targeted locations on the chromosomes, appearing as fluorescent spots in 3D images obtained using fluorescence microscopy. In this article, we propose an automated algorithm for segmentation and detection of 3D FISH spots. The algorithm is divided into two stages: spot segmentation and spot detection. Spot segmentation consists of 3D anisotropic smoothing to reduce the effect of noise, top-hat filtering, and intensity thresholding, followed by 3D region-growing. Spot detection uses a Bayesian classifier with spot features such as volume, average intensity, texture, and contrast to detect and classify the segmented spots as either true or false spots. Quantitative assessment of the proposed algorithm demonstrates improved segmentation and detection accuracy compared to other techniques. Copyright © 2012 International Society for Advancement of Cytometry.
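The segmentation stage described above (smoothing, top-hat filtering, intensity thresholding, region analysis) can be sketched with standard image-processing primitives. This simplified version omits the anisotropic smoothing, region growing, and Bayesian classification stages:

```python
import numpy as np
from scipy import ndimage

def segment_spots(volume, tophat_size=5, threshold=0.5):
    """Simplified 3D spot segmentation: Gaussian smoothing, white top-hat
    background removal, thresholding, and connected-component labelling."""
    smoothed = ndimage.gaussian_filter(volume.astype(float), sigma=1.0)
    tophat = ndimage.white_tophat(smoothed, size=tophat_size)
    mask = tophat > threshold * tophat.max()
    labels, n_spots = ndimage.label(mask)
    return labels, n_spots

# Synthetic volume with two bright FISH-like spots on a noisy background
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.05, size=(20, 20, 20))
vol[5, 5, 5] = vol[14, 14, 14] = 5.0
labels, n_spots = segment_spots(vol)
print(n_spots)  # 2
```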
Nanthagopal, A Padma; Rajamony, R Sukanesh
2012-07-01
The proposed system provides new textural information for segmenting benign and malignant tumours efficiently and accurately, with reduced computational time, especially for small tumour regions in computed tomography (CT) images. Region-based segmentation of tumour from brain CT image data is an important but time-consuming task performed manually by medical experts. The objective of this work is to segment brain tumour from CT images using combined grey and texture features with new edge features and a nonlinear support vector machine (SVM) classifier. The selected optimal features are used to model and train the nonlinear SVM classifier to segment the tumour from computed tomography images, and the segmentation accuracy is evaluated for each slice of the tumour image. The method is applied to real data comprising 80 benign and malignant tumour images. The results are compared with the radiologist-labelled ground truth. Quantitative analysis between the ground truth and the segmented tumour is presented in terms of segmentation accuracy and the overlap similarity measure (Dice metric). From this analysis, it is inferred that better segmentation accuracy and a higher Dice metric are achieved with the normalized cut segmentation method than with the fuzzy c-means clustering method.
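Texture features of the kind used above are commonly derived from a grey-level co-occurrence matrix (GLCM). A minimal sketch of a horizontal-neighbour GLCM with two classic features (illustrative; not the paper's exact feature set):

```python
import numpy as np

def glcm_features(quantized, levels):
    """Grey-level co-occurrence matrix for horizontal neighbour pairs, plus
    two classic texture features: contrast and energy."""
    glcm = np.zeros((levels, levels))
    left, right = quantized[:, :-1].ravel(), quantized[:, 1:].ravel()
    np.add.at(glcm, (left, right), 1)
    p = glcm / glcm.sum()                       # joint probability of grey pairs
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()         # high for abrupt grey changes
    energy = (p ** 2).sum()                     # high for uniform textures
    return p, contrast, energy

smooth = np.zeros((8, 8), dtype=int)                 # flat region
checker = np.indices((8, 8)).sum(axis=0) % 2         # alternating texture
_, c_smooth, _ = glcm_features(smooth, levels=2)
_, c_checker, _ = glcm_features(checker, levels=2)
print(c_smooth, c_checker)  # 0.0 1.0: the checkerboard has maximal contrast
```

Per-pixel feature vectors built from such statistics (plus grey and edge features) are what a trained SVM classifies into tumour versus non-tumour.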
Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina
2016-05-01
Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
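A simple stand-in for cone detection is local-maximum detection on a smoothed image; the AFLD method is more sophisticated, but the sketch below illustrates the basic detection step (all parameters illustrative):

```python
import numpy as np
from scipy import ndimage

def detect_cones(image, sigma=1.0, min_distance=3, rel_threshold=0.5):
    """Toy cone detector: smooth, then keep pixels that are the maximum of
    their (2*min_distance+1) window and brighter than a global threshold."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    size = 2 * min_distance + 1
    local_max = smoothed == ndimage.maximum_filter(smoothed, size=size)
    peaks = local_max & (smoothed > rel_threshold * smoothed.max())
    return np.argwhere(peaks)

# Synthetic mosaic: bright blobs on a regular grid, like cone inner segments
img = np.zeros((40, 40))
for y in range(5, 40, 10):
    for x in range(5, 40, 10):
        img[y, x] = 1.0
coords = detect_cones(img)
print(len(coords))  # 16 cones on a 4x4 grid
```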
A two-stage method for microcalcification cluster segmentation in mammography by deformable models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.
Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is a prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method for MC clusters is investigated. The first stage is targeted to accurate and time-efficient segmentation of the majority of the particles of a MC cluster, by means of a level set method. The second stage is targeted to shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method in terms of inter- and intraobserver agreement was evaluated in a case sample of 80 MC clusters originating from the digital database for screening mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists' segmentations quantitatively by two distance metrics (Hausdorff distance, HDIST_cluster; average of minimum distance, AMINDIST_cluster) and the area overlap measure (AOM_cluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted, and a correlation-based feature selection method yielded a feature subset to feed into a support vector machine classifier.
Classification performance of the MC cluster features was estimated by means of the area under the receiver operating characteristic curve (Az ± standard error) utilizing tenfold cross-validation methodology. A previously developed B-spline active rays segmentation method was also considered for comparison purposes. Results: Interobserver and intraobserver segmentation agreements (median and [25%, 75%] quartile range) were substantial with respect to the distance metrics HDIST_cluster (2.3 [1.8, 2.9] and 2.5 [2.1, 3.2] pixels) and AMINDIST_cluster (0.8 [0.6, 1.0] and 1.0 [0.8, 1.2] pixels), while moderate with respect to AOM_cluster (0.64 [0.55, 0.71] and 0.59 [0.52, 0.66]). The proposed segmentation method (0.80 ± 0.04) outperformed the B-spline active rays segmentation method (0.69 ± 0.04) in a statistically significant manner (Mann-Whitney U-test, p < 0.05). Conclusions: Results indicate a reliable semiautomated segmentation method for MC clusters based on deformable models, which could be utilized in quantitative image analysis of MC clusters.
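The agreement measures used above can be computed directly from binary masks. A brute-force sketch of the Hausdorff distance, average minimum distance, and area overlap measure (illustrative simplification; the paper applies them per MC cluster):

```python
import numpy as np

def segmentation_agreement(mask_a, mask_b):
    """Hausdorff distance (HDIST), average minimum distance (AMINDIST), and
    area overlap measure (AOM, intersection over union) between two binary
    masks, using brute-force distances between foreground pixels."""
    pa, pb = np.argwhere(mask_a), np.argwhere(mask_b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    hdist = max(d.min(axis=1).max(), d.min(axis=0).max())      # worst mismatch
    amindist = 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
    aom = np.logical_and(mask_a, mask_b).sum() / np.logical_or(mask_a, mask_b).sum()
    return hdist, amindist, aom

a = np.zeros((12, 12), dtype=bool); a[3:8, 3:8] = True    # one grader's outline
b = np.zeros((12, 12), dtype=bool); b[4:9, 4:9] = True    # the other grader's outline
hdist, amindist, aom = segmentation_agreement(a, b)
print(hdist)  # sqrt(2): the worst corner-to-corner mismatch
```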
A quantitative study of nanoparticle skin penetration with interactive segmentation.
Lee, Onseok; Lee, See Hyun; Jeong, Sang Hoon; Kim, Jaeyoung; Ryu, Hwa Jung; Oh, Chilhwan; Son, Sang Wook
2016-10-01
In the last decade, the application of nanotechnology techniques has expanded within diverse areas such as pharmacology, medicine, and optical science. Despite such wide-ranging possibilities for implementation into practice, the mechanisms behind nanoparticle skin absorption remain unknown, and the main mode of investigation has been qualitative analysis. Using interactive segmentation, this study suggests a method of objectively and quantitatively analyzing the mechanisms underlying the skin absorption of nanoparticles. Silica nanoparticles (SNPs) were assessed using transmission electron microscopy and applied to a human skin equivalent model. Captured fluorescence images of this model were used to evaluate degrees of skin penetration. These images underwent interactive segmentation and image processing, in addition to statistical quantitative analyses of calculated image parameters including the mean, integrated density, skewness, kurtosis, and area fraction. In images from both groups, the distribution area and intensity of fluorescent silica gradually increased in proportion to time. Statistical significance was reached after 2 days in the negative charge group but only after 4 days in the positive charge group, indicating a difference in time course between the groups. Furthermore, the quantity of silica per unit area showed a dramatic change after 6 days in the negative charge group. Although this quantitative result is consistent with the qualitative assessment, it is meaningful in that it was established by statistical analysis of quantities derived through image processing. The present study suggests that the surface charge of SNPs could play an important role in the percutaneous absorption of NPs. These findings can help achieve a better understanding of the percutaneous transport of NPs. In addition, these results provide important guidance for the design of NPs for biomedical applications.
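The image parameters analyzed above are straightforward to compute from a fluorescence channel. A minimal sketch (the threshold and data are purely illustrative):

```python
import numpy as np
from scipy import stats

def fluorescence_parameters(image, threshold=0.1):
    """The image parameters named above, computed over one fluorescence
    channel: mean, integrated density, skewness, kurtosis, area fraction."""
    pixels = image.ravel().astype(float)
    return {
        "mean": pixels.mean(),
        "integrated_density": pixels.sum(),
        "skewness": stats.skew(pixels),
        "kurtosis": stats.kurtosis(pixels),
        "area_fraction": np.mean(pixels > threshold),  # fraction above threshold
    }

# Synthetic channel: mostly dark with a small bright region of "silica"
img = np.full((10, 10), 0.02)
img[0:2, 0:5] = 0.9                     # 10 of 100 pixels fluoresce
params = fluorescence_parameters(img)
print(params["area_fraction"])  # 0.1
```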
Quantification of EEG reactivity in comatose patients
Hermans, Mathilde C.; Westover, M. Brandon; van Putten, Michel J.A.M.; Hirsch, Lawrence J.; Gaspard, Nicolas
2016-01-01
Objective: EEG reactivity is an important predictor of outcome in comatose patients. However, visual analysis of reactivity is prone to subjectivity and may benefit from quantitative approaches. Methods: In EEG segments recorded during reactivity testing in 59 comatose patients, 13 quantitative EEG parameters were used to compare the spectral characteristics of 1-minute segments before and after the onset of stimulation (spectral temporal symmetry). Reactivity was quantified with probability values estimated using combinations of these parameters. The accuracy of the probability values as a reactivity classifier was evaluated against the consensus assessment of three expert clinical electroencephalographers using visual analysis. Results: The binary classifier assessing spectral temporal symmetry in four frequency bands (delta, theta, alpha, and beta) showed the best accuracy (median AUC: 0.95) and was accompanied by substantial agreement with the individual opinions of experts (Gwet's AC1: 65-70%), at least as good as inter-expert agreement (AC1: 55%). Probability values also reflected the degree of reactivity, as measured by the inter-expert agreement regarding reactivity for each individual case. Conclusion: Automated quantitative EEG approaches based on a probabilistic description of spectral temporal symmetry reliably quantify EEG reactivity. Significance: Quantitative EEG may be useful for evaluating reactivity in comatose patients, offering increased objectivity. PMID:26183757
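Spectral temporal symmetry rests on comparing band power before and after stimulation. A minimal sketch of band-power computation on synthetic pre/post segments (parameters illustrative; this is not the authors' 13-parameter method):

```python
import numpy as np

def band_power(segment, fs, lo, hi):
    """Power of an EEG segment within [lo, hi) Hz via the periodogram."""
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(segment)) ** 2
    return psd[(freqs >= lo) & (freqs < hi)].sum()

fs = 250
t = np.arange(0, 60, 1 / fs)                    # one minute at 250 Hz
pre  = np.sin(2 * np.pi * 2 * t)                # delta-dominated before stimulus
post = 0.3 * np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 10 * t)  # alpha appears
delta_ratio = band_power(post, fs, 1, 4) / band_power(pre, fs, 1, 4)
alpha_ratio = band_power(post, fs, 8, 13) / (band_power(pre, fs, 8, 13) + 1e-12)
print(delta_ratio < 1 and alpha_ratio > 1)  # True: the spectra differ, i.e. "reactive"
```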
Lim, Issel Anne L; Faria, Andreia V; Li, Xu; Hsu, Johnny T C; Airan, Raag D; Mori, Susumu; van Zijl, Peter C M
2013-11-15
The purpose of this paper is to extend the single-subject Eve atlas from Johns Hopkins University, which currently contains diffusion tensor and T1-weighted anatomical maps, by including contrast based on quantitative susceptibility mapping. The new atlas combines a "deep gray matter parcellation map" (DGMPM) derived from a single-subject quantitative susceptibility map with the previously established "white matter parcellation map" (WMPM) from the same subject's T1-weighted and diffusion tensor imaging data into an MNI coordinate map named the "Everything Parcellation Map in Eve Space," also known as the "EvePM." It allows automated segmentation of gray matter and white matter structures. Quantitative susceptibility maps from five healthy male volunteers (30 to 33 years of age) were coregistered to the Eve Atlas with AIR and Large Deformation Diffeomorphic Metric Mapping (LDDMM), and the transformation matrices were applied to the EvePM to produce automated parcellation in subject space. Parcellation accuracy was measured with a kappa analysis for the left and right structures of six deep gray matter regions. For multi-orientation QSM images, the Kappa statistic was 0.85 between automated and manual segmentation, with the inter-rater reproducibility Kappa being 0.89 for the human raters, suggesting "almost perfect" agreement between all segmentation methods. Segmentation seemed slightly more difficult for human raters on single-orientation QSM images, with the Kappa statistic being 0.88 between automated and manual segmentation, and 0.85 and 0.86 between human raters. Overall, this atlas provides a time-efficient tool for automated coregistration and segmentation of quantitative susceptibility data to analyze many regions of interest. These data were used to establish a baseline for normal magnetic susceptibility measurements for over 60 brain structures of 30- to 33-year-old males. 
Correlating the average susceptibility with age-based iron concentrations in gray matter structures measured by Hallgren and Sourander (1958) allowed interpolation of the average iron concentration of several deep gray matter regions delineated in the EvePM. Copyright © 2013 Elsevier Inc. All rights reserved.
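The kappa statistic reported above corrects raw agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa for two labelings (toy voxel data, not the atlas data):

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two labelings, corrected for chance."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    cats = np.unique(np.concatenate([r1, r2]))
    po = np.mean(r1 == r2)                                       # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical voxel labels (1 = structure, 0 = background) from an automated
# parcellation and a manual rater
auto   = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])
manual = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])
print(round(cohens_kappa(auto, manual), 2))  # 0.8
```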
An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.
Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong
2014-08-01
Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key biomarker in the diagnosis of muscular dystrophy. In nuclei segmentation, one primary challenge is to correctly separate clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background by using a local Otsu threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment the clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
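Otsu's method, used above in local windows for the initial extraction, picks the threshold that maximizes the between-class variance of the grey-level histogram. A global-threshold sketch (the pipeline's local variant applies the same idea per window):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Otsu's method: return the histogram-bin edge that maximises the
    between-class variance of the grey-level histogram."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                              # background class weight
    w1 = 1.0 - w0                                  # foreground class weight
    cum_mean = np.cumsum(p * centers)
    m0 = cum_mean / np.maximum(w0, 1e-12)          # background class mean
    m1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)  # foreground class mean
    between = w0 * w1 * (m0 - m1) ** 2             # between-class variance
    return edges[np.argmax(between) + 1]

# Bimodal toy image: dim background plus a bright square of "nuclei"
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.03, size=(64, 64))
img[20:30, 20:30] = rng.normal(0.8, 0.03, size=(10, 10))
t = otsu_threshold(img)
print(0.2 < t < 0.8)  # the threshold falls in the gap between the two modes
```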
Automatic segmentation and supervised learning-based selection of nuclei in cancer tissue images.
Nandy, Kaustav; Gudla, Prabhakar R; Amundsen, Ryan; Meaburn, Karen J; Misteli, Tom; Lockett, Stephen J
2012-09-01
Analysis of preferential localization of certain genes within the cell nuclei is emerging as a new technique for the diagnosis of breast cancer. Quantitation requires accurate segmentation of 100-200 cell nuclei in each tissue section to draw a statistically significant result. Thus, for large-scale analysis, manual processing is too time consuming and subjective. Fortuitously, acquired images generally contain many more nuclei than are needed for analysis. Therefore, we developed an integrated workflow that selects, following automatic segmentation, a subpopulation of accurately delineated nuclei for positioning of fluorescence in situ hybridization-labeled genes of interest. Segmentation was performed by a multistage watershed-based algorithm and screening by an artificial neural network-based pattern recognition engine. The performance of the workflow was quantified in terms of the fraction of automatically selected nuclei that were visually confirmed as well segmented and by the boundary accuracy of the well-segmented nuclei relative to a 2D dynamic programming-based reference segmentation method. Application of the method was demonstrated for discriminating normal and cancerous breast tissue sections based on the differential positioning of the HES5 gene. Automatic results agreed with manual analysis in 11 out of 14 cancers, all four normal cases, and all five noncancerous breast disease cases, thus showing the accuracy and robustness of the proposed approach. Published 2012 Wiley Periodicals, Inc.
A patient-specific segmentation framework for longitudinal MR images of traumatic brain injury
NASA Astrophysics Data System (ADS)
Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Chambers, Micah C.; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido
2012-02-01
Traumatic brain injury (TBI) is a major cause of death and disability worldwide. Robust, reproducible segmentations of MR images with TBI are crucial for quantitative analysis of recovery and treatment efficacy. However, this is a significant challenge due to severe anatomy changes caused by edema (swelling), bleeding, tissue deformation, skull fracture, and other effects related to head injury. In this paper, we introduce a multi-modal image segmentation framework for longitudinal TBI images. The framework is initialized through manual input of primary lesion sites at each time point, which are then refined by a joint approach composed of Bayesian segmentation and construction of a personalized atlas. The personalized atlas construction estimates the average of the posteriors of the Bayesian segmentation at each time point and warps the average back to each time point to provide the updated priors for Bayesian segmentation. The difference between our approach and segmenting longitudinal images independently is that we use the information from all time points to improve the segmentations. Given a manual initialization, our framework automatically segments healthy structures (white matter, grey matter, cerebrospinal fluid) as well as different lesions such as hemorrhagic lesions and edema. Our framework can handle different sets of modalities at each time point, which provides flexibility in analyzing clinical scans. We show results on three subjects with acute baseline scans and chronic follow-up scans. The results demonstrate that joint analysis of all the points yields improved segmentation compared to independent analysis of the two time points.
Debuc, Delia Cabrera; Salinas, Harry M; Ranganathan, Sudarshan; Tátrai, Erika; Gao, Wei; Shen, Meixiao; Wang, Jianhua; Somfai, Gábor M; Puliafito, Carmen A
2010-01-01
We demonstrate quantitative analysis and error correction of optical coherence tomography (OCT) retinal images by using a custom-built, computer-aided grading methodology. A total of 60 Stratus OCT (Carl Zeiss Meditec, Dublin, California) B-scans collected from ten normal healthy eyes are analyzed by two independent graders. The average retinal thickness per macular region is compared with the automated Stratus OCT results. Intergrader and intragrader reproducibility is calculated by Bland-Altman plots of the mean difference between both gradings and by Pearson correlation coefficients. In addition, the correlation between Stratus OCT and our methodology-derived thickness is also presented. The mean thickness difference between Stratus OCT and our methodology is 6.53 microm and 26.71 microm when using the inner segment/outer segment (IS/OS) junction and outer segment/retinal pigment epithelium (OS/RPE) junction as the outer retinal border, respectively. Overall, the median of the thickness differences as a percentage of the mean thickness is less than 1% and 2% for the intragrader and intergrader reproducibility test, respectively. The measurement accuracy range of the OCT retinal image analysis (OCTRIMA) algorithm is between 0.27 and 1.47 microm and 0.6 and 1.76 microm for the intragrader and intergrader reproducibility tests, respectively. Pearson correlation coefficients demonstrate R(2)>0.98 for all Early Treatment Diabetic Retinopathy Study (ETDRS) regions. Our methodology facilitates a more robust and localized quantification of the retinal structure in normal healthy controls and patients with clinically significant intraretinal features.
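Bland-Altman analysis, used above for reproducibility, reports the mean difference (bias) between two measurement methods and its 95% limits of agreement. A minimal sketch on hypothetical thickness readings (values illustrative, not the study's data):

```python
import numpy as np

def bland_altman(measure_a, measure_b):
    """Bland-Altman statistics: mean difference (bias) and 95% limits of
    agreement between two measurement methods."""
    a, b = np.asarray(measure_a, float), np.asarray(measure_b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)          # half-width of the limits of agreement
    return bias, bias - loa, bias + loa

# Hypothetical retinal thickness readings (micrometres) from two graders
grader1 = np.array([248.0, 252.0, 250.0, 255.0, 249.0])
grader2 = np.array([247.0, 251.5, 250.5, 254.0, 248.5])
bias, lower, upper = bland_altman(grader1, grader2)
print(bias)  # 0.5: mean intergrader difference in micrometres
```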
Song, Youyi; He, Liang; Zhou, Feng; Chen, Siping; Ni, Dong; Lei, Baiying; Wang, Tianfu
2017-07-01
Quantitative analysis of bacterial morphotypes in microscope images plays a vital role in the diagnosis of bacterial vaginosis (BV) based on the Nugent score criterion. However, there are two main challenges for this task: 1) it is quite difficult to identify the bacterial regions due to varied appearance, faint boundaries, heterogeneous shapes, low contrast with the background, and small bacteria sizes relative to the image; 2) numerous bacteria overlap each other, which hinders accurate analysis of individual bacteria. To overcome these challenges, we propose an automatic method in this paper to diagnose BV by quantitative analysis of bacterial morphotypes, which consists of a three-step approach: bacteria region segmentation, overlapping bacteria splitting, and bacterial morphotype classification. Specifically, we first segment the bacteria regions via saliency cut, which simultaneously evaluates global contrast and spatially weighted coherence, and a Markov random field model is then applied for high-quality unsupervised segmentation of small objects. We then decompose overlapping bacteria clumps into markers and associate pixels with markers to provide evidence for splitting individual bacteria. Next, we extract morphotype features from each bacterium to learn descriptors and characterize the types of bacteria using an Adaptive Boosting machine learning framework. Finally, BV diagnosis is performed based on the Nugent score criterion. Experiments demonstrate that our proposed method achieves high accuracy and computational efficiency for BV diagnosis.
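The Nugent score combines graded counts of three bacterial morphotypes per oil-immersion field. A sketch of the scoring rule, with thresholds as commonly tabulated from Nugent et al. (1991); illustrative only, not clinical software:

```python
def nugent_score(lactobacilli, gardnerella, curved_rods):
    """Nugent score from average morphotype counts per oil-immersion field.
    Count-to-points thresholds follow the commonly tabulated criterion."""
    def grade(count):  # 0, <1, 1-4, 5-30, >30 organisms per field -> 0..4
        if count == 0:
            return 0
        if count < 1:
            return 1
        if count < 5:
            return 2
        return 3 if count <= 30 else 4

    score = (4 - grade(lactobacilli)          # lactobacilli are protective: inverted
             + grade(gardnerella)             # Gardnerella/Bacteroides morphotypes
             + (0 if curved_rods == 0 else (1 if curved_rods < 5 else 2)))
    label = "BV" if score >= 7 else ("intermediate" if score >= 4 else "normal")
    return score, label

print(nugent_score(lactobacilli=40, gardnerella=0, curved_rods=0))  # (0, 'normal')
print(nugent_score(lactobacilli=0, gardnerella=35, curved_rods=6))  # (10, 'BV')
```

In the pipeline above, the per-field counts would come from the classified, split bacteria rather than manual counting.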
Bartesaghi, Alberto; Sapiro, Guillermo; Subramaniam, Sriram
2006-01-01
Electron tomography allows the determination of the three-dimensional structures of cells and tissues at resolutions significantly higher than is possible with optical microscopy. Electron tomograms contain, in principle, vast amounts of information on the locations and architectures of large numbers of subcellular assemblies and organelles. The development of reliable quantitative approaches for the analysis of features in tomograms is an important problem, and a challenging prospect due to the low signal-to-noise ratios inherent in biological electron microscopic images, which are in part a consequence of the tremendous complexity of biological specimens. We report a new method for the automated segmentation of HIV particles and selected cellular compartments in electron tomograms recorded from fixed, plastic-embedded sections derived from HIV-infected human macrophages. Individual features in the tomogram are segmented using a novel robust algorithm that finds their boundaries as global minimal surfaces in a metric space defined by image features. The optimization is carried out in a transformed spherical domain centered on an interior point of the particle of interest, providing a proper setting for fast and accurate minimization of the segmentation energy. This method provides tools for the semi-automated detection and statistical evaluation of HIV particles at different stages of assembly in cells and presents opportunities for correlation with biochemical markers of HIV infection. The segmentation algorithm developed here forms the basis of automated analysis of electron tomograms and will be especially useful given the rapid increases in the rate of data acquisition. It could also enable studies of much larger data sets, such as those that might be obtained from the tomographic analysis of HIV-infected cells in studies of large populations. PMID:16190467
Multiplex Quantitative Histologic Analysis of Human Breast Cancer Cell Signaling and Cell Fate
2010-05-01
Keywords: breast cancer, cell signaling, cell proliferation, histology, image analysis. Recoverable report fragments: "…revealed by individual stains in multiplex combinations"; "(3) software (FARSIGHT) for automated multispectral image analysis that (i) segments…"; "Task 3. Develop computational algorithms for multispectral immunohistological image analysis: FARSIGHT software was developed to quantify intrinsic…"
Geraghty, John P; Grogan, Garry; Ebert, Martin A
2013-04-30
This study investigates the variation in segmentation of several pelvic anatomical structures on computed tomography (CT) between multiple observers and a commercial automatic segmentation method, in the context of quality assurance and evaluation during a multicentre clinical trial. CT scans of two prostate cancer patients ('benchmarking cases'), one high risk (HR) and one intermediate risk (IR), were sent to multiple radiotherapy centres for segmentation of prostate, rectum and bladder structures according to the TROG 03.04 "RADAR" trial protocol definitions. The same structures were automatically segmented using iPlan software for the same two patients, allowing structures defined by automatic segmentation to be quantitatively compared with those defined by multiple observers. A sample of twenty trial patient datasets were also used to automatically generate anatomical structures for quantitative comparison with structures defined by individual observers for the same datasets. There was considerable agreement amongst all observers and automatic segmentation of the benchmarking cases for bladder (mean spatial variations < 0.4 cm across the majority of image slices). Although there was some variation in interpretation of the superior-inferior (cranio-caudal) extent of rectum, human-observer contours were typically within a mean 0.6 cm of automatically-defined contours. Prostate structures were more consistent for the HR case than the IR case with all human observers segmenting a prostate with considerably more volume (mean +113.3%) than that automatically segmented. Similar results were seen across the twenty sample datasets, with disagreement between iPlan and observers dominant at the prostatic apex and superior part of the rectum, which is consistent with observations made during quality assurance reviews during the trial. This study has demonstrated quantitative analysis for comparison of multi-observer segmentation studies. 
For automatic segmentation algorithms based on image-registration as in iPlan, it is apparent that agreement between observer and automatic segmentation will be a function of patient-specific image characteristics, particularly for anatomy with poor contrast definition. For this reason, it is suggested that automatic registration based on transformation of a single reference dataset adds a significant systematic bias to the resulting volumes and their use in the context of a multicentre trial should be carefully considered.
Lin, Yi-Yang; Lee, Rheun-Chuan; Tseng, Hsiuo-Shan; Liu, Chien-An; Guo, Wan-Yuo; Chang, Cheng-Yen
2015-12-01
To quantitatively measure the hemodynamic change of the hepatic artery before and after transcatheter arterial chemoembolization (TACE) of hepatocellular carcinoma (HCC) by quantitative color-coding analysis (QCA). This prospective study enrolled 64 consecutive HCC patients who underwent segmental or subsegmental TACE with epirubicin and lipiodol at level 2 or 3 of the subjective angiographic chemoembolization endpoint. QCA was used to determine the time of maximal density (T(max)) at a selected intravascular region of interest (ROI). Relative T(max) (rT(max)) was defined as the T(max) at the selected ROI minus the time of contrast medium spurting from the catheter tip. The rT(max) of the hepatic arteries was analyzed before and after embolization. The pre- and post-treatment rT(max) of the landmarks at the treated segmental artery were 1.96 ± 0.48 and 3.14 ± 1.77 s (p < 0.001). By treated lobe, 30 patients were treated for the right lobe alone and 8 patients for the left lobe alone; the pre- and post-treatment rT(max) of the treated segmental artery were 2.06 ± 0.54 versus 3.34 ± 1.63 s (p < 0.001) for the right lobe and 1.89 ± 0.45 versus 2.68 ± 1.46 s (p = 0.12) for the left lobe. The rT(max) of the proximal lobar hepatic arteries and the proper hepatic artery showed no significant change before and after TACE. QCA is a feasible way to quantify embolization endpoints by comparing rT(max) in selected hepatic arteries before and after TACE; the rT(max) of the treated segmental artery was significantly prolonged after optimized procedures.
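The rT(max) definition reduces to simple arithmetic on a time-density curve. A minimal sketch with hypothetical sample values (the function name and units are illustrative):

```python
def relative_tmax(times, densities, t_spurt):
    """rT(max): time of peak contrast density at the ROI minus the time
    contrast is first seen spurting from the catheter tip."""
    t_max = max(zip(times, densities), key=lambda td: td[1])[0]
    return t_max - t_spurt

# Hypothetical time-density samples (seconds, arbitrary density units)
times = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
dens = [0, 5, 40, 90, 120, 95, 60]
rt = relative_tmax(times, dens, t_spurt=0.5)  # peak at 2.0 s -> rT(max) = 1.5 s
```

A prolonged rT(max) after embolization, as reported above, corresponds to the density peak arriving later relative to contrast injection.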
Thermal image analysis using the serpentine method
NASA Astrophysics Data System (ADS)
Koprowski, Robert; Wilczyński, Sławomir
2018-03-01
Thermal imaging is an increasingly widespread alternative to other imaging methods. As a supplementary method in diagnostics, it can be used both statically and with dynamic temperature changes. The paper proposes a new image analysis method that allows for the acquisition of new diagnostic information as well as object segmentation. The proposed serpentine analysis uses known methods of image analysis and processing together with new ones proposed by the authors. Affine transformations of an image and subsequent Fourier analysis provide a new diagnostic quality. The method is fully repeatable, automatic and independent of inter-individual variability in patients. The segmentation results are 10% better than those obtained with the watershed method and with a hybrid segmentation method based on the Canny detector. The first and second harmonics of the serpentine analysis make it possible to determine the type of temperature change in the region of interest (gradient, number of heat sources, etc.). The presented serpentine method thus provides new quantitative information from thermal imaging. Since it also allows for image segmentation and for locating the contact points of two or more heat sources (local minima), it can be used to support medical diagnostics in many areas of medicine.
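The paper does not spell out implementation details, but one plausible reading of a "serpentine" analysis is a boustrophedon scan of the image into a 1-D signal followed by extraction of the first and second DFT harmonics. A sketch under that assumption, with a hypothetical 4×4 thermal image:

```python
import math

def serpentine(image):
    """Unroll a 2-D image into 1-D by scanning rows boustrophedon-style
    (left-to-right, then right-to-left), one plausible 'serpentine' path."""
    signal = []
    for i, row in enumerate(image):
        signal.extend(row if i % 2 == 0 else row[::-1])
    return signal

def harmonic_magnitude(signal, k):
    """Magnitude of the k-th DFT harmonic of the serpentine signal."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
    return math.hypot(re, im)

# Hypothetical 4x4 thermal image (degrees C) with one warm region
img = [[20, 20, 21, 20],
       [20, 30, 32, 21],
       [20, 31, 33, 21],
       [20, 20, 21, 20]]
sig = serpentine(img)
h1, h2 = harmonic_magnitude(sig, 1), harmonic_magnitude(sig, 2)
```

Per the abstract, the relative strengths of the first and second harmonics would then characterize the kind of temperature variation (gradient versus multiple heat sources).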
Brown, H G; Shibata, N; Sasaki, H; Petersen, T C; Paganin, D M; Morgan, M J; Findlay, S D
2017-11-01
Electric field mapping using segmented detectors in the scanning transmission electron microscope has recently been achieved at the nanometre scale. However, converting these results to quantitative field measurements involves assumptions whose validity is unclear for thick specimens. We consider three approaches to quantitative reconstruction of the projected electric potential using segmented detectors: a segmented detector approximation to differential phase contrast and two variants on ptychographical reconstruction. Limitations to these approaches are also studied, particularly errors arising from detector segment size, inelastic scattering, and non-periodic boundary conditions. A simple calibration experiment is described which corrects the differential phase contrast reconstruction to give reliable quantitative results despite the finite detector segment size and the effects of plasmon scattering in thick specimens. A plasmon scattering correction to the segmented detector ptychography approaches is also given. Avoiding the imposition of periodic boundary conditions on the reconstructed projected electric potential leads to more realistic reconstructions. Copyright © 2017 Elsevier B.V. All rights reserved.
A Review on Segmentation of Positron Emission Tomography Images
Foster, Brent; Bagci, Ulas; Mansoor, Awais; Xu, Ziyue; Mollura, Daniel J.
2014-01-01
Positron Emission Tomography (PET), a non-invasive functional imaging method at the molecular level, images the distribution of biologically targeted radiotracers with high sensitivity. PET imaging provides detailed quantitative information about many diseases and is often used to evaluate inflammation, infection, and cancer by detecting emitted photons from a radiotracer localized to abnormal cells. In order to differentiate abnormal tissue from surrounding areas in PET images, image segmentation methods play a vital role; therefore, accurate image segmentation is often necessary for proper disease detection, diagnosis, treatment planning, and follow-ups. In this review paper, we present state-of-the-art PET image segmentation methods, as well as the recent advances in image segmentation techniques. In order to make this manuscript self-contained, we also briefly explain the fundamentals of PET imaging, the challenges of diagnostic PET image analysis, and the effects of these challenges on the segmentation results. PMID:24845019
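Among the PET segmentation families such reviews cover, fixed thresholding relative to the maximum uptake is the simplest. A minimal sketch (the 40% fraction and the SUV slice below are illustrative, not taken from the paper):

```python
def threshold_segment(suv, fraction=0.4):
    """Fixed-threshold segmentation: keep voxels at or above a fraction of
    the maximum SUV, one of the simplest PET delineation approaches."""
    suv_max = max(max(row) for row in suv)
    cut = fraction * suv_max
    return [[1 if v >= cut else 0 for v in row] for row in suv]

# Hypothetical 2-D SUV slice with a hot lesion
slice_ = [[0.5, 0.8, 0.6],
          [0.7, 9.0, 8.0],
          [0.6, 7.5, 0.9]]
mask = threshold_segment(slice_)  # cut-off is 40% of 9.0 = 3.6
```

More advanced methods reviewed in such papers (adaptive thresholds, stochastic and learning-based approaches) address the cases where a single fixed fraction fails, e.g. low-contrast or heterogeneous uptake.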
Fedorov, Andriy; Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R
2016-01-01
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM(®)) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. 
Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.
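The SUV normalization step mentioned above is, in its body-weight form, a short formula: tissue activity concentration divided by the decay-corrected injected dose per gram of body weight. A sketch with hypothetical patient values, assuming an F-18 tracer and the usual unit tissue density convention:

```python
def suv_bw(voxel_bq_per_ml, injected_dose_bq, body_weight_g,
           minutes_since_injection, half_life_min=109.77):
    """Body-weight SUV: tissue activity concentration divided by the
    decay-corrected injected dose per gram of body weight.
    Default half-life is F-18 (~109.77 min); all inputs are hypothetical."""
    decayed_dose = injected_dose_bq * 2 ** (-minutes_since_injection / half_life_min)
    return voxel_bq_per_ml * body_weight_g / decayed_dose

# Hypothetical scan: 370 MBq injected, 70 kg patient, 60 min uptake time
s = suv_bw(voxel_bq_per_ml=12000.0, injected_dose_bq=370e6,
           body_weight_g=70000.0, minutes_since_injection=60.0)
```

In the DICOM workflow described above, the quantities feeding this formula come from standard attributes of the PET object, and the resulting mapping can be recorded with a Real World Value Mapping object.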
Implementation of an interactive liver surgery planning system
NASA Astrophysics Data System (ADS)
Wang, Luyao; Liu, Jingjing; Yuan, Rong; Gu, Shuguo; Yu, Long; Li, Zhitao; Li, Yanzhao; Li, Zhen; Xie, Qingguo; Hu, Daoyu
2011-03-01
Liver tumors, among the most widespread diseases, carry a very high mortality in China. To improve the success rate of liver surgery and the quality of life of such patients, we implement an interactive liver surgery planning system based on contrast-enhanced liver CT images. The system consists of five modules: pre-processing, segmentation, modeling, quantitative analysis and surgery simulation. The Graph Cuts method is utilized to automatically segment the liver, based on the anatomical prior that the liver is the largest organ and has nearly homogeneous gray values. The system supports users in building patient-specific liver segment and sub-segment models using interactive portal vein branch labeling, and in performing anatomical resection simulation. It also provides several tools to simulate atypical resection, including a resection plane, sphere and curved surface. To match actual surgical resections well and simulate the process flexibly, we extend our work to develop a virtual scalpel model and simulate the scalpel movement in the hepatic tissue using multi-plane continuous resection. In addition, the quantitative analysis module makes it possible to assess the risk of a liver surgery. The preliminary results show that the system has the potential to offer an accurate 3D delineation of the liver anatomy, as well as of the tumors' location in relation to vessels, and to facilitate liver resection surgeries. Furthermore, we are testing the system in a full-scale clinical trial.
Automated detection of videotaped neonatal seizures based on motion segmentation methods.
Karayiannis, Nicolaos B; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M
2006-07-01
This study was aimed at the development of a seizure detection system by training neural networks using quantitative motion information extracted by motion segmentation methods from short video recordings of infants monitored for seizures. The motion of the infants' body parts was quantified by temporal motion strength signals extracted from video recordings by motion segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by direct thresholding, by clustering of the pixel velocities, and by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The computational tools and procedures developed for automated seizure detection were tested and evaluated on 240 short video segments selected and labeled by physicians from a set of video recordings of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). The experimental study described in this paper provided the basis for selecting the most effective strategy for training neural networks to detect neonatal seizures as well as the decision scheme used for interpreting the responses of the trained neural networks. Depending on the decision scheme used for interpreting the responses of the trained neural networks, the best neural networks exhibited sensitivity above 90% or specificity above 90%. The best among the motion segmentation methods developed in this study produced quantitative features that constitute a reliable basis for detecting myoclonic and focal clonic neonatal seizures. The performance targets of this phase of the project may be achieved by combining the quantitative features described in this paper with those obtained by analyzing motion trajectory signals produced by motion tracking methods. A video system based upon automated analysis potentially offers a number of advantages. 
Infants who are at risk for seizures could be monitored continuously using relatively inexpensive and non-invasive video techniques that supplement direct observation by nursery personnel. This would represent a major advance in seizure surveillance and offers the possibility for earlier identification of potential neurological problems and subsequent intervention.
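The study quantifies motion with optical-flow-based strength signals. As a much cruder stand-in that illustrates the idea of a temporal motion-strength signal, one can measure the fraction of pixels that change between consecutive frames; the threshold and frames below are hypothetical:

```python
def motion_strength(frames, threshold=10):
    """Per-step motion strength: fraction of pixels whose intensity changes
    by more than `threshold` between consecutive frames (a crude stand-in
    for the optical-flow-based strength signals used in the study)."""
    signal = []
    for prev, cur in zip(frames, frames[1:]):
        moved = sum(
            1
            for prev_row, cur_row in zip(prev, cur)
            for p, c in zip(prev_row, cur_row)
            if abs(c - p) > threshold
        )
        total = len(cur) * len(cur[0])
        signal.append(moved / total)
    return signal

# Three hypothetical 2x2 grayscale frames: motion appears in the last step
f0 = [[10, 10], [10, 10]]
f1 = [[10, 10], [10, 10]]
f2 = [[10, 80], [90, 10]]
sig = motion_strength([f0, f1, f2])
```

The resulting 1-D signal is the kind of quantitative feature source that, in the study, is analyzed and fed to neural networks for seizure/non-seizure classification.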
Quantitative analyses for elucidating mechanisms of cell fate commitment in the mouse blastocyst
NASA Astrophysics Data System (ADS)
Saiz, Néstor; Kang, Minjung; Puliafito, Alberto; Schrode, Nadine; Xenopoulos, Panagiotis; Lou, Xinghua; Di Talia, Stefano; Hadjantonakis, Anna-Katerina
2015-03-01
In recent years we have witnessed a shift from qualitative image analysis towards higher resolution, quantitative analyses of imaging data in developmental biology. This shift has been fueled by technological advances in both imaging and analysis software. We have recently developed a tool for accurate, semi-automated nuclear segmentation of imaging data from early mouse embryos and embryonic stem cells. We have applied this software to the study of the first lineage decisions that take place during mouse development and established analysis pipelines for both static and time-lapse imaging experiments. In this paper we summarize the conclusions from these studies to illustrate how quantitative, single-cell level analysis of imaging data can unveil biological processes that cannot be revealed by traditional qualitative studies.
NASA Astrophysics Data System (ADS)
Pohl, L.; Kaiser, M.; Ketelhut, S.; Pereira, S.; Goycoolea, F.; Kemper, Björn
2016-03-01
Digital holographic microscopy (DHM) enables high-resolution non-destructive inspection of technical surfaces and minimally invasive, label-free live cell imaging. However, the analysis of confluent cell layers represents a challenge, as quantitative DHM phase images in this case do not provide sufficient information for image segmentation, determination of the cellular dry mass or calculation of the cell thickness. We present novel strategies for the analysis of confluent cell layers with quantitative DHM phase contrast utilizing a histogram-based evaluation procedure. The applicability of our approach is illustrated by quantification of drug-induced cell morphology changes, and the method is shown to reliably quantify global morphology changes of confluent cell layers.
Jung, Chanho; Kim, Changick
2014-08-01
Automatic segmentation of cell nuclei clusters is a key building block in systems for quantitative analysis of microscopy cell images. For that reason, it has received great attention over the last decade, and diverse automatic approaches to segment clustered nuclei, with varying levels of performance under different test conditions, have been proposed in the literature. To the best of our knowledge, however, there is so far no comparative study of these methods. This study is a first attempt to fill that research gap. More precisely, its purpose is to present an objective performance comparison of existing state-of-the-art segmentation methods. In particular, the impact of their accuracy on the classification of thyroid follicular lesions is also investigated quantitatively under the same experimental conditions, to evaluate the applicability of the methods. Thirteen different segmentation approaches are compared in terms of not only errors in nuclei segmentation and delineation, but also their impact on the performance of a system to classify thyroid follicular lesions, using different metrics (e.g., diagnostic accuracy, sensitivity, specificity, etc.). Extensive experiments were conducted on a total of 204 digitized thyroid biopsy specimens. Our study demonstrates that significant diagnostic errors can be avoided using more advanced segmentation approaches. We believe that this comprehensive comparative study serves as a reference point and guide for developers and practitioners choosing an appropriate automatic segmentation technique for building automated systems that classify follicular thyroid lesions. © 2014 International Society for Advancement of Cytometry.
Quantitative assessment of 12-lead ECG synthesis using CAVIAR.
Scherer, J A; Rubel, P; Fayn, J; Willems, J L
1992-01-01
The objective of this study is to assess the performance of patient-specific, segment-specific (PSSS) synthesis of QRST complexes using CAVIAR, a new method for the serial comparison of electrocardiograms and vectorcardiograms. A collection of 250 multi-lead recordings from the Common Standards for Quantitative Electrocardiography (CSE) diagnostic pilot study is employed. QRS and ST-T segments are independently synthesized using the PSSS algorithm so that the mean-squared error between the original and estimated waveforms is minimized. CAVIAR compares the recorded and synthesized QRS and ST-T segments and calculates the mean-quadratic deviation as a measure of error. The results indicate that the estimated QRS complexes are good representatives of their recorded counterparts, and that the integrity of the spatial information is maintained by the PSSS synthesis process. Analysis of the ST-T segments suggests that the deviations between recorded and synthesized waveforms are considerably greater than those associated with the QRS complexes. The poorer performance on the ST-T segments is attributed to magnitude normalization of the spatial loops, low-voltage passages, and noise interference. Using the mean-quadratic deviation and CAVIAR as methods of performance assessment, this study indicates that the PSSS-synthesis algorithm accurately maintains the signal information within the 12-lead electrocardiogram.
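The mean-quadratic deviation reported by CAVIAR can be read as a root-mean-square error between recorded and synthesized segments. A sketch under that reading, with hypothetical waveform samples:

```python
import math

def mean_quadratic_deviation(recorded, synthesized):
    """Root-mean-square deviation between a recorded waveform segment and
    its synthesized estimate (one plausible reading of CAVIAR's measure)."""
    n = len(recorded)
    return math.sqrt(sum((r - s) ** 2 for r, s in zip(recorded, synthesized)) / n)

# Hypothetical sampled QRS segment and its PSSS estimate (millivolts)
qrs_rec = [0.0, 0.2, 1.1, -0.6, 0.1]
qrs_syn = [0.0, 0.25, 1.05, -0.55, 0.1]
err = mean_quadratic_deviation(qrs_rec, qrs_syn)
```

On this reading, the study's finding is that this deviation stays small for QRS complexes but grows for ST-T segments, where normalization, low voltages and noise degrade the synthesis.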
Quantitative mouse brain phenotyping based on single and multispectral MR protocols
Badea, Alexandra; Gewalt, Sally; Avants, Brian B.; Cook, James J.; Johnson, G. Allan
2013-01-01
Sophisticated image analysis methods have been developed for the human brain, but such tools still need to be adapted and optimized for quantitative small-animal imaging. We propose a framework for quantitative anatomical phenotyping in mouse models of neurological and psychiatric conditions. The framework encompasses an atlas space, image acquisition protocols, and software tools to register images into this space. We show that a suite of segmentation tools (Avants, Epstein et al., 2008) designed for human neuroimaging can be incorporated into a pipeline for segmenting mouse brain images acquired with multispectral magnetic resonance imaging (MR) protocols. We present a flexible approach for segmenting such hyperimages, optimizing registration, and identifying optimal combinations of image channels for particular structures. Brain imaging with T1, T2* and T2 contrasts yielded accuracy of around 83% for the hippocampus and caudate putamen (Hc and CPu), but only 54% for white matter tracts and 44% for the ventricles. The addition of diffusion tensor parameter images improved accuracy for large gray matter structures (by >5%), white matter (10%), and ventricles (15%). The use of Markov random field segmentation further improved overall accuracy in the C57BL/6 strain by 6%, so that Dice coefficients reached 93% for Hc and CPu, 79% for white matter, 68% for ventricles, and 80% for substantia nigra. We demonstrate the segmentation pipeline for the widely used C57BL/6 strain and two test strains (BXD29, APP/TTA). This approach appears promising for characterizing temporal changes in mouse models of human neurological and psychiatric conditions, and may provide anatomical constraints for other preclinical imaging, e.g. fMRI and molecular imaging. This is the first demonstration that multiple MR imaging modalities combined with multivariate segmentation methods lead to significant improvements in anatomical segmentation in the mouse brain. PMID:22836174
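The Dice coefficients quoted above follow from a one-line overlap formula between an automatic and a reference labeling. A minimal sketch on hypothetical flattened label masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary label volumes,
    given as flattened iterables of 0/1 voxels."""
    a = list(mask_a)
    b = list(mask_b)
    inter = sum(x and y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

# Hypothetical flattened masks: automatic vs. reference hippocampus labels
auto = [0, 1, 1, 1, 0, 1, 0, 0]
ref = [0, 1, 1, 0, 0, 1, 1, 0]
d = dice(auto, ref)  # 2*3 / (4 + 4) = 0.75
```

A Dice value of 1 means perfect overlap; the 93% reported above for Hc and CPu indicates near-complete agreement with the reference labels.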
A software tool for automatic classification and segmentation of 2D/3D medical images
NASA Astrophysics Data System (ADS)
Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur
2013-02-01
Modern medical diagnosis utilizes techniques for visualization of human internal organs (CT, MRI) or of their metabolism (PET). However, evaluation of the acquired images by human experts is usually subjective and only qualitative. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform e.g. attenuation compensation, motion detection, and correction of the partial volume effect in PET images acquired with PET/MR scanners. This article briefly presents the MaZda software package, which supports 2D and 3D medical image analysis aimed at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes, combined with various classification, visualization and segmentation tools. Examples of MaZda's application in medical studies are also provided.
Quantification of EEG reactivity in comatose patients.
Hermans, Mathilde C; Westover, M Brandon; van Putten, Michel J A M; Hirsch, Lawrence J; Gaspard, Nicolas
2016-01-01
EEG reactivity is an important predictor of outcome in comatose patients. However, visual analysis of reactivity is prone to subjectivity and may benefit from quantitative approaches. In EEG segments recorded during reactivity testing in 59 comatose patients, 13 quantitative EEG parameters were used to compare the spectral characteristics of 1-minute segments before and after the onset of stimulation (spectral temporal symmetry). Reactivity was quantified with probability values estimated using combinations of these parameters. The accuracy of probability values as a reactivity classifier was evaluated against the consensus assessment of three expert clinical electroencephalographers using visual analysis. The binary classifier assessing spectral temporal symmetry in four frequency bands (delta, theta, alpha and beta) showed best accuracy (Median AUC: 0.95) and was accompanied by substantial agreement with the individual opinion of experts (Gwet's AC1: 65-70%), at least as good as inter-expert agreement (AC1: 55%). Probability values also reflected the degree of reactivity, as measured by the inter-experts' agreement regarding reactivity for each individual case. Automated quantitative EEG approaches based on probabilistic description of spectral temporal symmetry reliably quantify EEG reactivity. Quantitative EEG may be useful for evaluating reactivity in comatose patients, offering increased objectivity. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
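The spectral temporal symmetry idea, comparing band power before and after stimulation, can be sketched with a plain DFT. The sampling rate, signals and ratio statistic below are illustrative only, not the study's 13 parameters or its probabilistic classifier:

```python
import math

def band_power(x, fs, lo, hi):
    """Sum DFT power of signal x (sampled at fs Hz) over frequencies in [lo, hi)."""
    n = len(x)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f < hi:
            re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(x))
            im = sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(x))
            power += (re * re + im * im) / n
    return power

def reactivity_ratio(pre, post, fs, band):
    """Post/pre power ratio in one band; a value far from 1 indicates a
    spectral change after stimulation, i.e. possible reactivity."""
    lo, hi = band
    return band_power(post, fs, lo, hi) / band_power(pre, fs, lo, hi)

fs = 64  # Hz (hypothetical)
pre = [math.sin(2 * math.pi * 2 * i / fs) for i in range(fs)]  # 2 Hz delta rhythm
post = [0.3 * v for v in pre]                                  # attenuated after stimulus
ratio = reactivity_ratio(pre, post, fs, band=(1, 4))           # delta band
```

In the study, comparisons of this kind across the delta, theta, alpha and beta bands are combined into probability values rather than a single ratio.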
Nucleus detection using gradient orientation information and linear least squares regression
NASA Astrophysics Data System (ADS)
Kwak, Jin Tae; Hewitt, Stephen M.; Xu, Sheng; Pinto, Peter A.; Wood, Bradford J.
2015-03-01
Computerized histopathology image analysis enables an objective, efficient, and quantitative assessment of digitized histopathology images. Such analysis often requires accurate and efficient detection and segmentation of histological structures such as glands, cells and nuclei. The segmentation is used to characterize tissue specimens and to determine disease status or outcomes. The segmentation of nuclei, in particular, is challenging due to overlapping or clumped nuclei. Here, we propose a nuclei seed detection method for individual and overlapping nuclei that utilizes gradient orientation (direction) information. The initial nuclei segmentation is provided by a multiview boosting approach. The angle of the gradient orientation is computed and traced along the nuclear boundaries. By taking the first derivative of this angle, high-concavity points (junctions) are discovered. False junctions are identified and removed by adopting a greedy search scheme with a goodness-of-fit statistic in the linear least squares sense. The junctions then determine boundary segments. Partial boundary segments belonging to the same nucleus are identified and combined by examining the overlapping area between them. Using the final set of boundary segments, we generate the list of seeds in tissue images. The method achieved an overall precision of 0.89 and a recall of 0.88 in comparison to manual segmentation.
Automated detection of videotaped neonatal seizures of epileptic origin.
Karayiannis, Nicolaos B; Xiong, Yaohua; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M
2006-06-01
This study aimed at the development of a seizure-detection system by training neural networks with quantitative motion information extracted from short video segments of neonatal seizures of the myoclonic and focal clonic types and random infant movements. The motion of the infants' body parts was quantified by temporal motion-strength signals extracted from video segments by motion-segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The motion of the infants' body parts also was quantified by temporal motion-trajectory signals extracted from video recordings by robust motion trackers based on block-motion models. These motion trackers were developed to adjust autonomously to illumination and contrast changes that may occur during the video-frame sequence. Video segments were represented by quantitative features obtained by analyzing motion-strength and motion-trajectory signals in both the time and frequency domains. Seizure recognition was performed by conventional feed-forward neural networks, quantum neural networks, and cosine radial basis function neural networks, which were trained to detect neonatal seizures of the myoclonic and focal clonic types and to distinguish them from random infant movements. The computational tools and procedures developed for automated seizure detection were evaluated on a set of 240 video segments of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). Regardless of the decision scheme used for interpreting the responses of the trained neural networks, all the neural network models exhibited sensitivity and specificity>90%. 
For one of the decision schemes proposed for interpreting the responses of the trained neural networks, the majority of the trained neural-network models exhibited sensitivity >90% and specificity >95%. In particular, cosine radial basis function neural networks achieved the performance targets of this phase of the project (i.e., sensitivity >95% and specificity >95%). The best among the motion segmentation and tracking methods developed in this study produced quantitative features that constitute a reliable basis for detecting neonatal seizures. The performance targets of this phase of the project were achieved by combining the quantitative features obtained by analyzing motion-strength signals with those produced by analyzing motion-trajectory signals. The computational procedures and tools developed in this study to perform off-line analysis of short video segments will be used in the next phase of this project, which involves the integration of these procedures and tools into a system that can process and analyze long video recordings of infants monitored for seizures in real time.
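The abstract does not give an explicit formula for the motion-strength signal; a minimal numpy sketch of one plausible formulation (mean optical-flow magnitude per frame, with hypothetical names) is:

```python
import numpy as np

def motion_strength(flow_fields, area_mask=None):
    """Temporal motion-strength signal: mean flow magnitude per frame.

    flow_fields: array (T, H, W, 2) of per-pixel velocities (vx, vy).
    area_mask: optional (H, W) boolean mask restricting the computation
    to the moving body parts (assumed given here; in the study it is
    obtained by clustering affine-model motion parameters).
    """
    T = flow_fields.shape[0]
    signal = np.empty(T)
    for t in range(T):
        mag = np.hypot(flow_fields[t, ..., 0], flow_fields[t, ..., 1])
        if area_mask is not None:
            mag = mag[area_mask]
        signal[t] = mag.mean()
    return signal

# Synthetic example: 4 frames of uniform rightward flow with increasing speed.
flows = np.zeros((4, 8, 8, 2))
for t in range(4):
    flows[t, ..., 0] = t  # vx = 0, 1, 2, 3
print(motion_strength(flows))  # [0. 1. 2. 3.]
```

Time- and frequency-domain features (means, peak frequencies, and so on) would then be computed from this one signal per video segment.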
Irwin, Gareth; Kerwin, David G; Williams, Genevieve; Van Emmerik, Richard E A; Newell, Karl M; Hamill, Joseph
2018-06-18
A case study visualisation approach to examining the coordination and variability of multiple interacting segments is presented using a whole-body gymnastic skill as the task example. One elite male gymnast performed 10 trials of 10 longswings whilst three-dimensional locations of joint centres were tracked using a motion analysis system. Segment angles were used to define coupling between the arms and trunk, trunk and thighs and thighs and shanks. Rectified continuous relative phase profiles for each interacting couple for 80 longswings were produced. Graphical representations of coordination couplings are presented that include the traditional single coupling, followed by the relational dynamics of two couplings and finally three couplings simultaneously plotted. This method highlights the power of visualisation of movement dynamics and identifies properties of the global interacting segmental couplings that a more formal analysis may not reveal. Visualisation precedes and informs the appropriate qualitative and quantitative analysis of the dynamics.
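Continuous relative phase between two segment-angle couplings can be sketched with a Hilbert-transform phase estimate; this is one common CRP convention, not necessarily the exact normalisation used in the study:

```python
import numpy as np
from scipy.signal import hilbert

def rectified_crp(theta1, theta2):
    """Rectified continuous relative phase between two segment angles.

    Phase angles come from the analytic (Hilbert) signal of each centred
    angle time series; CRP = |phi1 - phi2| in degrees.
    """
    phi1 = np.unwrap(np.angle(hilbert(theta1 - theta1.mean())))
    phi2 = np.unwrap(np.angle(hilbert(theta2 - theta2.mean())))
    return np.abs(np.degrees(phi1 - phi2))

t = np.linspace(0, 4 * np.pi, 400)
arm_trunk = np.sin(t)                 # toy coupling angle 1
trunk_thigh = np.sin(t - np.pi / 2)   # coupling angle 2 lags by 90 degrees
crp = rectified_crp(arm_trunk, trunk_thigh)
# mid-series values sit near 90 (finite-window edge effects distort the ends)
print(round(float(np.median(crp))))
```

Plotting two or three such CRP profiles against each other yields the relational-dynamics visualisations described above.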
Consistent interactive segmentation of pulmonary ground glass nodules identified in CT studies
NASA Astrophysics Data System (ADS)
Zhang, Li; Fang, Ming; Naidich, David P.; Novak, Carol L.
2004-05-01
Ground glass nodules (GGNs) have proved especially problematic in lung cancer diagnosis, as, despite frequently being malignant, they characteristically have extremely slow rates of growth. This problem is further magnified by the small size of many of these lesions now being routinely detected following the introduction of multislice CT scanners capable of acquiring contiguous high-resolution 1 to 1.25 mm sections throughout the thorax in a single breathhold period. Although segmentation of solid nodules can be used clinically to determine volume doubling times quantitatively, reliable methods for segmentation of pure ground glass nodules have yet to be introduced. Our purpose is to evaluate a newly developed computer-based segmentation method for rapid and reproducible measurements of pure ground glass nodules. Twenty-three pure or mixed ground glass nodules were identified in a total of 8 patients by a radiologist and subsequently segmented by our computer-based method using Markov random field and shape analysis. The computer-based segmentation was initialized by a click point. Methodological consistency was assessed using the overlap ratio between 3 segmentations initialized by 3 different click points for each nodule. The 95% confidence interval on the mean of the overlap ratios proved to be [0.984, 0.998]. The computer-based method failed on two nodules that were difficult to segment even manually, either due to especially low contrast or markedly irregular margins. While achieving consistent manual segmentation of ground glass nodules has proven problematic, most often due to indistinct boundaries and interobserver variability, our proposed method introduces a powerful new tool for obtaining reproducible quantitative measurements of these lesions. It is our intention to further document the value of this approach with a still larger set of ground glass nodules.
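As a rough illustration of the consistency measure, an overlap ratio between segmentations started from different click points might be computed as intersection-over-union (the abstract does not state the exact formula used):

```python
import numpy as np

def overlap_ratio(mask_a, mask_b):
    """Intersection-over-union of two binary segmentations; one plausible
    'overlap ratio', assumed here for illustration."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def consistency(masks):
    """Mean pairwise overlap over segmentations from different click points."""
    pairs = [(i, j) for i in range(len(masks)) for j in range(i + 1, len(masks))]
    return np.mean([overlap_ratio(masks[i], masks[j]) for i, j in pairs])

a = np.zeros((10, 10), bool); a[2:8, 2:8] = True   # 36-pixel nodule mask
b = np.zeros((10, 10), bool); b[2:8, 2:7] = True   # 30-pixel mask, subset of a
print(round(overlap_ratio(a, b), 3))   # 30/36 = 0.833
```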
Leveraging unsupervised training sets for multi-scale compartmentalization in renal pathology
NASA Astrophysics Data System (ADS)
Lutnick, Brendon; Tomaszewski, John E.; Sarder, Pinaki
2017-03-01
Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time consuming and often error-prone. Application of computer vision segmentation algorithms to histopathological image analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically relevant structures from histopathological images at multiple scales. Segmentation is achieved by solving an energy optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model Hamiltonian, adopted from theoretical physics, where it models interacting electron spins. Pixel relationships (modeled as edges) are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine to apply the segmentation over the full image. Accurate segmentations of images with as many as 10⁶ pixels can be completed in only 5 s, allowing for practical multi-scale visualization. To establish clinical potential, we employed our method on renal biopsies to quantitatively visualize, for the first time, scale-variant compartments of heterogeneous intra- and extraglomerular structures simultaneously. Implications of the utility of our method extend to fields such as oncology, genomics, and non-biological problems.
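A minimal sketch of the two ingredients named above, the Potts Hamiltonian and the Cantor pairing function, under the assumption of a 4-connected pixel grid (the paper's edge weighting may differ):

```python
import numpy as np

def potts_energy(labels, J=1.0):
    """Potts-model Hamiltonian on a 4-connected pixel grid:
    H = -J * sum over neighbouring pairs of delta(s_i, s_j).
    Lower energy means more agreement between neighbouring labels."""
    same_h = (labels[:, :-1] == labels[:, 1:]).sum()
    same_v = (labels[:-1, :] == labels[1:, :]).sum()
    return -J * (same_h + same_v)

def cantor_pair(a, b):
    """Cantor pairing function: a unique integer for each (a, b) pair,
    usable as a compact key when merging graph nodes."""
    return (a + b) * (a + b + 1) // 2 + b

uniform = np.zeros((4, 4), int)                    # one segment everywhere
checker = np.indices((4, 4)).sum(axis=0) % 2       # maximally disagreeing
print(potts_energy(uniform), potts_energy(checker))  # -24.0 -0.0
print(cantor_pair(3, 5))  # 41
```

Minimizing this energy over label assignments, with edge terms weighted by pixel similarity, is what drives the clustering toward coherent segments.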
Feasibility of high-resolution quantitative perfusion analysis in patients with heart failure.
Sammut, Eva; Zarinabad, Niloufar; Wesolowski, Roman; Morton, Geraint; Chen, Zhong; Sohal, Manav; Carr-White, Gerry; Razavi, Reza; Chiribiri, Amedeo
2015-02-12
Cardiac magnetic resonance (CMR) is playing an expanding role in the assessment of patients with heart failure (HF). The assessment of myocardial perfusion status in HF can be challenging due to left ventricular (LV) remodelling and wall thinning, coexistent scar and respiratory artefacts. The aim of this study was to assess the feasibility of quantitative CMR myocardial perfusion analysis in patients with HF. A group of 58 patients with heart failure (HF; left ventricular ejection fraction, LVEF ≤ 50%) and 33 patients with normal LVEF (LVEF >50%), referred for suspected coronary artery disease, were studied. All subjects underwent quantitative first-pass stress perfusion imaging using adenosine according to standard acquisition protocols. The feasibility of quantitative perfusion analysis was then assessed using high-resolution 3-T k-t perfusion imaging and voxel-wise Fermi deconvolution. 30/58 (52%) subjects in the HF group had underlying ischaemic aetiology. Perfusion abnormalities were seen amongst patients with ischaemic HF and patients with normal LV function. No regional perfusion defect was observed in the non-ischaemic HF group. Good agreement was found between visual and quantitative analysis across all groups. Absolute stress perfusion rate, myocardial perfusion reserve (MPR) and endocardial-epicardial MPR ratio identified areas with abnormal perfusion in the ischaemic HF group (p = 0.02; p = 0.04; p = 0.02, respectively). In the Normal LV group, MPR and endocardial-epicardial MPR ratio were able to distinguish between normal and abnormal segments (p = 0.04; p = 0.02 respectively). No significant differences in absolute stress perfusion rate or MPR were observed when comparing visually normal segments amongst groups. Our results demonstrate the feasibility of high-resolution voxel-wise perfusion assessment in patients with HF.
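Fermi deconvolution models the tissue impulse response with a Fermi function; the sketch below shows one common parameterization and the forward model (tissue curve = arterial input convolved with the impulse response), with illustrative parameter names rather than the study's notation:

```python
import numpy as np

def fermi(t, F, k, t0):
    """Fermi-function impulse response R(t) = F / (1 + exp(k*(t - t0))).
    R(0) approximates myocardial blood flow; k and t0 shape the decay."""
    return F / (1.0 + np.exp(k * (t - t0)))

def tissue_curve(aif, impulse, dt):
    """Forward model: tissue signal = AIF convolved with impulse response."""
    return dt * np.convolve(aif, impulse)[: len(aif)]

t = np.arange(0, 30, 0.5)                  # seconds, illustrative sampling
aif = np.exp(-((t - 5) ** 2) / 4)          # toy arterial input function
r = fermi(t, F=1.0, k=0.8, t0=4.0)
c = tissue_curve(aif, r, dt=0.5)
print(round(float(r[0]), 3))  # 1/(1 + e^-3.2) = 0.961
```

Quantitative analysis runs this in reverse: the Fermi parameters are fitted voxel-wise so that the modelled tissue curve matches the measured one, and the fitted amplitude gives the perfusion estimate.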
Jin, Cheng; Feng, Jianjiang; Wang, Lei; Yu, Heng; Liu, Jiang; Lu, Jiwen; Zhou, Jie
2018-05-01
In this paper, we present an approach for left atrial appendage (LAA) multi-phase fast segmentation and quantitative assisted diagnosis of atrial fibrillation (AF) based on 4D-CT data. We take full advantage of the temporal dimension information to segment the living, flailed LAA based on a parametric max-flow method and graph-cut approach to build a 3-D model of each phase. To assist the diagnosis of AF, we calculate the volumes of the 3-D models and then generate a "volume-phase" curve to calculate the important dynamic metrics: ejection fraction, filling flux, and emptying flux of the LAA's blood by volume. This approach demonstrates more precise results than the conventional approaches that calculate metrics by area, and allows for the quick analysis of LAA-volume pattern changes in a cardiac cycle. It may also provide insight into the individual differences in the lesions of the LAA. Furthermore, we apply support vector machines (SVMs) to achieve a quantitative auto-diagnosis of the AF by exploiting seven features from volume change ratios of the LAA, and perform multivariate logistic regression analysis for the risk of LAA thrombosis. The 100 cases utilized in this research were taken from the Philips 256-iCT. The experimental results demonstrate that our approach can construct the 3-D LAA geometries robustly compared to manual annotations, and reasonably infer that the LAA undergoes filling, emptying and re-filling, re-emptying in a cardiac cycle. This research provides a potential for exploring various physiological functions of the LAA and quantitatively estimating the risk of stroke in patients with AF. Copyright © 2018 Elsevier Ltd. All rights reserved.
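The dynamic metrics from a volume-phase curve can be sketched as follows, assuming the standard volumetric definitions (the paper may normalise the fluxes differently):

```python
import numpy as np

def laa_metrics(volumes):
    """Dynamic metrics from a 'volume-phase' curve of LAA volumes (mL)
    sampled across the cardiac phases."""
    v = np.asarray(volumes, float)
    vmax, vmin = v.max(), v.min()
    ef = (vmax - vmin) / vmax        # ejection fraction by volume
    dv = np.diff(v)
    filling = dv[dv > 0].sum()       # total volume gained over the cycle
    emptying = -dv[dv < 0].sum()     # total volume lost over the cycle
    return ef, filling, emptying

# Toy 10-phase curve showing fill, empty, re-fill, re-empty in one cycle
vols = [6, 8, 10, 7, 5, 6, 9, 8, 6, 5]
ef, fill, empty = laa_metrics(vols)
print(round(float(ef), 2), float(fill), float(empty))  # 0.5 8.0 9.0
```

Because re-filling and re-emptying both contribute, the summed fluxes can exceed the simple max-minus-min volume difference, which is exactly the pattern the curve is meant to expose.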
Kiryu, Shigeru; Dodanuki, Keiichi; Takao, Hidemasa; Watanabe, Makoto; Inoue, Yusuke; Takazoe, Masakazu; Sahara, Rikisaburo; Unuma, Kiyohito; Ohtomo, Kuni
2009-04-01
To investigate the application of free-breathing diffusion-weighted MR imaging (DWI) to the assessment of disease activity in Crohn's disease. Thirty-one patients with Crohn's disease were investigated using free-breathing DWI without special patient preparation or IV or intraluminal contrast agent. The bowel was divided into seven segments, and disease activity was assessed visually on DWI. For quantitative analysis, the apparent diffusion coefficient (ADC) was measured in each segment. The findings of a conventional barium study or surgery were regarded as the gold standard for evaluating the diagnostic ability of DWI to assess disease activity. Upon visual assessment, the sensitivity, specificity, and accuracy for the detection of disease-active segments were 86.0, 81.4, and 82.4%, respectively. In the quantitative assessment, the ADC value in the disease-active area was lower than that in the disease-inactive area in the small and large bowels (1.61 ± 0.44 × 10⁻³ mm²/s versus 2.56 ± 0.51 × 10⁻³ mm²/s in small bowel and 1.52 ± 0.43 × 10⁻³ mm²/s versus 2.31 ± 0.59 × 10⁻³ mm²/s in large bowel, respectively, P<0.001). Free-breathing DWI is useful in the assessment of Crohn's disease. The accuracy of DWI is high in evaluating disease activity, especially in the small bowel, and the ADC may facilitate quantitative analysis of disease activity.
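The ADC itself follows from the mono-exponential DWI signal model; a two-point sketch with illustrative numbers (the study's b-values are not stated in this abstract):

```python
import numpy as np

def adc(s0, sb, b):
    """Apparent diffusion coefficient from a two-point DWI acquisition:
    S_b = S_0 * exp(-b * ADC)  =>  ADC = ln(S_0 / S_b) / b.
    b in s/mm^2, ADC in mm^2/s."""
    return np.log(s0 / sb) / b

# A signal drop from 100 to 45 at b = 500 s/mm^2 (illustrative values):
print(round(adc(100.0, 45.0, 500.0) * 1e3, 2), "x10^-3 mm^2/s")  # 1.6 x10^-3 mm^2/s
```

Restricted diffusion in inflamed, disease-active bowel attenuates the high-b signal less, which is why active segments show the lower ADC values reported above.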
Ueda, Kazuhiro; Tanaka, Toshiki; Li, Tao-Sheng; Tanaka, Nobuyuki; Hamano, Kimikazu
2009-03-01
The prediction of pulmonary functional reserve is mandatory in therapeutic decision-making for patients with resectable lung cancer, especially those with underlying lung disease. Volumetric analysis in combination with densitometric analysis of the affected lung lobe or segment with quantitative computed tomography (CT) helps to identify residual pulmonary function, although the utility of this modality needs investigation. The subjects of this prospective study were 30 patients with resectable lung cancer. A three-dimensional CT lung model was created with voxels representing normal lung attenuation (-600 to -910 Hounsfield units). Residual pulmonary function was predicted by drawing a boundary line between the lung to be preserved and that to be resected, directly on the lung model. The predicted values were correlated with the postoperative measured values. The predicted and measured values corresponded well (r=0.89, p<0.001). Although the predicted values corresponded with values predicted by simple calculation using a segment-counting method (r=0.98), there were two outliers whose pulmonary functional reserves were predicted more accurately by CT than by segment counting. The measured pulmonary functional reserves were significantly higher than the predicted values in patients with extensive emphysematous areas (<-910 Hounsfield units), but not in patients with chronic obstructive pulmonary disease. Quantitative CT yielded accurate prediction of functional reserve after lung cancer surgery and helped to identify patients whose functional reserves are likely to be underestimated. Hence, this modality should be utilized for patients with marginal pulmonary function.
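A simplified sketch of the volumetric idea: count voxels in the normal-attenuation window and scale preoperative function by the preserved fraction. Inputs and the FEV1 scaling are hypothetical illustrations, not the authors' exact implementation:

```python
import numpy as np

def predicted_reserve(hu_volume, resect_mask, preop_fev1,
                      lo=-910, hi=-600):
    """Predict postoperative function from a CT attenuation volume.

    Voxels with attenuation in [lo, hi] HU count as functioning lung;
    the fraction preserved after resection scales preoperative FEV1.
    """
    functioning = (hu_volume >= lo) & (hu_volume <= hi)
    total = functioning.sum()
    kept = (functioning & ~resect_mask).sum()
    return preop_fev1 * kept / total

hu = np.full((4, 4, 4), -700)       # all voxels in the normal-lung window
hu[0] = -950                        # one emphysematous slab, not counted
resect = np.zeros((4, 4, 4), bool)
resect[1] = True                    # plan to resect one functioning slab
print(round(predicted_reserve(hu, resect, preop_fev1=2.4), 2))  # 1.6
```

Unlike segment counting, this weighting automatically discounts emphysematous regions (below -910 HU), which is why the two methods can disagree in patients with extensive emphysema.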
Padma, A; Sukanesh, R
2013-01-01
A computer software system is designed for the segmentation and classification of benign and malignant tumour slices in brain computed tomography (CT) images. This paper presents a method to find and select the dominant run-length and co-occurrence texture features of the region of interest (ROI) of the tumour region of each slice, segmented by fuzzy c-means (FCM) clustering, and to evaluate the performance of support vector machine (SVM)-based classifiers in classifying benign and malignant tumour slices. Two hundred and six tumour-confirmed CT slices are considered in this study. A total of 17 texture features are extracted by a feature extraction procedure, and six features are selected using principal component analysis (PCA). The SVM-based classifier was constructed with the selected features, and the segmentation results were compared with ground truth labelled by an experienced radiologist. Quantitative analysis between ground truth and segmented tumour is presented in terms of segmentation accuracy, segmentation error, and overlap similarity measures such as the Jaccard index. The classification performance of the SVM-based classifier with the same selected features is also evaluated using a 10-fold cross-validation method. The results show that some newly identified texture features make an important contribution to classifying benign and malignant tumour slices efficiently and accurately with less computational time. The experimental results showed that the proposed system achieves high segmentation and classification accuracy, as measured by the Jaccard index, sensitivity, and specificity.
Paproki, A; Engstrom, C; Chandra, S S; Neubert, A; Fripp, J; Crozier, S
2014-09-01
To validate an automatic scheme for the segmentation and quantitative analysis of the medial meniscus (MM) and lateral meniscus (LM) in magnetic resonance (MR) images of the knee. We analysed sagittal water-excited double-echo steady-state MR images of the knee from a subset of the Osteoarthritis Initiative (OAI) cohort. The MM and LM were automatically segmented in the MR images based on a deformable model approach. Quantitative parameters including volume, subluxation and tibial-coverage were automatically calculated for comparison (Wilcoxon tests) between knees with variable radiographic osteoarthritis (rOA), medial and lateral joint space narrowing (mJSN, lJSN) and pain. Automatic segmentations and estimated parameters were evaluated for accuracy using manual delineations of the menisci in 88 pathological knee MR examinations at baseline and 12-month time points. The median (95% confidence interval (CI)) Dice similarity index (DSI) (2|Auto ∩ Manual| / (|Auto| + |Manual|) × 100) between manual and automated segmentations for the MM and LM volumes was 78.3% (75.0-78.7) and 83.9% (82.1-83.9) at baseline, and 75.3% (72.8-76.9) and 83.0% (81.6-83.5) at 12 months. Pearson coefficients between automatic and manual segmentation parameters ranged from r = 0.70 to r = 0.92. MM in rOA/mJSN knees had significantly greater subluxation and smaller tibial-coverage than no-rOA/no-mJSN knees. LM in rOA knees had significantly greater volumes and tibial-coverage than no-rOA knees. Our automated method successfully segmented the menisci in normal and osteoarthritic knee MR images and detected meaningful morphological differences with respect to rOA and joint space narrowing (JSN). Our approach will facilitate analyses of the menisci in prospective MR cohorts such as the OAI for investigations into pathophysiological changes occurring in early osteoarthritis (OA) development. Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
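The DSI as defined above is straightforward to compute from binary masks:

```python
import numpy as np

def dice_percent(auto, manual):
    """Dice similarity index as defined in the study:
    DSI = 2*|Auto ∩ Manual| / (|Auto| + |Manual|) * 100."""
    a, m = auto.astype(bool), manual.astype(bool)
    inter = np.logical_and(a, m).sum()
    return 200.0 * inter / (a.sum() + m.sum())

auto = np.zeros((10, 10), bool);   auto[2:8, 2:8] = True    # 36 px
manual = np.zeros((10, 10), bool); manual[3:9, 2:8] = True  # 36 px
# overlap covers rows 3..7 -> 30 px: DSI = 2*30/(36+36)*100 = 83.3
print(round(dice_percent(auto, manual), 1))  # 83.3
```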
NASA Astrophysics Data System (ADS)
Gu, Xiao-Yue; Li, Lin; Yin, Peng-Fei; Yun, Ming-Kai; Chai, Pei; Huang, Xian-Chao; Sun, Xiao-Li; Wei, Long
2015-10-01
The Positron Emission Mammography imaging system (PEMi) provides a novel nuclear diagnosis method dedicated for breast imaging. With a better resolution than whole body PET, PEMi can detect millimeter-sized breast tumors. To address the requirement of semi-quantitative analysis with a radiotracer concentration map of the breast, a new attenuation correction method based on three-dimensional seeded region growing image segmentation (3DSRG-AC) has been developed. The method gives a 3D connected region as the segmentation result instead of image slices. The continuity property of the segmentation result makes this new method free of activity variation of breast tissues. Threshold selection is the key step in the segmentation method. The first valley in the grey-level histogram of the reconstruction image is set as the lower threshold, which works well in clinical application. Results show that attenuation correction for PEMi improves the image quality and the quantitative accuracy of radioactivity distribution determination. Attenuation correction also improves the probability of detecting small and early breast tumors. Supported by Knowledge Innovation Project of The Chinese Academy of Sciences (KJCX2-EW-N06)
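The first-valley threshold rule can be sketched directly on a grey-level histogram, assuming the histogram is already smooth (real data would need smoothing first):

```python
import numpy as np

def first_valley_threshold(hist):
    """Return the index of the first local minimum after the first peak
    of a grey-level histogram -- the lower threshold that seeds the
    3-D region growing."""
    peak = None
    for i in range(1, len(hist) - 1):
        if peak is None and hist[i] >= hist[i - 1] and hist[i] > hist[i + 1]:
            peak = i                     # first peak found
        elif peak is not None and hist[i] <= hist[i - 1] and hist[i] < hist[i + 1]:
            return i                     # first valley after it
    return None

# Toy bimodal histogram: background peak at bin 2, tissue peak at bin 7
h = np.array([1, 4, 9, 5, 2, 3, 6, 8, 4, 1])
print(first_valley_threshold(h))  # 4
```

Voxels above this threshold and connected to the seed form the breast region used for attenuation correction.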
A spatiotemporal-based scheme for efficient registration-based segmentation of thoracic 4-D MRI.
Yang, Y; Van Reeth, E; Poh, C L; Tan, C H; Tham, I W K
2014-05-01
Dynamic three-dimensional (3-D) (four-dimensional, 4-D) magnetic resonance (MR) imaging is gaining importance in the study of pulmonary motion for respiratory diseases and pulmonary tumor motion for radiotherapy. To perform quantitative analysis using 4-D MR images, segmentation of anatomical structures such as the lung and pulmonary tumor is required. Manual segmentation of entire thoracic 4-D MRI data, which typically contains many 3-D volumes acquired over several breathing cycles, is extremely tedious, time consuming, and suffers from high user variability. This motivates the development of new automated segmentation schemes for 4-D MRI data. Registration-based segmentation, which uses automatic registration methods for segmentation, has been shown to be an accurate method for segmenting structures in 4-D data series. However, directly applying registration-based segmentation to 4-D MRI series is inefficient. Here we propose an automated 4-D registration-based segmentation scheme based on spatiotemporal information for the segmentation of thoracic 4-D MR lung images. The proposed scheme reduced computation by up to 95% while achieving segmentation accuracy comparable to directly applying registration-based segmentation to the 4-D dataset. The scheme facilitates rapid 3-D/4-D visualization of lung and tumor motion and potentially the tracking of the tumor during radiation delivery.
Joint Multi-Leaf Segmentation, Alignment, and Tracking for Fluorescence Plant Videos.
Yin, Xi; Liu, Xiaoming; Chen, Jin; Kramer, David M
2018-06-01
This paper proposes a novel framework for fluorescence plant video processing. The plant research community is interested in the leaf-level photosynthetic analysis within a plant. A prerequisite for such analysis is to segment all leaves, estimate their structures, and track them over time. We identify this as a joint multi-leaf segmentation, alignment, and tracking problem. First, leaf segmentation and alignment are applied on the last frame of a plant video to find a number of well-aligned leaf candidates. Second, leaf tracking is applied on the remaining frames with leaf candidate transformation from the previous frame. We form two optimization problems with shared terms in their objective functions for leaf alignment and tracking respectively. A quantitative evaluation framework is formulated to evaluate the performance of our algorithm with four metrics. Two models are learned to predict the alignment accuracy and detect tracking failure respectively in order to provide guidance for subsequent plant biology analysis. The limitation of our algorithm is also studied. Experimental results show the effectiveness, efficiency, and robustness of the proposed method.
Damman, Peter; Holmvang, Lene; Tijssen, Jan G P; Lagerqvist, Bo; Clayton, Tim C; Pocock, Stuart J; Windhausen, Fons; Hirsch, Alexander; Fox, Keith A A; Wallentin, Lars; de Winter, Robbert J
2012-01-01
The aim of this study was to evaluate the independent prognostic value of qualitative and quantitative admission electrocardiographic (ECG) analysis regarding long-term outcomes after non-ST-segment elevation acute coronary syndromes (NSTE-ACS). From the Fragmin and Fast Revascularization During Instability in Coronary Artery Disease (FRISC II), Invasive Versus Conservative Treatment in Unstable Coronary Syndromes (ICTUS), and Randomized Intervention Trial of Unstable Angina 3 (RITA-3) patient-pooled database, 5,420 patients with NSTE-ACS with qualitative ECG data, of whom 2,901 had quantitative data, were included in this analysis. The main outcome was 5-year cardiovascular death or myocardial infarction. Hazard ratios (HRs) were calculated with Cox regression models, and adjustments were made for established outcome predictors. The additional discriminative value was assessed with the category-less net reclassification improvement and integrated discrimination improvement indexes. In the 5,420 patients, the presence of ST-segment depression (≥1 mm; adjusted HR 1.43, 95% confidence interval [CI] 1.25 to 1.63) and left bundle branch block (adjusted HR 1.64, 95% CI 1.18 to 2.28) were independently associated with long-term cardiovascular death or myocardial infarction. Risk increases were short and long term. On quantitative ECG analysis, cumulative ST-segment depression (≥5 mm; adjusted HR 1.34, 95% CI 1.05 to 1.70), the presence of left bundle branch block (adjusted HR 2.15, 95% CI 1.36 to 3.40) or ≥6 leads with inverse T waves (adjusted HR 1.22, 95% CI 0.97 to 1.55) was independently associated with long-term outcomes. No interaction was observed with treatment strategy. No improvements in net reclassification improvement and integrated discrimination improvement were observed after the addition of quantitative characteristics to a model including qualitative characteristics. 
In conclusion, in the FRISC II, ICTUS, and RITA-3 NSTE-ACS patient-pooled data set, admission ECG characteristics provided long-term prognostic value for cardiovascular death or myocardial infarction. Quantitative ECG characteristics provided no incremental discrimination compared to qualitative data. Copyright © 2012 Elsevier Inc. All rights reserved.
Sidman, Richard L.
1957-01-01
Fragments of freshly obtained retinas of several vertebrate species were studied by refractometry, with reference to the structure of the rods and cones. The findings allowed a reassessment of previous descriptions based mainly on fixed material. The refractometric method was also used to measure the refractive indices and to calculate the concentrations of solids and water in the various cell segments. The main quantitative data were confirmed by interference microscopy. When examined by the method of refractometry, the outer segments of freshly prepared retinal rods appear homogeneous. Within a few minutes a single eccentric longitudinal fiber appears, and transverse striations may develop. These changes are attributed to imbibition of water and swelling in structures normally too small for detection by light microscopy. The central "core" of outer segments and the chromophobic disc between outer and inner segments appear to be artifacts resulting from shrinkage during dehydration. The fresh outer segments of cones, and the inner segments of rods and cones also are described and illustrated. The volumes, refractive indices, concentrations of solids, and wet and dry weights of various segments of the photoreceptor cells were tabulated. Rod outer segments of the different species vary more than 100-fold in volume and mass but all have concentrations of solids of 40 to 43 per cent. Cone outer segments contain only about 30 per cent solids. The myoids, paraboloids, and ellipsoids of the inner segments likewise have characteristic refractive indices and concentrations of solids. Some of the limitations and particular virtues of refractometry as a method for quantitative analysis of living cells are discussed in comparison with more conventional biochemical techniques. Also the shapes and refractive indices of the various segments of photoreceptor cells are considered in relation to the absorption and transmission of light.
The Stiles-Crawford effect can be accounted for on the basis of the structure of cone cells. PMID:13416308
Image segmentation and dynamic lineage analysis in single-cell fluorescence microscopy.
Wang, Quanli; Niemi, Jarad; Tan, Chee-Meng; You, Lingchong; West, Mike
2010-01-01
An increasingly common component of studies in synthetic and systems biology is analysis of dynamics of gene expression at the single-cell level, a context that is heavily dependent on the use of time-lapse movies. Extracting quantitative data on the single-cell temporal dynamics from such movies remains a major challenge. Here, we describe novel methods for automating key steps in the analysis of single-cell fluorescent images, namely segmentation and lineage reconstruction, to recognize and track individual cells over time. The automated analysis iteratively combines a set of extended morphological methods for segmentation, and uses a neighborhood-based scoring method for frame-to-frame lineage linking. Our studies with bacteria, budding yeast and human cells demonstrate the portability and usability of these methods, whether using phase, bright field or fluorescent images. These examples also demonstrate the utility of our integrated approach in facilitating analyses of engineered and natural cellular networks in diverse settings. The automated methods are implemented in freely available, open-source software.
An, Yeong Yi; Kim, Sung Hun; Kang, Bong Joo
2017-01-01
To determine the added value of qualitative analysis as an adjunct to quantitative analysis for the discrimination of benign and malignant lesions in patients with breast cancer using diffusion-weighted imaging (DWI) with readout-segmented echo-planar imaging (rs-EPI). A total of 99 patients with 144 lesions were reviewed from our prospectively collected database. DWI data were obtained using rs-EPI acquired at 3.0 T. The diagnostic performances of DWI in the qualitative, quantitative, and combination analyses were compared with that of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Additionally, the effect of lesion size on the diagnostic performance of the DWI combination analysis was evaluated. The strongest indicators of malignancy on DWI were a heterogeneous pattern (P = 0.005) and an apparent diffusion coefficient (ADC) value <1.0 × 10⁻³ mm²/s (P = 0.002). The area under the curve (AUC) values for the qualitative analysis, quantitative analysis, and combination analysis on DWI were 0.732 (95% CI, 0.651-0.803), 0.780 (95% CI, 0.703-0.846), and 0.826 (95% CI, 0.754-0.885), respectively (P<0.0001). The AUC for the combination analysis on DWI was superior to that for DCE-MRI alone (0.651, P = 0.003) but inferior to that for DCE-MRI plus the ADC value (0.883, P = 0.03). For the DWI combination analysis, the sensitivity was significantly lower in the size ≤1 cm group than in the size >1 cm group (80% vs. 95.6%, P = 0.034). Qualitative analysis of tumor morphology was diagnostically applicable on DWI using rs-EPI. This qualitative analysis adds value to quantitative analyses for lesion characterization in patients with breast cancer.
Automatic tissue segmentation of breast biopsies imaged by QPI
NASA Astrophysics Data System (ADS)
Majeed, Hassaan; Nguyen, Tan; Kandel, Mikhail; Marcias, Virgilia; Do, Minh; Tangella, Krishnarao; Balla, Andre; Popescu, Gabriel
2016-03-01
The current tissue evaluation method for breast cancer would greatly benefit from higher throughput and less inter-observer variation. Since quantitative phase imaging (QPI) measures physical parameters of tissue, it can be used to find quantitative markers, eliminating observer subjectivity. Furthermore, since the pixel values in QPI remain the same regardless of the instrument used, classifiers can be built to segment various tissue components without need for color calibration. In this work we use a texton-based approach to segment QPI images of breast tissue into various tissue components (epithelium, stroma or lumen). A tissue microarray comprising 900 unstained cores from 400 different patients was imaged using Spatial Light Interference Microscopy. The training data were generated by manually segmenting the images for 36 cores and labelling each pixel (epithelium, stroma or lumen). For each pixel in the data, a response vector was generated by the Leung-Malik (LM) filter bank and these responses were clustered using the k-means algorithm to find the centers (called textons). A random forest classifier was then trained to find the relationship between a pixel's label and the histogram of these textons in that pixel's neighborhood. The segmentation was carried out on the validation set by calculating the texton histogram in a pixel's neighborhood and generating a label based on the model learnt during training. Segmentation of the tissue into various components is an important step toward efficiently computing parameters that are markers of disease. Automated segmentation, followed by diagnosis, can improve the accuracy and speed of analysis leading to better health outcomes.
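A toy, numpy-only stand-in for the texton representation can clarify the pipeline. Here the "filter responses" are one number per pixel and the textons are fixed centres rather than k-means output, so this illustrates only the histogram-of-textons feature, not the full LM-filter-bank / random-forest system of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

textons = np.array([0.0, 1.0, 2.0])          # stand-in cluster centres

def texton_map(responses):
    """Assign each pixel's filter response to its nearest texton."""
    return np.abs(responses[..., None] - textons).argmin(axis=-1)

def neighbourhood_histogram(tmap, r=1):
    """Per-pixel histogram of texton labels in a (2r+1)^2 neighbourhood."""
    H, W = tmap.shape
    feats = np.zeros((H, W, len(textons)))
    for y in range(H):
        for x in range(W):
            patch = tmap[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            feats[y, x] = np.bincount(patch.ravel(), minlength=len(textons))
    return feats

resp = rng.choice([0.0, 1.0, 2.0], size=(6, 6))  # fake filter responses
feats = neighbourhood_histogram(texton_map(resp))
print(feats.shape)  # (6, 6, 3)
```

In the full system each pixel's response is a vector (one value per LM filter), the textons come from k-means over those vectors, and a random forest maps each neighbourhood histogram to a tissue label.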
Design and validation of Segment--freely available software for cardiovascular image analysis.
Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan
2010-01-11
Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. 
Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.
Dynamics of uniaxially oriented elastomers using dielectric spectroscopy
NASA Astrophysics Data System (ADS)
Lee, Hyungki; Fragiadakis, Daniel; Martin, Darren; Runt, James
2009-03-01
We summarize our initial dielectric spectroscopy investigation of the dynamics of oriented segmented polyurethanes and crosslinked polyisoprene elastomers. A specially designed uniaxial stretching rig is used to control the draw ratio, and the electric field is applied normal to the draw direction. For the segmented PUs, we observe a dramatic reduction in relaxation strength of the soft phase segmental process with increasing extension ratio, accompanied by a modest decrease in relaxation frequency. Crosslinking of the polyisoprene was accomplished with dicumyl peroxide and the dynamics of uncrosslinked and crosslinked versions are investigated in the undrawn state and at different extension ratios. Complementary analysis of the crosslinked PI is conducted with wide-angle X-ray diffraction (to examine possible strain-induced crystallization), DSC, and swelling experiments. Quantitative analysis of relaxation strengths and shapes as a function of draw ratio will be discussed.
NASA Astrophysics Data System (ADS)
Chien, Kuang-Che Chang; Tu, Han-Yen; Hsieh, Ching-Huang; Cheng, Chau-Jern; Chang, Chun-Yen
2018-01-01
This study proposes a regional fringe analysis (RFA) method to detect the regions of a target object in captured shifted images to improve depth measurement in phase-shifting fringe projection profilometry (PS-FPP). In the RFA method, region-based segmentation is exploited to segment the de-fringed image of a target object, and a multi-level fuzzy-based classification with five proposed features is used to analyze and discriminate the regions of an object from the segmented regions, which are associated with explicit fringe information. In the experiment, the performance of the proposed method is tested and evaluated on 26 test cases made of five types of materials. The qualitative and quantitative results demonstrate that the proposed RFA method can effectively detect the desired regions of an object to improve depth measurement in the PS-FPP system.
Computer-assisted segmentation of white matter lesions in 3D MR images using support vector machine.
Lao, Zhiqiang; Shen, Dinggang; Liu, Dengfeng; Jawad, Abbas F; Melhem, Elias R; Launer, Lenore J; Bryan, R Nick; Davatzikos, Christos
2008-03-01
Brain lesions, especially white matter lesions (WMLs), are associated with cardiac and vascular disease, but also with normal aging. Quantitative analysis of WML in large clinical trials is becoming more and more important. In this article, we present a computer-assisted WML segmentation method, based on local features extracted from multiparametric magnetic resonance imaging (MRI) sequences (ie, T1-weighted, T2-weighted, proton density-weighted, and fluid attenuation inversion recovery MRI scans). A support vector machine classifier is first trained on expert-defined WMLs, and is then used to classify new scans. Postprocessing analysis further reduces false positives by using anatomic knowledge and measures of distance from the training set. Cross-validation on a population of 35 patients from three different imaging sites with WMLs of varying sizes, shapes, and locations tests the robustness and accuracy of the proposed segmentation method, compared with the manual segmentation results from two experienced neuroradiologists.
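The voxel-wise SVM classification described above can be illustrated with a minimal linear SVM trained by sub-gradient descent on the L2-regularised hinge loss. This is a self-contained sketch on synthetic 2-D features only: the paper trains on multiparametric MRI features with anatomy-aware postprocessing, none of which is shown here, and all function names and parameter values below are illustrative assumptions.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Minimal linear SVM: sub-gradient descent on the L2-regularised
    hinge loss. X is an (n, d) feature matrix; y holds labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                # point violates the margin: hinge gradient plus shrinkage
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:
                # correctly classified with margin: only the regulariser acts
                w = (1 - lr * lam) * w
    return w, b

def predict(w, b, X):
    """Sign of the decision function, mapped to {-1, +1}."""
    return np.where(X @ w + b >= 0, 1, -1)
```

On well-separated synthetic clusters this converges to a separating hyperplane; a real lesion classifier would instead use per-voxel intensities from the co-registered T1, T2, PD and FLAIR scans as the feature vector.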
Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva
2014-08-01
To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and CADstream output, computed in terms of the DSC was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. © 2013 Wiley Periodicals, Inc.
Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis
Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva
2013-01-01
Purpose: To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise and fitting algorithms. Methods: We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist’s segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). Results: The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist’s segmentation and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist’s segmentation and CADstream output, computed in terms of the DSC, was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC=0.95. Conclusion: The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. PMID:24115175
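The Dice similarity coefficient used for the overlap figures above is defined for two binary masks A and B as DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch (the function name and the empty-mask convention are illustrative assumptions, not from the paper):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

For example, a 4-pixel mask fully contained in a 6-pixel mask gives DSC = 2·4/(4+6) = 0.8, matching the intuition that DSC penalises both missed and spurious pixels.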
Herrera, Victoria L; Pasion, Khristine A; Tan, Glaiza A; Ruiz-Opazo, Nelson
2013-01-01
A quantitative trait locus (QTL) linked with the ability to find a platform in the Morris Water Maze (MWM) was located on chromosome 17 (Nav-5 QTL) using an intercross between Dahl S and Dahl R rats. We developed two congenic strains, S.R17A and S.R17B, introgressing Dahl R chromosome 17 segments into the Dahl S chromosome 17 region spanning the putative Nav-5 QTL. Performance analysis of S.R17A, S.R17B and Dahl S rats in the MWM task showed a significantly decreased spatial navigation performance in S.R17B congenic rats when compared with Dahl S controls (P = 0.02). The S.R17A congenic segment did not affect MWM performance, delimiting Nav-5 to the chromosome 17 65.02-74.66 Mbp region. Additional fine mapping is necessary to identify the specific gene variant accounting for the Nav-5 effect on spatial learning and memory in Dahl rats.
2011-01-01
Background: Segmentation is the most crucial part of computer-aided bone age assessment. A well-known type of segmentation performed in such systems is adaptive segmentation. While providing better results than global thresholding, adaptive segmentation produces a lot of unwanted noise that can affect the later process of epiphysis extraction. Methods: A method using anisotropic diffusion as pre-processing and a novel Bounded Area Elimination (BAE) post-processing algorithm is proposed to improve the adaptive segmentation result and the region-of-interest (ROI) localization accuracy of the ossification site localization technique. Results: The results are evaluated by quantitative analysis and qualitative analysis using texture feature evaluation. Image homogeneity after anisotropic diffusion improved by an average of 17.59% for each age group. Experiments showed that smoothness improved by an average of 35% after the BAE algorithm, and ROI localization accuracy improved by an average of 8.19%. The MSSIM improved by an average of 10.49% after performing the BAE algorithm on the adaptively segmented hand radiographs. Conclusions: Hand radiographs that have undergone anisotropic diffusion show greatly reduced noise in the segmented image, and the results also indicate that the proposed BAE algorithm is capable of removing the artifacts generated by adaptive segmentation. PMID:21952080
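The anisotropic diffusion pre-processing step above is commonly the Perona-Malik scheme: iterative smoothing whose conductance drops near strong gradients, so homogeneous bone and tissue regions are denoised while edges are preserved. A minimal sketch; the parameter values and the periodic border handling via `np.roll` are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=20.0, gamma=0.2):
    """Perona-Malik anisotropic diffusion with the exponential
    edge-stopping function g(|∇I|) = exp(-(|∇I|/kappa)^2).
    Borders are handled periodically via np.roll (a simplification)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # nearest-neighbour differences in the four axial directions
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conductance is near 1 in flat regions, near 0 across edges
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        # explicit update; gamma <= 0.25 keeps the 4-neighbour scheme stable
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

On a noisy but otherwise flat region the local differences stay well below kappa, so the update behaves like ordinary smoothing; across a bone boundary the conductance collapses and the edge survives the iterations.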
Segmentation and learning in the quantitative analysis of microscopy images
NASA Astrophysics Data System (ADS)
Ruggiero, Christy; Ross, Amy; Porter, Reid
2015-02-01
In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest in using machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.
Faraji, Amir H; Abhinav, Kumar; Jarbo, Kevin; Yeh, Fang-Cheng; Shin, Samuel S; Pathak, Sudhir; Hirsch, Barry E; Schneider, Walter; Fernandez-Miranda, Juan C; Friedlander, Robert M
2015-11-01
Brainstem cavernous malformations (CMs) are challenging due to a higher symptomatic hemorrhage rate and potential morbidity associated with their resection. The authors aimed to preoperatively define the relationship of CMs to the perilesional corticospinal tracts (CSTs) by obtaining qualitative and quantitative data using high-definition fiber tractography. These data were examined postoperatively by using longitudinal scans and in relation to patients' symptomatology. The extent of involvement of the CST was further evaluated longitudinally using the automated "diffusion connectometry" analysis. Fiber tractography was performed with DSI Studio using a quantitative anisotropy (QA)-based generalized deterministic tracking algorithm. Qualitatively, CST was classified as being "disrupted" and/or "displaced." Quantitative analysis involved obtaining mean QA values for the CST and its perilesional and nonperilesional segments. The contralateral CST was used for comparison. Diffusion connectometry analysis included comparison of patients' data with a template from 90 normal subjects. Three patients (mean age 22 years) with symptomatic pontomesencephalic hemorrhagic CMs and varying degrees of hemiparesis were identified. The mean follow-up period was 37.3 months. Qualitatively, CST was partially disrupted and displaced in all. Direction of the displacement was different in each case and progressively improved corresponding with the patient's neurological status. No patient experienced neurological decline related to the resection. The perilesional mean QA percentage decreases supported tract disruption and decreased further over the follow-up period (Case 1, 26%-49%; Case 2, 35%-66%; and Case 3, 63%-78%). Diffusion connectometry demonstrated rostrocaudal involvement of the CST consistent with the quantitative data. Hemorrhagic brainstem CMs can disrupt and displace perilesional white matter tracts with the latter occurring in unpredictable directions. 
This requires the use of tractography to accurately define their orientation to optimize surgical entry point, minimize morbidity, and enhance neurological outcomes. Observed anisotropy decreases in the perilesional segments are consistent with neural injury following hemorrhagic insults. A model using these values in different CST segments can be used to longitudinally monitor its craniocaudal integrity. Diffusion connectometry is a complementary approach providing longitudinal information on the rostrocaudal involvement of the CST.
Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis
Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina
2015-01-01
AIM: To investigate and quantify changes in the branching patterns of the retina vascular network in diabetes using the fractal analysis method. METHODS: This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina, were examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. RESULTS: It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using the fractal geometry. The average of fractal dimensions D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values of mild non-proliferative DR (NPDR) images (segmented and skeletonized versions). The average of fractal dimensions D for the normal images (segmented and skeletonized versions) is higher than the corresponding values of moderate NPDR images (segmented and skeletonized versions). The lowest values were found for the corresponding values of severe NPDR images (segmented and skeletonized versions). CONCLUSION: The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantified by means of the fractal dimension.
Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from healthy individuals. PMID:26309878
Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis.
Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina
2015-01-01
To investigate and quantify changes in the branching patterns of the retina vascular network in diabetes using the fractal analysis method. This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina, were examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using the fractal geometry. The average of fractal dimensions D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values of mild non-proliferative DR (NPDR) images (segmented and skeletonized versions). The average of fractal dimensions D for the normal images (segmented and skeletonized versions) is higher than the corresponding values of moderate NPDR images (segmented and skeletonized versions). The lowest values were found for the corresponding values of severe NPDR images (segmented and skeletonized versions). The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantified by means of the fractal dimension. Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from healthy individuals.
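The fractal dimension D in the two studies above is typically estimated by box counting: tile the binary vessel mask with boxes of side s, count the occupied boxes N(s), and take the slope of log N(s) against log(1/s). The papers used ImageJ for this; the stand-alone function below is an illustrative sketch (box sizes chosen to divide the test images evenly).

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting fractal dimension of a binary image:
    the slope of log N(s) versus log(1/s), where N(s) is the number of
    s-by-s boxes containing at least one foreground pixel."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        # trim so the image tiles exactly into s-by-s boxes
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

Sanity checks: a fully filled image yields D ≈ 2 and a single straight line yields D ≈ 1; a skeletonized retinal vessel tree falls between these limits, consistent with the D values near 1.6-1.7 reported for retinal networks.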
A top-down manner-based DCNN architecture for semantic image segmentation.
Qiao, Kai; Chen, Jian; Wang, Linyuan; Zeng, Lei; Yan, Bin
2017-01-01
Given their powerful feature representation for recognition, deep convolutional neural networks (DCNNs) have been driving rapid advances in high-level computer vision tasks. However, their performance in semantic image segmentation is still not satisfactory. Based on an analysis of the visual mechanism, we conclude that DCNNs operating in a purely bottom-up manner are not sufficient, because the semantic image segmentation task requires not only recognition but also visual attention capability. In this study, superpixels containing visual attention information are introduced in a top-down manner, and an extensible architecture is proposed to improve the segmentation results of current DCNN-based methods. We employ the current state-of-the-art fully convolutional network (FCN) and FCN with conditional random field (DeepLab-CRF) as baselines to validate our architecture. Experimental results on the PASCAL VOC segmentation task qualitatively show that coarse edges and erroneous segmentation results are well improved. We also quantitatively obtain about 2%-3% intersection over union (IOU) accuracy improvement on the PASCAL VOC 2011 and 2012 test sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogunovic, Hrvoje; Pozo, Jose Maria; Villa-Uriol, Maria Cruz
Purpose: To evaluate the suitability of an improved version of an automatic segmentation method based on geodesic active regions (GAR) for segmenting cerebral vasculature with aneurysms from 3D x-ray reconstruction angiography (3DRA) and time of flight magnetic resonance angiography (TOF-MRA) images available in the clinical routine. Methods: Three aspects of the GAR method have been improved: execution time, robustness to variability in imaging protocols, and robustness to variability in image spatial resolutions. The improved GAR was retrospectively evaluated on images from patients containing intracranial aneurysms in the area of the Circle of Willis and imaged with two modalities: 3DRA and TOF-MRA. Images were obtained from two clinical centers, each using different imaging equipment. Evaluation included qualitative and quantitative analyses of the segmentation results on 20 images from 10 patients. The gold standard was built from 660 cross-sections (33 per image) of vessels and aneurysms, manually measured by interventional neuroradiologists. GAR has also been compared to an interactive segmentation method: isointensity surface extraction (ISE). In addition, since patients had been imaged with the two modalities, we performed an intermodality agreement analysis with respect to both the manual measurements and each of the two segmentation methods. Results: Both GAR and ISE differed from the gold standard within acceptable limits compared to the imaging resolution. GAR (ISE) had an average accuracy of 0.20 (0.24) mm for 3DRA and 0.27 (0.30) mm for TOF-MRA, and had a repeatability of 0.05 (0.20) mm. Compared to ISE, GAR had a lower qualitative error in the vessel region and a lower quantitative error in the aneurysm region. The repeatability of GAR was superior to manual measurements and ISE. The intermodality agreement was similar between GAR and the manual measurements.
Conclusions: The improved GAR method outperformed ISE qualitatively as well as quantitatively and is suitable for segmenting 3DRA and TOF-MRA images from clinical routine.
Stephen, Renu M.; Jha, Abhinav K.; Roe, Denise J.; Trouard, Theodore P.; Galons, Jean-Philippe; Kupinski, Matthew A.; Frey, Georgette; Cui, Haiyan; Squire, Scott; Pagel, Mark D.; Rodriguez, Jeffrey J.; Gillies, Robert J.; Stopeck, Alison T.
2015-01-01
Purpose: To assess the value of semi-automated segmentation applied to diffusion MRI for predicting the therapeutic response of liver metastasis. Methods: Conventional diffusion weighted magnetic resonance imaging (MRI) was performed using b-values of 0, 150, 300 and 450 s/mm² at baseline and days 4, 11 and 39 following initiation of a new chemotherapy regimen in a pilot study with 18 women with 37 liver metastases from primary breast cancer. A semi-automated segmentation approach was used to identify liver metastases. Linear regression analysis was used to assess the relationship between baseline values of the apparent diffusion coefficient (ADC) and change in tumor size by day 39. Results: A semi-automated segmentation scheme was critical for obtaining the most reliable ADC measurements. A statistically significant relationship between baseline ADC values and change in tumor size at day 39 was observed for minimally treated patients with metastatic liver lesions measuring 2–5 cm in size (p = 0.002), but not for heavily treated patients with the same tumor size range (p = 0.29), or for tumors of smaller or larger sizes. ROC analysis identified a baseline threshold ADC value of 1.33 μm²/ms as 75% sensitive and 83% specific for identifying non-responding metastases in minimally treated patients with 2–5 cm liver lesions. Conclusion: Quantitative imaging can substantially benefit from a semi-automated segmentation scheme. Quantitative diffusion MRI results can be predictive of therapeutic outcome in selected patients with liver metastases, but not for all liver metastases, and therefore should be considered to be a restricted biomarker. PMID:26284600
Stephen, Renu M; Jha, Abhinav K; Roe, Denise J; Trouard, Theodore P; Galons, Jean-Philippe; Kupinski, Matthew A; Frey, Georgette; Cui, Haiyan; Squire, Scott; Pagel, Mark D; Rodriguez, Jeffrey J; Gillies, Robert J; Stopeck, Alison T
2015-12-01
To assess the value of semi-automated segmentation applied to diffusion MRI for predicting the therapeutic response of liver metastasis. Conventional diffusion weighted magnetic resonance imaging (MRI) was performed using b-values of 0, 150, 300 and 450 s/mm² at baseline and days 4, 11 and 39 following initiation of a new chemotherapy regimen in a pilot study with 18 women with 37 liver metastases from primary breast cancer. A semi-automated segmentation approach was used to identify liver metastases. Linear regression analysis was used to assess the relationship between baseline values of the apparent diffusion coefficient (ADC) and change in tumor size by day 39. A semi-automated segmentation scheme was critical for obtaining the most reliable ADC measurements. A statistically significant relationship between baseline ADC values and change in tumor size at day 39 was observed for minimally treated patients with metastatic liver lesions measuring 2-5 cm in size (p=0.002), but not for heavily treated patients with the same tumor size range (p=0.29), or for tumors of smaller or larger sizes. ROC analysis identified a baseline threshold ADC value of 1.33 μm²/ms as 75% sensitive and 83% specific for identifying non-responding metastases in minimally treated patients with 2-5 cm liver lesions. Quantitative imaging can substantially benefit from a semi-automated segmentation scheme. Quantitative diffusion MRI results can be predictive of therapeutic outcome in selected patients with liver metastases, but not for all liver metastases, and therefore should be considered to be a restricted biomarker. Copyright © 2015 Elsevier Inc. All rights reserved.
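The per-lesion ADC values above come from fitting the mono-exponential diffusion model S(b) = S0·exp(−b·ADC) to the signals acquired at the four b-values, which reduces to a linear fit of ln S against b. A minimal sketch of that fit (function name illustrative; a real pipeline would fit per voxel inside the segmented lesion and average):

```python
import numpy as np

def fit_adc(b_values, signals):
    """Estimate (ADC, S0) from diffusion-weighted signals using the
    mono-exponential model S(b) = S0 * exp(-b * ADC), fitted as a
    straight line ln S = ln S0 - b * ADC. b in s/mm² gives ADC in mm²/s."""
    b = np.asarray(b_values, dtype=float)
    s = np.asarray(signals, dtype=float)
    slope, intercept = np.polyfit(b, np.log(s), 1)
    return -slope, np.exp(intercept)
```

Note the units: the reported threshold of 1.33 μm²/ms equals 1.33e-3 mm²/s, so with b up to 450 s/mm² the exponent b·ADC stays below about 0.6, well within the regime where the log-linear fit is stable.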
Lee, Alex Pui-Wai; Fang, Fang; Jin, Chun-Na; Kam, Kevin Ka-Ho; Tsui, Gary K W; Wong, Kenneth K Y; Looi, Jen-Li; Wong, Randolph H L; Wan, Song; Sun, Jing Ping; Underwood, Malcolm J; Yu, Cheuk-Man
2014-01-01
The mitral valve (MV) has complex 3-dimensional (3D) morphology and motion. Advances in real-time 3D echocardiography (RT3DE) have revolutionized clinical imaging of the MV by providing clinicians with realistic visualization of the valve. Thus far, RT3DE of the MV structure and dynamics has adopted an approach that depends largely on subjective and qualitative interpretation of the 3D images of the valve, rather than objective and reproducible measurement. RT3DE combined with image-processing computer techniques provides precise segmentation and reliable quantification of the complex 3D morphology and rapid motion of the MV. This new approach to imaging may provide additional quantitative descriptions that are useful in diagnostic and therapeutic decision-making. Quantitative analysis of the MV using RT3DE has increased our understanding of the pathologic mechanisms of degenerative, ischemic, functional, and rheumatic MV disease. Most recently, 3D morphologic quantification has entered into clinical use to provide more accurate diagnosis of MV disease and for planning surgery and transcatheter interventions. Current limitations of this quantitative approach to MV imaging include labor-intensiveness during image segmentation and the lack of a clear definition of the clinical significance of many of the morphologic parameters. This review summarizes the current development and applications of quantitative analysis of MV morphology using RT3DE.
Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R.
2016-01-01
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. 
Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard. PMID:27257542
ERIC Educational Resources Information Center
Russak, Susie; Saiegh-Haddad, Elinor
2017-01-01
This article examines the effect of phonological context (singleton vs. clustered consonants) on full phoneme segmentation in Hebrew first language (L1) and in English second language (L2) among typically reading adults (TR) and adults with reading disability (RD) (n = 30 per group), using quantitative analysis and a fine-grained analysis of…
Analysis of normal human retinal vascular network architecture using multifractal geometry
Ţălu, Ştefan; Stach, Sebastian; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina; Nicoară, Simona Delia
2017-01-01
AIM: To apply the multifractal analysis method as a quantitative approach to a comprehensive description of the microvascular network architecture of the normal human retina. METHODS: Fifty volunteers were enrolled in this study in the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and January 2014. A set of 100 segmented and skeletonised human retinal images, corresponding to normal states of the retina, were studied. An automatic unsupervised method for retinal vessel segmentation was applied before multifractal analysis. The multifractal analysis of digital retinal images was performed with computer algorithms, applying the standard box-counting method. Statistical analyses were performed using the GraphPad InStat software. RESULTS: The architecture of the normal human retinal microvascular network could be described using the multifractal geometry. The average of generalized dimensions (Dq) for q=0, 1, 2, the width of the multifractal spectrum (Δα=αmax − αmin) and the spectrum arms' heights difference (|Δf|) of the normal images were expressed as mean±standard deviation (SD): for segmented versions, D0=1.7014±0.0057; D1=1.6507±0.0058; D2=1.5772±0.0059; Δα=0.92441±0.0085; |Δf|=0.1453±0.0051; for skeletonised versions, D0=1.6303±0.0051; D1=1.6012±0.0059; D2=1.5531±0.0058; Δα=0.65032±0.0162; |Δf|=0.0238±0.0161. The averages of the generalized dimensions (Dq) for q=0, 1, 2, the width of the multifractal spectrum (Δα) and the spectrum arms' heights difference (|Δf|) of the segmented versions were slightly greater than those of the skeletonised versions. CONCLUSION: The multifractal analysis of fundus photographs may be used as a quantitative parameter for the evaluation of the complex three-dimensional structure of the retinal microvasculature as a potential marker for early detection of topological changes associated with retinal diseases. PMID:28393036
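The generalized dimensions Dq reported above follow from the standard box-counting (Rényi) formalism: for each box size s, form the box measures p_i (the fraction of foreground mass in box i), evaluate (1/(q−1))·log Σ p_i^q (with the q = 1 case handled by the entropy sum Σ p_i log p_i), and take Dq as the slope of that quantity against log s. A minimal sketch under these standard definitions; the box sizes and function name are illustrative, not the paper's implementation:

```python
import numpy as np

def generalized_dimension(mask, q, sizes=(2, 4, 8, 16)):
    """Rényi/generalized dimension D_q by box counting.
    For q != 1, D_q is the slope of (1/(q-1)) * log(sum_i p_i^q)
    versus log s; the q = 1 (information) dimension uses the slope
    of sum_i p_i * log(p_i) versus log s."""
    mask = np.asarray(mask, dtype=float)
    total = mask.sum()
    logs, vals = [], []
    for s in sizes:
        # trim so the image tiles exactly into s-by-s boxes
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
        p = boxes[boxes > 0] / total      # normalised box measures p_i
        logs.append(np.log(s))
        if q == 1:
            vals.append(np.sum(p * np.log(p)))            # = -entropy
        else:
            vals.append(np.log(np.sum(p ** q)) / (q - 1))
    slope, _ = np.polyfit(logs, vals, 1)
    return slope
```

For a uniform (monofractal) measure all Dq coincide (Dq = 2 for a filled image), whereas for a genuinely multifractal vessel network D0 > D1 > D2, exactly the ordering reported in the study.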
Automatic pelvis segmentation from x-ray images of a mouse model
NASA Astrophysics Data System (ADS)
Al Okashi, Omar M.; Du, Hongbo; Al-Assam, Hisham
2017-05-01
The automatic detection and quantification of skeletal structures has a variety of applications for biological research. Accurate segmentation of the pelvis from X-ray images of mice in a high-throughput project such as the Mouse Genomes Project not only saves time and cost but also helps achieve an unbiased quantitative analysis within the phenotyping pipeline. This paper proposes an automatic solution for pelvis segmentation based on structural and orientation properties of the pelvis in X-ray images. The solution consists of three stages: pre-processing the image to extract the pelvis area, initial pelvis mask preparation, and final pelvis segmentation. Experimental results on a set of 100 X-ray images showed consistent performance of the algorithm. The automated solution overcomes the weaknesses of a manual annotation procedure, in which intra- and inter-observer variations cannot be avoided.
Automated segmentation of retinal pigment epithelium cells in fluorescence adaptive optics images.
Rangel-Fonseca, Piero; Gómez-Vieyra, Armando; Malacara-Hernández, Daniel; Wilson, Mario C; Williams, David R; Rossi, Ethan A
2013-12-01
Adaptive optics (AO) imaging methods allow the histological characteristics of retinal cell mosaics, such as photoreceptors and retinal pigment epithelium (RPE) cells, to be studied in vivo. The high-resolution images obtained with ophthalmic AO imaging devices are rich with information that is difficult and/or tedious to quantify using manual methods. Thus, robust, automated analysis tools that can provide reproducible quantitative information about the cellular mosaics under examination are required. Automated algorithms have been developed to detect the position of individual photoreceptor cells; however, most of these methods are not well suited for characterizing the RPE mosaic. We have developed an algorithm for RPE cell segmentation and show its performance here on simulated and real fluorescence AO images of the RPE mosaic. Algorithm performance was compared to manual cell identification and yielded better than 91% correspondence. This method can be used to segment RPE cells for morphometric analysis of the RPE mosaic and speed the analysis of both healthy and diseased RPE mosaics.
Ernst, Dominique; Köhler, Jürgen
2013-01-21
We provide experimental results on the accuracy of diffusion coefficients obtained by a mean squared displacement (MSD) analysis of single-particle trajectories. We have recorded very long trajectories comprising more than 1.5 × 10^5 data points and decomposed these long trajectories into shorter segments, providing us with ensembles of trajectories of variable lengths. This enabled a statistical analysis of the resulting MSD curves as a function of the lengths of the segments. We find that the relative error of the diffusion coefficient can be minimized by taking an optimum number of points into account for fitting the MSD curves, and that this optimum does not depend on the segment length. Yet, the magnitude of the relative error for the diffusion coefficient does, and achieving an accuracy on the order of 10% requires the recording of trajectories with about 1000 data points. Finally, we compare our results with theoretical predictions and find very good qualitative and quantitative agreement between experiment and theory.
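The MSD fitting procedure discussed here is easy to reproduce on simulated data. The sketch below assumes a simple 2-D Brownian trajectory and a fit over the first four lags; both are arbitrary choices for illustration, not the authors' protocol. It recovers the diffusion coefficient from the relation MSD(τ) = 4Dτ, which holds in two dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
D_true, dt, n_steps = 0.5, 0.05, 5000   # hypothetical units: um^2/s, s, points

# 2-D Brownian trajectory: per-axis step variance is 2*D*dt
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_steps, 2))
traj = np.cumsum(steps, axis=0)

def msd(traj, max_lag):
    """Time-averaged mean squared displacement for lags 1..max_lag."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

# fit only the first few lags; as the paper notes, there is an optimum number
n_fit = 4
lags = dt * np.arange(1, n_fit + 1)
slope = np.polyfit(lags, msd(traj, n_fit), 1)[0]
D_est = slope / 4.0   # MSD = 4*D*tau in two dimensions
print(f"D_est = {D_est:.3f}")
```

With 5000 points the estimate lands within a few percent of the true value; shorter segments widen the scatter, which is the effect the paper quantifies.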
Feng, Xiang; Deistung, Andreas; Dwyer, Michael G; Hagemeier, Jesper; Polak, Paul; Lebenberg, Jessica; Frouin, Frédérique; Zivadinov, Robert; Reichenbach, Jürgen R; Schweser, Ferdinand
2017-06-01
Accurate and robust segmentation of subcortical gray matter (SGM) nuclei is required in many neuroimaging applications. FMRIB's Integrated Registration and Segmentation Tool (FIRST) is one of the most popular software tools for automated subcortical segmentation based on T1-weighted (T1w) images. In this work, we demonstrate that FIRST tends to produce inaccurate SGM segmentation results in the case of abnormal brain anatomy, such as that present in atrophied brains, due to a poor spatial match of the subcortical structures with the training data in the MNI space as well as insufficient contrast of SGM structures on T1w images. Consequently, such deviations from the average brain anatomy may introduce analysis bias in clinical studies, which may not always be obvious and may potentially remain unidentified. To improve the segmentation of subcortical nuclei, we propose to use FIRST in combination with a special Hybrid image Contrast (HC) and Non-Linear (nl) registration module (HC-nlFIRST), where the hybrid image contrast is derived from T1w images and magnetic susceptibility maps to create subcortical contrast similar to that in the Montreal Neurological Institute (MNI) template. In our approach, a nonlinear registration replaces FIRST's default linear registration, yielding a more accurate alignment of the input data to the MNI template. We evaluated our method on 82 subjects with particularly abnormal brain anatomy, selected from a database of >2000 clinical cases. Qualitative and quantitative analyses revealed that HC-nlFIRST provides improved segmentation compared to the default FIRST method. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas
2010-03-01
Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, so a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining an active shape model (ASM) with optimal graph search, the latter being used to achieve landmark refinement within the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various kinds of information (gradient, intensity distributions, and regional properties) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap, measured for the whole LV as well as for 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.
Automated Quantitative Nuclear Cardiology Methods
Motwani, Manish; Berman, Daniel S.; Germano, Guido; Slomka, Piotr J.
2016-01-01
Quantitative analysis of SPECT and PET has become a major part of nuclear cardiology practice. Current software tools can automatically segment the left ventricle, quantify function, establish myocardial perfusion maps and estimate global and local measures of stress/rest perfusion – all with minimal user input. State-of-the-art automated techniques have been shown to offer high diagnostic accuracy for detecting coronary artery disease, as well as predict prognostic outcomes. This chapter briefly reviews these techniques, highlights several challenges and discusses the latest developments. PMID:26590779
A method for evaluating the murine pulmonary vasculature using micro-computed tomography.
Phillips, Michael R; Moore, Scott M; Shah, Mansi; Lee, Clara; Lee, Yueh Z; Faber, James E; McLean, Sean E
2017-01-01
Significant mortality and morbidity are associated with alterations in the pulmonary vasculature. While techniques have been described for quantitative morphometry of whole-lung arterial trees in larger animals, no methods have been described in mice. We report a method for the quantitative assessment of murine pulmonary arterial vasculature using high-resolution computed tomography scanning. Mice were harvested at 2 weeks, 4 weeks, and 3 months of age. The pulmonary artery vascular tree was pressure perfused to maximal dilation with a radio-opaque casting material with viscosity and pressure set to prevent capillary transit and venous filling. The lungs were fixed and scanned on a specimen computed tomography scanner at 8-μm resolution, and the vessels were segmented. Vessels were grouped into categories based on lumen diameter and branch generation. Robust high-resolution segmentation was achieved, permitting detailed quantitation of pulmonary vascular morphometrics. As expected, postnatal lung development was associated with progressive increase in small-vessel number and arterial branching complexity. These methods for quantitative analysis of the pulmonary vasculature in postnatal and adult mice provide a useful tool for the evaluation of mouse models of disease that affect the pulmonary vasculature. Copyright © 2016 Elsevier Inc. All rights reserved.
Interactive tele-radiological segmentation systems for treatment and diagnosis.
Zimeras, S; Gortzis, L G
2012-01-01
Telehealth is the exchange of health information and the provision of health care services through electronic information and communications technology, where participants are separated by geographic, time, social and cultural barriers. The shift of telemedicine from desktop platforms to wireless and mobile technologies is likely to have a significant impact on healthcare in the future. It is therefore crucial to develop a general information-exchange e-medical system that enables its users to perform online and offline medical consultations for diagnosis. During medical diagnosis, image analysis techniques combined with doctors' opinions can be useful for final medical decisions. Quantitative analysis of digital images requires detection and segmentation of the borders of the object of interest. In medical images, segmentation has traditionally been done by human experts. Even with the aid of image processing software (computer-assisted segmentation tools), manual segmentation of 2D and 3D CT images is tedious, time-consuming, and thus impractical, especially in cases where a large number of objects must be specified. Substantial computational and storage requirements become especially acute when object orientation and scale have to be considered. Therefore, automated or semi-automated segmentation techniques are essential if these software applications are ever to gain widespread clinical use. The main purpose of this work is to analyze segmentation techniques for the definition of anatomical structures in telemedical systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Yongjun
Purpose: In patients with chronic obstructive pulmonary disease (COPD), diaphragm function may deteriorate due to reduced muscle fiber length. Quantitative analysis of the morphology of the diaphragm is therefore important. In the current study, the authors propose a diaphragm segmentation method for COPD patients that uses volumetric chest computed tomography (CT) data, and they provide a quantitative analysis of the diaphragmatic dimensions. Methods: Volumetric CT data were obtained from 30 COPD patients and 10 normal control subjects using a 16-row multidetector CT scanner (Siemens Sensation 16) with 0.75-mm collimation. Diaphragm segmentation using 3D ray projections on the lower surface of the lungs was performed to identify the draft diaphragmatic lung surface, which was modeled using quadratic 3D surface fitting and robust regression in order to minimize the effects of segmentation error and parameterize diaphragm morphology. This result was visually evaluated by an expert thoracic radiologist. To take the shape features of the diaphragm into consideration, several quantification parameters were measured using in-house software and compared with the pulmonary function test (PFT) results: the shape index at the apex (SIA), computed with the gradient set to 0; the principal curvatures at the apex of the fitted diaphragm surface (CA); the height between the apex and the base plane (H); the diaphragm lengths along the x-, y-, and z-axes (XL, YL, ZL); the quadratic-fitted diaphragm length along the z-axis (FZL); the average curvature (C); and the surface area (SA). Results: The overall accuracy of the combined segmentation method was 97.22% ± 4.44%, while the visual accuracy of the models for the segmented diaphragms was 95.28% ± 2.52% (mean ± SD).
The quantitative parameters SIA, CA, H, XL, YL, ZL, FZL, C, and SA were 0.85 ± 0.05 mm⁻¹, 0.01 ± 0.00 mm⁻¹, 17.93 ± 10.78 mm, 129.80 ± 11.66 mm, 163.19 ± 13.45 mm, 71.27 ± 17.52 mm, 61.59 ± 16.98 mm, 0.01 ± 0.00 mm⁻¹, and 34,380.75 ± 6680.06 mm², respectively. Several parameters were correlated with the PFT parameters. Conclusions: The authors propose an automatic method for quantitatively evaluating the morphological parameters of the diaphragm on volumetric chest CT in COPD patients. By measuring not only the conventional length and surface area but also the shape features of the diaphragm using quadratic 3D surface modeling, the proposed method is especially useful for quantifying diaphragm characteristics. Their method may be useful for assessing morphological diaphragmatic changes in COPD patients.
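The quadratic 3D surface fitting step can be illustrated with ordinary least squares. This is a sketch on synthetic data, not the authors' implementation: the dome coefficients and noise level are invented, and the robust-regression refinement (e.g. iteratively reweighted least squares for down-weighting segmentation outliers) is omitted for brevity.

```python
import numpy as np

def fit_quadratic_surface(x, y, z):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f."""
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# synthetic dome-shaped "diaphragm" surface with known coefficients
rng = np.random.default_rng(1)
x = rng.uniform(-50, 50, 500)
y = rng.uniform(-60, 60, 500)
z = -0.01 * x**2 - 0.008 * y**2 + 20.0 + rng.normal(0, 0.1, 500)

a, b, c, d, e, f = fit_quadratic_surface(x, y, z)
# for this centred dome the constant term f approximates the apex height
print(round(a, 4), round(b, 4), round(f, 2))
```

Morphological parameters such as the principal curvatures at the apex then follow analytically from the fitted coefficients.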
Poon, Candice C; Ebacher, Vincent; Liu, Katherine; Yong, Voon Wee; Kelly, John James Patrick
2018-05-03
Automated slide scanning and segmentation of fluorescently-labeled tissues is the most efficient way to analyze whole slides or large tissue sections. Unfortunately, many researchers spend large amounts of time and resources developing and optimizing workflows that are only relevant to their own experiments. In this article, we describe a protocol that can be used by those with access to a widefield high-content analysis system (WHCAS) to image any slide-mounted tissue, with options for customization within pre-built modules found in the associated software. Not originally intended for slide scanning, the steps detailed in this article make it possible to acquire slide scanning images in the WHCAS which can be imported into the associated software. In this example, the automated segmentation of brain tumor slides is demonstrated, but the automated segmentation of any fluorescently-labeled nuclear or cytoplasmic marker is possible. Furthermore, there are a variety of other quantitative software modules including assays for protein localization/translocation, cellular proliferation/viability/apoptosis, and angiogenesis that can be run. This technique will save researchers time and effort and create an automated protocol for slide analysis.
Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo
2016-03-12
Facial palsy or paralysis (FP) is a symptom involving the loss of voluntary muscle movement on one side of the human face, which can be devastating for patients. Traditional assessment methods depend solely on the clinician's judgment and are therefore time-consuming and subjective. Hence, a quantitative assessment system is invaluable for physicians beginning the rehabilitation process, and producing a reliable and robust method is challenging and still underway. We introduce a novel approach for the quantitative assessment of facial paralysis that tackles the classification problem of FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining the optimized Daugman's algorithm and the Localized Active Contour (LAC) model is proposed to efficiently extract the iris and the facial landmarks or key points. To improve the performance of LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e. rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, FP type classification, and facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach. Experiments demonstrate the efficiency of the proposed method.
Facial movement feature extraction on facial images based on iris segmentation and LAC-based key point detection, along with a hybrid classifier, provides a more efficient way of addressing the classification problem of facial palsy type and degree of severity. Combining iris segmentation with a key point-based method has several merits that are essential for our real application. Aside from the facial key points, iris segmentation provides a significant contribution, as it describes the changes in iris exposure while certain facial expressions are performed. It reveals the significant difference between the healthy side and the severely palsied side when raising the eyebrows with both eyes directed upward, and can model the typical changes in the iris region.
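The symmetry score described above, a ratio between features extracted from the two sides of the face, reduces to a one-line computation once the per-side features exist. The iris-exposure values below are hypothetical and purely for illustration; the actual features and scale used by the authors may differ.

```python
def symmetry_score(left_feature, right_feature):
    """Ratio of paired facial measurements; 1.0 means perfect symmetry."""
    lo, hi = sorted([left_feature, right_feature])
    return lo / hi

# hypothetical iris-exposure areas (in pixels) while raising the eyebrows:
# a palsied side exposes much less of the iris than the healthy side
healthy_side, palsy_side = 520.0, 260.0
print(symmetry_score(healthy_side, palsy_side))  # 0.5
```

Scores near 1.0 would feed the "healthy" branch of the hybrid classifier, while low scores indicate asymmetry consistent with palsy.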
Paintdakhi, Ahmad; Parry, Bradley; Campos, Manuel; Irnov, Irnov; Elf, Johan; Surovtsev, Ivan; Jacobs-Wagner, Christine
2016-01-01
With the realization that bacteria display phenotypic variability among cells and exhibit complex subcellular organization critical for cellular function and behavior, microscopy has re-emerged as a primary tool in bacterial research during the last decade. However, the bottleneck in today's single-cell studies is the quantitative image analysis of cells and fluorescent signals. Here, we address current limitations through the development of Oufti, a stand-alone, open-source software package for automated measurements of microbial cells and fluorescence signals from microscopy images. Oufti provides computational solutions for tracking touching cells in confluent samples, handles various cell morphologies, offers algorithms for quantitative analysis of both diffraction- and non-diffraction-limited fluorescence signals, and is scalable for high-throughput analysis of massive datasets, all with subpixel precision. All functionalities are integrated in a single package. The graphical user interface, which includes interactive modules for segmentation, image analysis, and post-processing analysis, makes the software broadly accessible to users irrespective of their computational skills. PMID:26538279
NASA Astrophysics Data System (ADS)
Shim, Hackjoon; Lee, Soochan; Kim, Bohyeong; Tao, Cheng; Chang, Samuel; Yun, Il Dong; Lee, Sang Uk; Kwoh, Kent; Bae, Kyongtae
2008-03-01
Knee osteoarthritis is the most common debilitating health condition affecting the elderly population. MR imaging of the knee is highly sensitive for diagnosis and for evaluation of the extent of knee osteoarthritis. Quantitative analysis of the progression of osteoarthritis is commonly based on segmentation and measurement of the articular cartilage from knee MR images. Segmentation of the knee articular cartilage, however, is extremely laborious and technically demanding, because the cartilage has a complex geometry and is thin and small in size. To improve the precision and efficiency of cartilage segmentation, we have applied a semi-automated segmentation method based on an s/t graph cut algorithm. The cost function was defined by integrating regional and boundary cues. While regional cues can encode any intensity distributions of the two regions, "object" (cartilage) and "background" (the rest), boundary cues are based on the intensity differences between neighboring pixels. For three-dimensional (3-D) segmentation, hard constraints are also specified in a 3-D manner, facilitating user interaction. When our proposed semi-automated method was tested on clinical patients' MR images (160 slices, 0.7 mm slice thickness), a considerable amount of segmentation time was saved, with improved efficiency compared to a manual segmentation approach.
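Boundary cues of the kind described here are commonly encoded as n-link weights that decay with the intensity difference between neighboring pixels. The Gaussian form and the σ value below follow the standard Boykov-Jolly formulation for s/t graph cuts; they are assumptions for illustration, not details taken from this abstract.

```python
import numpy as np

def boundary_weight(ip, iq, sigma=10.0):
    """n-link weight between neighboring pixels with intensities ip and iq.

    High when the intensities are similar (cut is expensive there),
    low across strong edges (cut is cheap, so boundaries snap to edges).
    """
    return np.exp(-((ip - iq) ** 2) / (2.0 * sigma ** 2))

print(round(boundary_weight(100, 100), 3))  # 1.0  (no edge between the pixels)
print(round(boundary_weight(100, 150), 3))  # 0.0  (strong boundary, weight ~0)
```

The regional cues (t-links) would be derived from the intensity distributions of the "object" and "background" seeds, and a min-cut over the resulting graph yields the segmentation.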
Evaluation metrics for bone segmentation in ultrasound
NASA Astrophysics Data System (ADS)
Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas
2015-03-01
Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging, as ultrasound has no intensity characteristic specific to bone. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that aids in the development and comparison of such algorithms by quantitatively measuring segmentation performance in ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground-truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slice and its standard deviation are considered. The metrics provide a means of evaluating the accuracy of frames along the length of a volume. This aids in assessing the accuracy of the volume itself and the approach to image acquisition (positioning and frame frequency). The framework was implemented as an open-source module of the 3D Slicer platform. The ground-truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits into a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit from monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
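The per-frame metrics described here can be computed directly from binary masks. The sketch below is a generic illustration on an 8×8 toy frame, not the 3D Slicer module itself; the mask contents are invented.

```python
import numpy as np

def segmentation_metrics(auto_mask, truth_mask):
    """Per-frame overlap metrics for a binary bone segmentation."""
    tp = np.count_nonzero(auto_mask & truth_mask)    # bone found correctly
    fn = np.count_nonzero(~auto_mask & truth_mask)   # bone missed
    tn = np.count_nonzero(~auto_mask & ~truth_mask)  # boneless region correct
    fp = np.count_nonzero(auto_mask & ~truth_mask)   # false bone
    return {"tpr": tp / (tp + fn),   # correctly segmented bone fraction
            "tnr": tn / (tn + fp)}   # correctly segmented boneless fraction

truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True   # 16 "bone" pixels
auto  = np.zeros((8, 8), bool); auto[2:6, 2:5]  = True   # misses one column
m = segmentation_metrics(auto, truth)
print(m)  # tpr = 12/16 = 0.75, tnr = 48/48 = 1.0
```

Averaging these values over all frames in a volume, with their standard deviation, gives the per-volume summary the framework reports.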
Automated 3D renal segmentation based on image partitioning
NASA Astrophysics Data System (ADS)
Yeghiazaryan, Varduhi; Voiculescu, Irina D.
2016-03-01
Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at considerable expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
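The volume-based and size-based measures named above (Dice, Jaccard, relative volume difference) have standard definitions that the following sketch computes on a toy 4×4×4 volume; the miniature "kidney" masks are invented for illustration and are not the paper's data.

```python
import numpy as np

def overlap_measures(seg, gold):
    """Volume-based similarity between a binary segmentation and a gold standard."""
    seg, gold = seg.astype(bool), gold.astype(bool)
    inter = np.count_nonzero(seg & gold)
    union = np.count_nonzero(seg | gold)
    dice = 2.0 * inter / (np.count_nonzero(seg) + np.count_nonzero(gold))
    jaccard = inter / union
    # relative volume difference: signed size error vs. the gold standard
    rvd = (np.count_nonzero(seg) - np.count_nonzero(gold)) / np.count_nonzero(gold)
    return dice, jaccard, rvd

gold = np.zeros((4, 4, 4)); gold[1:3, 1:3, 1:3] = 1   # 8-voxel "kidney"
seg = np.zeros((4, 4, 4));  seg[1:3, 1:3, 1:4] = 1    # over-segments by 4 voxels
d, j, r = overlap_measures(seg, gold)
print(round(d, 3), round(j, 3), round(r, 3))  # 0.8 0.667 0.5
```

A Dice value of 0.90, as reported in the paper, corresponds to a Jaccard of about 0.82 under the fixed relation J = D / (2 − D).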
Rezeli, Melinda; Sjödin, Karin; Lindberg, Henrik; Gidlöf, Olof; Lindahl, Bertil; Jernberg, Tomas; Spaak, Jonas; Erlinge, David; Marko-Varga, György
2017-09-01
A multiple reaction monitoring (MRM) assay was developed for precise quantitation of 87 plasma proteins, including the three isoforms of apolipoprotein E (APOE) associated with cardiovascular diseases, using nanoscale liquid chromatography separation and a stable isotope dilution strategy. The analytical performance of the assay was evaluated, and we found an average technical variation of 4.7% over a dynamic range of 4-5 orders of magnitude (≈0.2 mg/L to 4.5 g/L) from whole plasma digest. Here, we report a complete workflow, including sample processing adapted to a 96-well plate format and a normalization strategy for large-scale studies. To further investigate the MS-based quantitation, the amounts of six selected proteins were also measured by routinely used clinical chemistry assays, and the two methods showed excellent correlation with high significance for the six proteins (p-value < 10e-5), as well as for the cardiovascular predictor factor, the APOB:APOA1 ratio (r = 0.969, p-value < 10e-5). Moreover, we utilized the developed assay for screening of biobank samples from patients with myocardial infarction and performed a comparative analysis of patient groups with STEMI (ST-segment elevation myocardial infarction), NSTEMI (non-ST-segment elevation myocardial infarction) and type-2 AMI (type-2 myocardial infarction).
Domingo-Almenara, Xavier; Perera, Alexandre; Brezmes, Jesus
2016-11-25
Gas chromatography-mass spectrometry (GC-MS) produces large and complex datasets characterized by co-eluted compounds at trace levels, with a distinct compound ion redundancy resulting from the strong fragmentation caused by electron impact ionization. Compounds in GC-MS can be resolved by taking advantage of the multivariate nature of GC-MS data through multivariate resolution methods. However, multivariate methods have to be applied to small regions of the chromatogram, and therefore chromatograms are segmented prior to the application of the algorithms. The automation of this segmentation process is a challenging task, as it implies separating informative data from noise in the chromatogram. This study demonstrates the capabilities of independent component analysis-orthogonal signal deconvolution (ICA-OSD) and multivariate curve resolution-alternating least squares (MCR-ALS) with an overlapping moving window implementation that avoids the typical hard chromatographic segmentation. Also, after being resolved, compounds are aligned across samples by an automated alignment algorithm. We evaluated the proposed methods through a quantitative analysis of GC-qTOF MS data from 25 serum samples. The quantitative performance of both the moving window ICA-OSD and MCR-ALS-based implementations was compared with the quantification of 33 compounds by the XCMS package. Results showed that most of the R² coefficients of determination exhibited high correlation (R² > 0.90) in both the ICA-OSD and MCR-ALS moving window-based approaches. Copyright © 2016 Elsevier B.V. All rights reserved.
Objective measurement of accommodative biometric changes using ultrasound biomicroscopy
Ramasubramanian, Viswanathan; Glasser, Adrian
2015-01-01
PURPOSE To demonstrate that ultrasound biomicroscopy (UBM) can be used for objective quantitative measurements of anterior segment accommodative changes. SETTING College of Optometry, University of Houston, Houston, Texas, USA. DESIGN Prospective cross-sectional study. METHODS Anterior segment biometric changes in response to 0 to 6.0 diopters (D) of accommodative stimuli in 1.0 D steps were measured in eyes of human subjects aged 21 to 36 years. Imaging was performed in the left eye using a 35 MHz UBM (Vumax) and an A-scan ultrasound (A-5500) while the right eye viewed the accommodative stimuli. An automated Matlab image-analysis program was developed to measure the biometry parameters from the UBM images. RESULTS The UBM-measured accommodative changes in anterior chamber depth (ACD), lens thickness, anterior lens radius of curvature, posterior lens radius of curvature, and anterior segment length were statistically significantly (P < .0001) linearly correlated with accommodative stimulus amplitudes. Standard deviations of the UBM-measured parameters were independent of the accommodative stimulus demands (ACD 0.0176 mm, lens thickness 0.0294 mm, anterior lens radius of curvature 0.3350 mm, posterior lens radius of curvature 0.1580 mm, and anterior segment length 0.0340 mm). The mean difference between the A-scan and UBM measurements was −0.070 mm for ACD and 0.166 mm for lens thickness. CONCLUSIONS Accommodating phakic eyes imaged using UBM allowed visualization of the accommodative response, and automated image analysis of the UBM images allowed reliable, objective, quantitative measurements of the accommodative intraocular biometric changes. PMID:25804579
CellSegm - a MATLAB toolbox for high-throughput 3D cell segmentation
Hodneland, Erlend; Kögel, Tanja; Frei, Dominik Michael; Gerdes, Hans-Hermann; Lundervold, Arvid
2013-08-09
The application of fluorescence microscopy in cell biology often generates a huge amount of imaging data. Automated whole cell segmentation of such data enables the detection and analysis of individual cells, where manual delineation is often time-consuming or practically not feasible. Furthermore, compared to manual analysis, automation normally has a higher degree of reproducibility. CellSegm, the software presented in this work, is a Matlab based command line software toolbox providing automated whole cell segmentation of images showing surface stained cells, acquired by fluorescence microscopy. It has options for both fully automated and semi-automated cell segmentation. Major algorithmic steps are: (i) smoothing, (ii) Hessian-based ridge enhancement, (iii) marker-controlled watershed segmentation, and (iv) feature-based classification of cell candidates. Using a wide selection of image recordings and code snippets, we demonstrate that CellSegm has the ability to detect various types of surface stained cells in 3D. After detection and outlining of individual cells, the cell candidates can be subject to software-based analysis, specified and programmed by the end-user, or they can be analyzed by other software tools. A segmentation of tissue samples with appropriate characteristics is also shown to be resolvable in CellSegm. The command-line interface of CellSegm facilitates scripting of the separate tools, all implemented in Matlab, offering a high degree of flexibility and tailored workflows for the end-user. The modularity and scripting capabilities of CellSegm enable automated workflows and quantitative analysis of microscopic data, suited for high-throughput image-based screening. PMID:23938087
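CellSegm itself is a MATLAB toolbox, but the front of its pipeline can be mimicked in a few lines of Python with SciPy. The sketch below is a heavily simplified analogue on a synthetic 2D image: it keeps the smoothing step but substitutes simple thresholding plus connected-component labelling for CellSegm's Hessian-based ridge enhancement and marker-controlled watershed, and the σ and threshold values are arbitrary.

```python
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(2)
# toy image: two bright "cells" on a noisy background
img = rng.normal(0.0, 0.1, (64, 64))
img[10:25, 10:25] += 1.0
img[40:60, 35:55] += 1.0

smoothed = ndi.gaussian_filter(img, sigma=2)   # analogue of step (i): smoothing
mask = smoothed > 0.5                          # crude foreground mask
labels, n_cells = ndi.label(mask)              # connected-component "cells"
print(n_cells)  # 2
```

In the real toolbox, watershed on ridge-enhanced data separates touching cells, which plain connected-component labelling cannot do; this sketch only shows the detect-then-count pattern.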
Chapiro, Julius; Wood, Laura D.; Lin, MingDe; Duran, Rafael; Cornish, Toby; Lesage, David; Charu, Vivek; Schernthaner, Rüdiger; Wang, Zhijun; Tacher, Vania; Savic, Lynn Jeanette; Kamel, Ihab R.
2014-01-01
Purpose To evaluate the diagnostic performance of three-dimensional (3D) quantitative enhancement-based and diffusion-weighted volumetric magnetic resonance (MR) imaging assessment of hepatocellular carcinoma (HCC) lesions in determining the extent of pathologic tumor necrosis after transarterial chemoembolization (TACE). Materials and Methods This institutional review board–approved retrospective study included 17 patients with HCC who underwent TACE before surgery. Semiautomatic 3D volumetric segmentation of target lesions was performed at the last MR examination before orthotopic liver transplantation or surgical resection. The amount of necrotic tumor tissue on contrast material–enhanced arterial phase MR images and the amount of diffusion-restricted tumor tissue on apparent diffusion coefficient (ADC) maps were expressed as a percentage of the total tumor volume. Visual assessment of the extent of tumor necrosis and tumor response according to European Association for the Study of the Liver (EASL) criteria was performed. Pathologic tumor necrosis was quantified by using slide-by-slide segmentation. Correlation analysis was performed to evaluate the predictive values of the radiologic techniques. Results At histopathologic examination, the mean percentage of tumor necrosis was 70% (range, 10%–100%). Both 3D quantitative techniques demonstrated a strong correlation with tumor necrosis at pathologic examination (R2 = 0.9657 and R2 = 0.9662 for quantitative EASL and quantitative ADC, respectively) and strong intermethod agreement (R2 = 0.9585).
Both methods showed a significantly lower discrepancy with pathologically measured necrosis (residual standard error [RSE] = 6.38 and 6.33 for quantitative EASL and quantitative ADC, respectively) when compared with non-3D techniques (RSE = 12.18 for visual assessment). Conclusion This radiologic-pathologic correlation study demonstrates the diagnostic accuracy of 3D quantitative MR imaging techniques in identifying pathologically measured tumor necrosis in HCC lesions treated with TACE. © RSNA, 2014 Online supplemental material is available for this article. PMID:25028783
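The correlation analysis reported above reduces to fitting one necrosis estimate against the other and computing a coefficient of determination. A minimal sketch (the data and function below are illustrative, not the study's statistical software):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a least-squares line fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Applied to paired per-lesion necrosis percentages from two methods, values near 1 indicate the strong intermethod agreement reported in the abstract.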
Retina vascular network recognition
NASA Astrophysics Data System (ADS)
Tascini, Guido; Passerini, Giorgio; Puliti, Paolo; Zingaretti, Primo
1993-09-01
The analysis of morphological and structural modifications of the retina vascular network is an interesting investigation method in the study of diabetes and hypertension. Normally this analysis is carried out by qualitative evaluations, according to standardized criteria, though medical research attaches great importance to quantitative analysis of vessel color, shape and dimensions. The paper describes a system which automatically segments and recognizes the ocular fundus circulation and microcirculation network, and extracts a set of features related to morphometric aspects of vessels. For this class of images the classical segmentation methods seem weak. We propose a computer vision system in which the segmentation and recognition phases are strictly connected. The system is hierarchically organized in four modules. Firstly, the Image Enhancement Module (IEM) applies a set of custom image enhancements to remove blur and to prepare data for the subsequent segmentation and recognition processes. Secondly, the Papilla Border Analysis Module (PBAM) automatically recognizes the number, position and local diameter of blood vessels departing from the optical papilla. Then the Vessel Tracking Module (VTM) analyses vessels by comparing the results of body and edge tracking, and detects branches and crossings. Finally, the Feature Extraction Module evaluates PBAM and VTM output data and extracts some numerical indexes. The algorithms employed appear robust and have been successfully tested on various ocular fundus images.
Healy, Sinead; McMahon, Jill; Owens, Peter; Dockery, Peter; FitzGerald, Una
2018-02-01
Image segmentation is often imperfect, particularly in complex image sets such as z-stack micrographs of slice cultures, and there is a need for sufficient detail of the parameters used in quantitative image analysis to allow independent repeatability and appraisal. For the first time, we have critically evaluated, quantified and validated the performance of different segmentation methodologies using z-stack images of ex vivo glial cells. The BioVoxxel toolbox plugin, available in FIJI, was used to measure the relative quality, accuracy, specificity and sensitivity of 16 global and 9 local automatic thresholding algorithms. Automatic thresholding yields improved binary representation of glial cells compared with the conventional user-chosen single-threshold approach for confocal z-stacks acquired from ex vivo slice cultures. The performance of threshold algorithms varies considerably in quality, specificity, accuracy and sensitivity, with entropy-based thresholds scoring highest for fluorescent staining. We have used the BioVoxxel toolbox to correctly and consistently select the best automated threshold algorithm to segment z-projected images of ex vivo glial cells for downstream digital image analysis and to define segmentation quality. The automated OLIG2 cell count was validated using stereology. As image segmentation and feature extraction can quite critically affect the performance of successive steps in the image analysis workflow, it is becoming increasingly necessary to consider the quality of digital segmenting methodologies. Here, we have applied, validated and extended an existing performance-check methodology in the BioVoxxel toolbox to z-projected images of ex vivo glial cells. Copyright © 2017 Elsevier B.V. All rights reserved.
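Scoring a thresholding algorithm against a reference, as the BioVoxxel toolbox does, amounts to computing confusion-matrix rates for the binary result. A minimal NumPy sketch, with a from-scratch Otsu threshold standing in for one of the 25 evaluated algorithms (this is not the BioVoxxel implementation):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Minimal Otsu's method: pick the threshold maximising
    between-class variance of the intensity histogram."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                       # class-0 weight per cut
    w1 = w0[-1] - w0                           # class-1 weight per cut
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.maximum(w0, 1e-12)
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

def seg_scores(pred, truth):
    """Sensitivity, specificity and accuracy of a binary segmentation."""
    tp = np.sum(pred & truth); tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth); fn = np.sum(~pred & truth)
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / truth.size
```

Ranking many candidate thresholds then reduces to looping over algorithms and sorting by the chosen score.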
Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.
2017-01-01
Abstract. Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883
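The NGS estimator itself is not reproduced here, but the patient-sampling bootstrap wrapped around it can be sketched generically: resample patients with replacement, recompute the figure of merit, and report a percentile interval. Names and defaults below are illustrative.

```python
import numpy as np

def bootstrap_fom(values, fom, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a figure of merit over patient-level values.
    `fom` is any callable mapping a 1-D sample to a scalar (here a stand-in
    for the NGS precision estimate)."""
    rng = np.random.default_rng(seed)
    n = len(values)
    stats = np.array([fom(values[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return fom(values), (lo, hi)
```

Shrinking interval width with growing n mirrors the abstract's observation that more patient studies improve the technique's stability.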
Quantitative MR assessment of structural changes in white matter of children treated for ALL
NASA Astrophysics Data System (ADS)
Reddick, Wilburn E.; Glass, John O.; Mulhern, Raymond K.
2001-07-01
Our research builds on the hypothesis that white matter damage resulting from therapy spans a continuum of severity that can be reliably probed using non-invasive MR technology. This project focuses on children treated for ALL with a regimen containing seven courses of high-dose methotrexate (HDMTX), which is known to cause leukoencephalopathy. Axial FLAIR, T1-, T2-, and PD-weighted images were acquired, registered and then analyzed with a hybrid neural network segmentation algorithm to identify normal brain parenchyma and leukoencephalopathy. Quantitative T1 and T2 maps were also analyzed at the level of the basal ganglia and the centrum semiovale. The segmented images were used as masks to identify regions of normal appearing white matter (NAWM) and leukoencephalopathy in the quantitative T1 and T2 maps. We assessed the longitudinal changes in volume, T1 and T2 in NAWM and leukoencephalopathy for 42 patients. The segmentation analysis revealed that 69% of patients had leukoencephalopathy after receiving seven courses of HDMTX. The leukoencephalopathy affected approximately 17% of the patients' white matter volume on average (range 2% - 38%). Relaxation rates in the NAWM were not significantly changed between the 1st and 7th courses. Regions of leukoencephalopathy exhibited a 13% elevation in T1 and a 37% elevation in T2 relaxation rates.
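Using segmented label images as masks over quantitative T1/T2 maps, as described above, reduces to masked averaging plus a relative-change statistic. A minimal sketch (label values and map units are illustrative):

```python
import numpy as np

def roi_mean(qmap, labels, label):
    """Mean quantitative-map value (e.g. T2 in ms) within one segmented class."""
    return float(qmap[labels == label].mean())

def percent_change(baseline, followup):
    """Relative change of a region statistic, in percent of baseline."""
    return 100.0 * (followup - baseline) / baseline
```

For example, a region whose mean T2 rises from 100 to 137 shows the 37% elevation reported for leukoencephalopathy.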
2014-01-01
Background Digital image analysis has the potential to address issues surrounding traditional histological techniques including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest. A widely applied methodology is that of segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation, hence there is a need for a new robust method with the capability to handle the additional challenges materialising from histopathological damage. Methods A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm’s robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through the comparison of the proposed methodology against manual methods. Results Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage. 
Conclusions Epidermal segmentation is a crucial first step in a range of applications including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues. PMID:24521154
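The colour-plus-intensity thresholding pipeline described above can be approximated very roughly in NumPy/SciPy. Everything here is an assumption for illustration: a red-green opponent channel stands in for the L*a*b* a* chromaticity, a mean-plus-one-standard-deviation cut replaces the paper's optimised thresholding, and largest-component selection replaces its morphological rules and object classification.

```python
import numpy as np
from scipy import ndimage as ndi

def segment_epidermis(rgb, weight=0.5):
    """Crude epidermis-like segmentation: colour contrast + inverted
    intensity, global threshold, then keep the largest component."""
    img = rgb.astype(float)
    # crude global normalisation (stand-in for colour normalisation)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    a_like = img[..., 0] - img[..., 1]        # red-green opponent channel
    intensity = img.mean(axis=-1)
    score = weight * a_like + (1 - weight) * (1 - intensity)
    mask = score > score.mean() + score.std()  # simple global threshold
    lbl, n = ndi.label(mask)                   # morphological clean-up:
    if n == 0:                                 # keep largest component only
        return mask
    sizes = ndi.sum(mask, lbl, index=np.arange(1, n + 1))
    return lbl == (1 + np.argmax(sizes))
```

On H&E images the eosin-stained epidermis tends to score high on both terms, which is what the weighted combination exploits.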
Texture analysis improves level set segmentation of the anterior abdominal wall
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhoubing; Allen, Wade M.; Baucom, Rebeccah B.
2013-12-15
Purpose: The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image processing are not used. The authors propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. Methods: In this study with 20 clinically acquired CT scans on postoperative patients, the authors demonstrated a novel approach to geometric classification of the abdominal wall. The authors' approach uses a texture analysis based on Gabor filters to extract feature vectors and follows a fuzzy c-means clustering method to estimate voxelwise probability memberships for eight clusters. The memberships estimated from the texture analysis are helpful to identify anatomical structures with inhomogeneous intensities. The membership was used to guide the level set evolution, as well as to derive an initialization close to the abdominal wall. Results: Segmentation results on abdominal walls were both quantitatively and qualitatively validated with surface errors based on manually labeled ground truth. Using texture, mean surface errors for the outer surface of the abdominal wall were less than 2 mm, with 91% of the outer surface less than 5 mm away from the manual tracings; errors were significantly greater (2–5 mm) for methods that did not use the texture. Conclusions: The authors' approach establishes a baseline for characterizing the abdominal wall for improving VH care.
Inherent texture patterns in CT scans are helpful for tissue classification, and texture analysis can improve the level set segmentation around the abdominal region.
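The voxelwise probability memberships above come from fuzzy c-means. A compact NumPy implementation of the standard algorithm follows; the Gabor feature vectors that would form the rows of `X` are omitted, and the cluster count (eight in the paper) is a free parameter here.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means: returns (centers, U) with U[i, k] the
    membership of sample i in cluster k (each row of U sums to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                                   # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=-1) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))               # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

In the paper's setting, the resulting soft memberships (rather than hard labels) are what guide the level set initialization and evolution.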
NASA Astrophysics Data System (ADS)
Chu, Yong; Chen, Ya-Fang; Su, Min-Ying; Nalcioglu, Orhan
2005-04-01
Image segmentation is an essential process for quantitative analysis. Segmentation of brain tissues in magnetic resonance (MR) images is very important for understanding the structural-functional relationship for various pathological conditions, such as dementia vs. normal brain aging. Different brain regions are responsible for certain functions and may have specific implication for diagnosis. Segmentation may facilitate the analysis of different brain regions to aid in early diagnosis. Region competition has been recently proposed as an effective method for image segmentation by minimizing a generalized Bayes/MDL criterion. However, it is sensitive to initial conditions - the "seeds", therefore an optimal choice of "seeds" is necessary for accurate segmentation. In this paper, we present a new skeleton-based region competition algorithm for automated gray and white matter segmentation. Skeletons can be considered as good "seed regions" since they provide the morphological a priori information, thus guarantee a correct initial condition. Intensity gradient information is also added to the global energy function to achieve a precise boundary localization. This algorithm was applied to perform gray and white matter segmentation using simulated MRI images from a realistic digital brain phantom. Nine different brain regions were manually outlined for evaluation of the performance in these separate regions. The results were compared to the gold-standard measure to calculate the true positive and true negative percentages. In general, this method worked well with a 96% accuracy, although the performance varied in different regions. We conclude that the skeleton-based region competition is an effective method for gray and white matter segmentation.
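The idea of skeleton-derived seed regions can be illustrated with a distance-transform approximation: voxels deep inside each component serve as morphology-aware seeds. This is a loose simplification of a true morphological skeleton, shown only to make the "seed region" concept concrete; the fraction parameter is an assumption.

```python
import numpy as np
from scipy import ndimage as ndi

def skeleton_like_seeds(mask, frac=0.6):
    """Seed region approximating a medial skeleton: voxels whose distance
    to the background is a large fraction of their component's maximum."""
    dist = ndi.distance_transform_edt(mask)
    lbl, n = ndi.label(mask)
    seeds = np.zeros_like(mask)
    for i in range(1, n + 1):
        comp = lbl == i
        seeds |= comp & (dist >= frac * dist[comp].max())
    return seeds
```

Seeds obtained this way sit well inside each structure, which is exactly the property the paper exploits to make region competition insensitive to manual seed choice.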
Hart, Nicolas H.; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L.; Newton, Robert U.
2015-01-01
Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in vivo. Presently, whole-body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature, which influence measurement precision and analysis outcomes, highlighting a need for a standardised procedure. This paper provides: 1) standardised and reproducible positioning and analysis procedures using DXA, and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5%) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater reliability between days and researchers, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this paper.
Key points Musculoskeletal examinations using DXA technology require highly standardised and reproducible patient positioning and image analysis procedures to accurately measure and monitor axial, appendicular and segmental regions of interest. Internal rotation and fixation of the lower-limbs is strongly recommended during whole-body DXA scans to prevent undesired movement, improve frontal mass accessibility and enhance ankle joint visibility during scan performance and analysis. Appendicular segmental analyses using whole-body DXA scans are highly reliable for all regional upper-body and lower-body segmentations, with hard-tissue (CV ≤ 1.5%; R ≥ 0.990) achieving greater reliability and lower error than soft-tissue (CV ≤ 2.4%; R ≥ 0.980) masses when using our appendicular segmental boundaries. PMID:26336349
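The CV and ICC statistics reported above can be computed from an n-subjects-by-k-raters matrix of repeated measurements. The sketch below uses the common ICC(2,1) two-way random-effects form; the paper does not state which ICC variant was used, so that choice is an assumption.

```python
import numpy as np

def cv_percent(x):
    """Coefficient of variation of repeated measurements, in percent."""
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

def icc2_1(Y):
    """ICC(2,1), two-way random effects, absolute agreement.
    Y is an (n subjects x k raters) measurement matrix."""
    n, k = Y.shape
    grand = Y.mean()
    ss_subj = k * np.sum((Y.mean(axis=1) - grand) ** 2)
    ss_rater = n * np.sum((Y.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((Y - grand) ** 2) - ss_subj - ss_rater
    msr = ss_subj / (n - 1)                 # between-subject mean square
    msc = ss_rater / (k - 1)                # between-rater mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

High between-subject variance with small rater and residual variance drives the ICC toward 1, matching the near-perfect reliability the study reports.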
Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT
NASA Astrophysics Data System (ADS)
Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi
2017-05-01
Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
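The overlap/gap elimination in multi-organ label fusion can be illustrated by resolving per-organ atlas vote maps with a single argmax per voxel; this is a generic majority-style fusion for illustration, not the authors' specific algorithm. Dice, the evaluation metric above, is included.

```python
import numpy as np

def fuse_labels(prob_maps, bg_thresh=0.5):
    """Fuse per-organ vote maps of shape (n_organs, ...). Assigning every
    voxel to the single best organ (or background when no organ is
    confident) removes overlap and gap voxels by construction."""
    best = np.argmax(prob_maps, axis=0)
    conf = np.max(prob_maps, axis=0)
    return np.where(conf >= bg_thresh, best + 1, 0)

def dice(a, b):
    """Dice similarity coefficient of two binary masks."""
    a = a.astype(bool); b = b.astype(bool)
    return 2.0 * np.sum(a & b) / (np.sum(a) + np.sum(b) + 1e-12)
```

Each voxel receives exactly one label, so no pair of organ masks can overlap and no contested voxel is left unassigned.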
AUTOMATED CELL SEGMENTATION WITH 3D FLUORESCENCE MICROSCOPY IMAGES.
Kong, Jun; Wang, Fusheng; Teodoro, George; Liang, Yanhui; Zhu, Yangyang; Tucker-Burden, Carol; Brat, Daniel J
2015-04-01
A large number of cell-oriented cancer investigations require an effective and reliable cell segmentation method on three-dimensional (3D) fluorescence microscopic images for quantitative analysis of cell biological properties. In this paper, we present a fully automated cell segmentation method that can detect cells from 3D fluorescence microscopic images. Informed by fluorescence imaging techniques, we regulated the image gradient field by gradient vector flow (GVF) with interpolated and smoothed data volume, and grouped voxels based on gradient modes identified by tracking the GVF field. Adaptive thresholding was then applied to voxels associated with the same gradient mode, where voxel intensities were enhanced by a multiscale cell filter. We applied the method to a large volume of 3D fluorescence imaging data of human brain tumor cells with (1) low false-detection and miss rates for individual cells; and (2) negligible over- and under-segmentation incidences for clustered cells. Additionally, the concordance of cell morphometry structure between automated and manual segmentation was encouraging. These results suggest a promising 3D cell segmentation method applicable to cancer studies.
NASA Astrophysics Data System (ADS)
Wahi-Anwar, M. Wasil; Emaminejad, Nastaran; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael F.
2018-02-01
Quantitative imaging in lung cancer CT seeks to characterize nodules through quantitative features, usually from a region of interest delineating the nodule. The segmentation, however, can vary depending on segmentation approach and image quality, which can affect the extracted feature values. In this study, we utilize a fully-automated nodule segmentation method - to avoid reader-influenced inconsistencies - to explore the effects of varied dose levels and reconstruction parameters on segmentation. Raw projection CT images from a low-dose screening patient cohort (N=59) were reconstructed at multiple dose levels (100%, 50%, 25%, 10%), two slice thicknesses (1.0 mm, 0.6 mm), and a medium kernel. Fully-automated nodule detection and segmentation was then applied, from which 12 nodules were selected. The Dice similarity coefficient (DSC) was used to assess the similarity of the segmentation ROIs of the same nodule across different reconstruction and dose conditions. Nodules at 1.0 mm slice thickness and dose levels of 25% and 50% resulted in DSC values greater than 0.85 when compared to 100% dose, with lower dose leading to a lower average and wider spread of DSC values. At 0.6 mm, the increased bias and wider spread of DSC values from lowering dose were more pronounced. The effects of dose reduction on DSC for CAD-segmented nodules were similar in magnitude to reducing the slice thickness from 1.0 mm to 0.6 mm. In conclusion, variation of dose and slice thickness can result in very different segmentations because of noise and image quality. However, there exists some stability in segmentation overlap: even at 1.0 mm slice thickness, an image at 25% of the low-dose scan's dose still yields segmentations similar to those seen in the full-dose scan.
Quantitative assessment in thermal image segmentation for artistic objects
NASA Astrophysics Data System (ADS)
Yousefi, Bardia; Sfarra, Stefano; Maldague, Xavier P. V.
2017-07-01
The application of thermal and infrared technology in different areas of research is increasing considerably. These applications include non-destructive testing (NDT), medical analysis (computer-aided diagnosis/detection, CAD), and arts and archaeology, among many others. In the arts and archaeology field, infrared technology provides significant contributions in terms of finding defects in possibly impaired regions. This has been done through a wide range of thermographic experiments and infrared methods. The approach proposed here focuses on the application of known factor analysis methods, such as standard Non-Negative Matrix Factorization (NMF) optimized by gradient-descent-based multiplicative rules (SNMF1), standard NMF optimized by the non-negative least squares (NNLS) active-set algorithm (SNMF2), and eigendecomposition approaches such as Principal Component Thermography (PCT) and Candid Covariance-Free Incremental Principal Component Thermography (CCIPCT), to obtain thermal features. On one hand, these methods are usually applied as preprocessing before clustering for the purpose of segmenting possible defects. On the other hand, a wavelet-based data fusion combines the data of each method with PCT to increase the accuracy of the algorithm. The quantitative assessment of these approaches indicates good segmentation performance at reasonable computational complexity, confirming the outlined properties. In particular, a polychromatic wooden statue and a fresco were analyzed using the above-mentioned methods and interesting results were obtained.
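In its basic form, Principal Component Thermography reduces to an SVD of the mean-centred thermal sequence, with the leading right singular vectors reshaped into spatial component images. A minimal sketch (CCIPCT's incremental covariance-free variant and the NMF solvers are not reproduced here):

```python
import numpy as np

def pct(frames, n_comp=3):
    """Principal Component Thermography sketch: frames has shape (T, H, W).
    Returns n_comp spatial component images from an SVD of the centred data."""
    T, H, W = frames.shape
    A = frames.reshape(T, -1).astype(float)
    A -= A.mean(axis=0)                       # remove per-pixel temporal mean
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:n_comp].reshape(n_comp, H, W)
```

The leading component typically captures the dominant cooling pattern, and subsurface defects appear as anomalies in the first few component images.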
Zhou, Jinghao; Yan, Zhennan; Lasio, Giovanni; Huang, Junzhou; Zhang, Baoshe; Sharma, Navesh; Prado, Karl; D'Souza, Warren
2015-12-01
To resolve challenges in image segmentation in oncologic patients with severely compromised lung, we propose an automated right lung segmentation framework that uses a robust, atlas-based active volume model with a sparse shape composition prior. The robust atlas is achieved by combining the atlas with the output of sparse shape composition. Thoracic computed tomography images (n=38) from patients with lung tumors were collected. The right lung in each scan was manually segmented to build a reference training dataset against which the performance of the automated segmentation method was assessed. The quantitative results of this proposed segmentation method with sparse shape composition achieved mean Dice similarity coefficient (DSC) of (0.72, 0.81) with 95% CI, mean accuracy (ACC) of (0.97, 0.98) with 95% CI, and mean relative error (RE) of (0.46, 0.74) with 95% CI. Both qualitative and quantitative comparisons suggest that this proposed method can achieve better segmentation accuracy with less variance than other atlas-based segmentation methods in the compromised lung segmentation. Published by Elsevier Ltd.
An improved level set method for brain MR images segmentation and bias correction.
Chen, Yunjie; Zhang, Jianwei; Macione, Jim
2009-10-01
Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term in the level set framework. Our method is able to capture bias fields of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.
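The multiplicative-bias idea can be illustrated with a toy alternation: segment the bias-corrected image with a global two-class clustering, then re-estimate a smooth bias field as the smoothed ratio of the observed image to the piecewise-constant "ideal" image implied by the segmentation. The paper's actual method embeds the clustering in a local level-set energy; this global version is only a didactic simplification.

```python
import numpy as np
from scipy import ndimage as ndi

def segment_with_bias(img, n_iter=20, sigma=8.0):
    """Toy joint segmentation / multiplicative bias correction.
    Assumes both intensity classes are present in the image."""
    bias = np.ones_like(img, dtype=float)
    for _ in range(n_iter):
        corrected = img / bias
        seg = corrected > corrected.mean()        # crude two-class split
        c_bg = corrected[~seg].mean()
        c_fg = corrected[seg].mean()
        ideal = np.where(seg, c_fg, c_bg)         # bias-free image implied
        # bias = smoothed ratio of observed to ideal intensities
        bias = ndi.gaussian_filter(img / (ideal + 1e-12), sigma)
    return seg, bias
```

On an image with a smooth multiplicative shading, the recovered bias tracks the shading while the segmentation converges to the true classes.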
Herrera, Victoria L.; Pasion, Khristine A.; Tan, Glaiza A.; Ruiz-Opazo, Nelson
2013-01-01
A quantitative trait locus (QTL) linked with the ability to find a platform in the Morris water maze (MWM) was located on chromosome 17 (Nav-5 QTL) using an intercross between Dahl S and Dahl R rats. We developed two congenic strains, S.R17A and S.R17B, introgressing Dahl R chromosome 17 segments into the Dahl S chromosome 17 region spanning the putative Nav-5 QTL. Performance analysis of S.R17A, S.R17B and Dahl S rats in the MWM task showed a significantly decreased spatial navigation performance in S.R17B congenic rats when compared with Dahl S controls (P = 0.02). The S.R17A congenic segment did not affect MWM performance, delimiting Nav-5 to the chromosome 17 65.02–74.66 Mbp region. Additional fine mapping is necessary to identify the specific gene variant accounting for the Nav-5 effect on spatial learning and memory in Dahl rats. PMID:23469157
Automatic 3D segmentation of multiphoton images: a key step for the quantification of human skin.
Decencière, Etienne; Tancrède-Bohin, Emmanuelle; Dokládal, Petr; Koudoro, Serge; Pena, Ana-Maria; Baldeweck, Thérèse
2013-05-01
Multiphoton microscopy has emerged in the past decade as a useful noninvasive imaging technique for in vivo human skin characterization. However, it has not been used until now in evaluation clinical trials, mainly because of the lack of specific image processing tools that would allow the investigator to extract pertinent quantitative three-dimensional (3D) information from the different skin components. We propose a 3D automatic segmentation method for multiphoton images, which is a key step for epidermis and dermis quantification. This method, based on the morphological watershed and graph cuts algorithms, takes into account the real shape of the skin surface and of the dermal-epidermal junction, and allows separating the epidermis and the superficial dermis in 3D. The automatic segmentation method and the associated quantitative measurements have been developed and validated on a clinical database designed for aging characterization. The segmentation achieves its goals for epidermis-dermis separation and allows quantitative measurements inside the different skin compartments with sufficient relevance. This study shows that multiphoton microscopy associated with specific image processing tools provides access to new quantitative measurements on the various skin components. The proposed 3D automatic segmentation method will contribute to building a powerful tool for characterizing human skin condition. To our knowledge, this is the first 3D approach to the segmentation and quantification of these original images. © 2013 John Wiley & Sons A/S. Published by Blackwell Publishing Ltd.
3D Image Analysis of Geomaterials using Confocal Microscopy
NASA Astrophysics Data System (ADS)
Mulukutla, G.; Proussevitch, A.; Sahagian, D.
2009-05-01
Confocal microscopy is one of the most significant advances in optical microscopy of the last century. It is widely used in biological sciences, but its application to geomaterials has lagged due to a number of technical problems. Potentially the technique can perform non-invasive testing on a laser-illuminated sample that fluoresces, using a unique optical sectioning capability that rejects out-of-focus light reaching the confocal aperture. Fluorescence in geomaterials is commonly induced using epoxy doped with a fluorochrome that is impregnated into the sample to enable discrimination of various features such as void space or material boundaries. However, for many geomaterials, this method cannot be used because they do not naturally fluoresce and because epoxy cannot be impregnated into inaccessible parts of the sample due to lack of permeability. As a result, the confocal images of most geomaterials that have not been pre-processed with extensive sample preparation techniques are of poor quality and lack the image and edge contrast necessary to apply any commonly used segmentation technique for quantitative study of features such as vesicularity, internal structure, etc. In our present work, we are developing a methodology to conduct a quantitative 3D analysis of images of geomaterials collected using a confocal microscope with a minimal amount of prior sample preparation and no added fluorescence. Two sample geomaterials, a volcanic melt sample and a crystal chip containing fluid inclusions, are used to assess the feasibility of the method. A step-by-step process of image analysis includes application of image filtration to enhance the edges or material interfaces and is based on two segmentation techniques: geodesic active contours and region competition. Both techniques have been applied extensively to the analysis of medical MRI images to segment anatomical structures.
Preliminary analysis suggests that there is distortion in the shapes of the segmented vesicles, vapor bubbles, and void spaces due to the optical measurements, so corrective actions are being explored. This will establish a practical and reliable framework for an adaptive 3D image processing technique for the analysis of geomaterials using confocal microscopy.
Segmentation of the Knee for Analysis of Osteoarthritis
NASA Astrophysics Data System (ADS)
Zerfass, Peter; Museyko, Oleg; Bousson, Valérie; Laredo, Jean-Denis; Kalender, Willi A.; Engelke, Klaus
Osteoarthritis changes the load distribution within joints and also changes bone density and structure. Within the typical timelines of clinical studies these changes can be very small. Therefore, precise definition of evaluation regions that are highly robust and show little to no inter- and intra-operator variance is essential for high-quality quantitative analysis. To achieve this goal we have developed a system for the definition of such regions with minimal user input.
Automated measurement of uptake in cerebellum, liver, and aortic arch in full-body FDG PET/CT scans.
Bauer, Christian; Sun, Shanhui; Sun, Wenqing; Otis, Justin; Wallace, Audrey; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M; Beichel, Reinhard R
2012-06-01
The purpose of this work was to develop and validate fully automated methods for uptake measurement of cerebellum, liver, and aortic arch in full-body PET/CT scans. Such measurements are of interest in the context of uptake normalization for quantitative assessment of metabolic activity and/or automated image quality control. Cerebellum, liver, and aortic arch regions were segmented with different automated approaches. Cerebella were segmented in PET volumes by means of a robust active shape model (ASM) based method. For liver segmentation, a largest possible hyperellipsoid was fitted to the liver in PET scans. The aortic arch was first segmented in CT images of a PET/CT scan by a tubular structure analysis approach, and the segmented result was then mapped to the corresponding PET scan. For each of the segmented structures, the average standardized uptake value (SUV) was calculated. To generate an independent reference standard for method validation, expert image analysts were asked to segment several cross sections of each of the three structures in 134 F-18 fluorodeoxyglucose (FDG) PET/CT scans. For each case, the true average SUV was estimated by utilizing statistical models and served as the independent reference standard. For automated aorta and liver SUV measurements, no statistically significant scale or shift differences were observed between automated results and the independent standard. In the case of the cerebellum, the scale and shift were not significantly different, if measured in the same cross sections that were utilized for generating the reference. In contrast, automated results were scaled 5% lower on average although not shifted, if FDG uptake was calculated from the whole segmented cerebellum volume. The estimated reduction in total SUV measurement error ranged between 54.7% and 99.2%, and the reduction was found to be statistically significant for cerebellum and aortic arch. 
With the proposed methods, the authors have demonstrated that automated SUV uptake measurements in cerebellum, liver, and aortic arch agree with expert-defined independent standards. The proposed methods were found to be accurate and showed less intra- and interobserver variability, compared to manual analysis. The approach provides an alternative to manual uptake quantification, which is time-consuming. Such an approach will be important for application of quantitative PET imaging to large scale clinical trials. © 2012 American Association of Physicists in Medicine.
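The average SUV reported above is, in essence, the mean activity concentration in the segmented region normalized by injected dose per body weight. A minimal sketch of that computation; the function name, toy volume, dose, and weight below are illustrative assumptions, not values from the study:

```python
import numpy as np

def mean_suv(activity_bq_per_ml, mask, injected_dose_bq, body_weight_g):
    """Average standardized uptake value (body-weight SUV) over a segmented region.

    SUV = tissue activity concentration / (injected dose / body weight),
    assuming tissue density of ~1 g/mL so that the units cancel.
    """
    region = activity_bq_per_ml[mask]
    return float(region.mean() / (injected_dose_bq / body_weight_g))

# Toy volume with uniform uptake of 5000 Bq/mL inside the segmented mask.
vol = np.full((4, 4, 4), 5000.0)
mask = np.zeros(vol.shape, dtype=bool)
mask[1:3, 1:3, 1:3] = True
suv = mean_suv(vol, mask, injected_dose_bq=370e6, body_weight_g=70000)
```

The same helper applies unchanged whether the mask comes from the cerebellum ASM, the liver hyperellipsoid, or the CT-mapped aortic arch.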
The application of high-speed cinematography for the quantitative analysis of equine locomotion.
Fredricson, I; Drevemo, S; Dalin, G; Hjertën, G; Björne, K
1980-04-01
Locomotive disorders constitute a serious problem in horse racing which will only be rectified by a better understanding of the causative factors associated with disturbances of gait. This study describes a system for the quantitative analysis of the locomotion of horses at speed. The method is based on high-speed cinematography with a semi-automatic system of analysis of the films. The recordings are made with a 16 mm high-speed camera run at 500 frames per second (fps) and the films are analysed by special film-reading equipment and a mini-computer. The time and linear gait variables are presented in tabular form and the angles and trajectories of the joints and body segments are presented graphically.
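At 500 fps each frame spans 2 ms, so the time-based gait variables follow directly from frame counts read off the film. A hedged sketch with hypothetical hoof-contact frame indices (not data from the study):

```python
FPS = 500  # frames per second of the 16 mm high-speed camera

def frames_to_seconds(frame_count, fps=FPS):
    """Convert a count of film frames to elapsed time in seconds."""
    return frame_count / fps

# Hypothetical frame numbers for one stride of one limb, read from the film:
contact_frame, liftoff_frame, next_contact_frame = 120, 245, 420

stance_s = frames_to_seconds(liftoff_frame - contact_frame)        # stance duration
stride_s = frames_to_seconds(next_contact_frame - contact_frame)   # stride duration
duty_factor = stance_s / stride_s  # fraction of the stride spent in ground contact
```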
Quantitative Analysis of Geometry and Lateral Symmetry of Proximal Middle Cerebral Artery.
Peter, Roman; Emmer, Bart J; van Es, Adriaan C G M; van Walsum, Theo
2017-10-01
The purpose of our work is to quantitatively assess clinically relevant geometric properties of proximal middle cerebral arteries (pMCA), to investigate the degree of their lateral symmetry, and to evaluate whether the pMCA can be modeled by using state-of-the-art deformable image registration of the ipsi- and contralateral hemispheres. Individual pMCA segments were identified, quantified, and statistically evaluated on a set of 55 publicly available magnetic resonance angiography time-of-flight images. Rigid and deformable image registrations were used for geometric alignment of the ipsi- and contralateral hemispheres. Lateral symmetry of relevant geometric properties was evaluated before and after the image registration. No significant lateral differences regarding tortuosity and diameters of contralateral M1 segments of pMCA were identified. Regarding the length of M1 segment, 44% of all subjects could be considered laterally symmetrical. Dominant M2 segment was identified in 30% of men and 9% of women in both brain hemispheres. Deformable image registration performed significantly better (P < .01) than rigid registration with regard to distances between the ipsi- and the contralateral centerlines of M1 segments (1.5 ± 1.1 mm versus 2.8 ± 1.2 mm respectively) and between the M1 and the anterior cerebral artery (ACA) branching points (1.6 ± 1.4 mm after deformable registration). Although natural lateral variation of the length of M1 may not allow for sufficient modeling of the complete pMCA, deformable image registration of the contralateral brain hemisphere to the ipsilateral hemisphere is feasible for localization of ACA-M1 branching point and for modeling 71 ± 23% of M1 segment. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
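Tortuosity, one of the geometric properties compared between hemispheres above, is commonly defined as centerline arc length divided by the straight-line chord between the endpoints; that this is the exact definition used in the study is an assumption. A sketch:

```python
import numpy as np

def tortuosity(centerline):
    """Tortuosity of a vessel centerline: polyline arc length / chord length."""
    pts = np.asarray(centerline, dtype=float)
    arc = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    chord = np.linalg.norm(pts[-1] - pts[0])
    return float(arc / chord)

# A straight segment has tortuosity 1; a half-circle approaches pi/2 ~ 1.571.
theta = np.linspace(0.0, np.pi, 200)
half_circle = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
t_half = tortuosity(half_circle)
t_straight = tortuosity([[0, 0, 0], [1, 0, 0], [2, 0, 0]])
```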
Wu, Yanyan; Wu, Zhifeng
2018-04-01
Urban expansion and land cover change driven primarily by human activities have significant influences on the urban eco-environment, and together with climate change jointly alter net primary productivity (NPP). However, at the spatiotemporal scale, there has been limited quantitative analysis of the impacts of human activities independent of climate change on NPP. We chose Guangzhou city as a study area to analyze the impacts of human activities on NPP, as well as the spatiotemporal variations of those impacts within three segments, using a relative impact index (RII) based on potential NPP (NPPp), actual NPP (NPPact), and NPP appropriation due to land use/land cover change (NPPlulc). The spatial patterns and dynamics of NPPact and NPPlulc were evaluated, and the impacts of human activities on NPP during the process of urban sprawl were quantitatively analyzed and assessed using the RII. The results showed that NPPact and NPPlulc in the study area had clear spatial heterogeneity; between 2001 and 2013 there was a declining trend in NPPact while an increasing trend occurred in NPPlulc, and those trends were especially significant in the 10-40-km segment. The results also revealed that more than 91.0% of pixels in the whole study region had positive RII values, while the lowest average RII value was found in the > 40-km segment (39.03%), indicating that human activities were not the main cause for the change in NPP there; meanwhile, the average RII was greater than 65.0% in the other two segments, suggesting that they were subjected to severe anthropogenic disturbances. The RII values in all three segments of the study area increased, indicating increasing human interference. The 10-40-km buffer zone had the largest slope value (0.5665), suggesting that this segment was closely associated with growing human disturbances. Particularly noteworthy is the fact that the > 40-km segment had a large slope value (0.3323) and required more conservation efforts. 
Based on the above results, we suggest that continuous efforts may be necessary to improve the intensity of protection and management in the urban environment of Guangzhou.
Xu, Yupeng; Yan, Ke; Kim, Jinman; Wang, Xiuying; Li, Changyang; Su, Li; Yu, Suqin; Xu, Xun; Feng, Dagan David
2017-01-01
Worldwide, polypoidal choroidal vasculopathy (PCV) is a common vision-threatening exudative maculopathy, and pigment epithelium detachment (PED) is an important clinical characteristic. Thus, precise and efficient PED segmentation is necessary for PCV clinical diagnosis and treatment. We propose a dual-stage learning framework via deep neural networks (DNN) for automated PED segmentation in PCV patients to avoid issues associated with manual PED segmentation (subjectivity, manual segmentation errors, and high time consumption). The optical coherence tomography scans of fifty patients were quantitatively evaluated with different algorithms and clinicians. Dual-stage DNN outperformed existing PED segmentation methods for all segmentation accuracy parameters, including true positive volume fraction (85.74 ± 8.69%), dice similarity coefficient (85.69 ± 8.08%), positive predictive value (86.02 ± 8.99%) and false positive volume fraction (0.38 ± 0.18%). Dual-stage DNN achieves accurate PED quantitative information, works with multiple types of PEDs and agrees well with manual delineation, suggesting that it is a potential automated assistant for PCV management. PMID:28966847
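The four accuracy parameters quoted (true positive volume fraction, Dice similarity coefficient, positive predictive value, false positive volume fraction) have standard voxel-overlap definitions. A sketch with a toy 2-D mask; the helper name and example masks are illustrative, not the study's data:

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Standard voxel-overlap segmentation accuracy metrics (as fractions, not %)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)    # correctly segmented voxels
    fp = np.sum(pred & ~truth)   # falsely segmented voxels
    fn = np.sum(~pred & truth)   # missed voxels
    return {
        "tpvf": tp / (tp + fn),               # true positive volume fraction
        "ppv": tp / (tp + fp),                # positive predictive value
        "fpvf": fp / np.sum(~truth),          # false positive volume fraction
        "dice": 2 * tp / (2 * tp + fp + fn),  # dice similarity coefficient
    }

truth = np.zeros((10, 10), bool)
truth[2:8, 2:8] = True           # 36 ground-truth voxels
pred = np.zeros((10, 10), bool)
pred[3:8, 2:8] = True            # 30 predicted voxels, all inside the truth
m = overlap_metrics(pred, truth)
```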
A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images
Tang, Yunwei; Jing, Linhai; Ding, Haifeng
2017-01-01
The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods. PMID:29064416
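Global Moran's I is a standard spatial-autocorrelation indicator of the kind this method combines with a heterogeneity indicator; whether the paper uses exactly this form is an assumption. A minimal sketch over segment-level feature values:

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I spatial autocorrelation for segment-level values.

    weights: symmetric (n, n) adjacency/weight matrix with a zero diagonal.
    Values near +1 indicate clustering of similar segments, near -1 dispersion.
    """
    x = np.asarray(values, float)
    w = np.asarray(weights, float)
    n = x.size
    z = x - x.mean()
    num = n * np.sum(w * np.outer(z, z))
    den = w.sum() * np.sum(z ** 2)
    return float(num / den)

# Perfectly alternating values on a 4-cycle are maximally negatively autocorrelated.
vals = [1.0, -1.0, 1.0, -1.0]
w = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
I = morans_i(vals, w)
```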
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beck, Markus H.; Inman, Ross B.; Strand, Michael R.
2007-03-01
Polydnaviruses (PDVs) are distinguished by their unique association with parasitoid wasps and their segmented, double-stranded (ds) DNA genomes that are non-equimolar in abundance. Relatively little is actually known, however, about genome packaging or segment abundance of these viruses. Here, we conducted electron microscopy (EM) and real-time polymerase chain reaction (PCR) studies to characterize packaging and segment abundance of Microplitis demolitor bracovirus (MdBV). Like other PDVs, MdBV replicates in the ovaries of females where virions accumulate to form a suspension called calyx fluid. Wasps then inject a quantity of calyx fluid when ovipositing into hosts. The MdBV genome consists of 15 segments that range from 3.6 (segment A) to 34.3 kb (segment O). EM analysis indicated that MdBV virions contain a single nucleocapsid that encapsidates one circular DNA of variable size. We developed a semi-quantitative real-time PCR assay using SYBR Green I. This assay indicated that five segments (J, O, H, N and B) of the MdBV genome accounted for more than 60% of the viral DNAs in calyx fluid. Estimates of relative segment abundance using our real-time PCR assay were also very similar to DNA size distributions determined from micrographs. Analysis of parasitized Pseudoplusia includens larvae indicated that copy number of MdBV segments C, B and J varied between hosts but their relative abundance within a host was virtually identical to their abundance in calyx fluid. Among-tissue assays indicated that each viral segment was most abundant in hemocytes and least abundant in salivary glands. However, the relative abundance of each segment to one another was similar in all tissues. We also found no clear relationship between MdBV segment and transcript abundance in hemocytes and fat body.
A specialized plug-in software module for computer-aided quantitative measurement of medical images.
Wang, Q; Zeng, Y J; Huo, P; Hu, J L; Zhang, J H
2003-12-01
This paper presents a specialized system for quantitative measurement of medical images. Using Visual C++, we developed computer-aided software based on Image-Pro Plus (IPP), a software development platform. When transferred to the hard disk of a computer by an MVPCI-V3A frame grabber, medical images can be automatically processed by our own IPP plug-in for immunohistochemical analysis, cytomorphological measurement and blood vessel segmentation. In 34 clinical studies, the system has shown high stability, reliability and ease of use.
Stent-induced coronary artery stenosis characterized by multimodal nonlinear optical microscopy
NASA Astrophysics Data System (ADS)
Wang, Han-Wei; Simianu, Vlad; Locker, Mattew J.; Cheng, Ji-Xin; Sturek, Michael
2011-02-01
We demonstrate for the first time the applicability of multimodal nonlinear optical (NLO) microscopy to the interrogation of stented coronary arteries under different diet and stent deployment conditions. Bare metal stents and Taxus drug-eluting stents (DES) were placed in coronary arteries of Ossabaw pigs of control and atherogenic diet groups. Multimodal NLO imaging was performed to inspect changes in arterial structures and compositions after stenting. Sum frequency generation, one of the multimodalities, was used for the quantitative analysis of collagen content in the peri-stent and in-stent artery segments of both pig groups. Atherogenic diet increased lipid and collagen in peri-stent segments. In-stent segments showed decreased collagen expression in neointima compared to media. Deployment of DES in atheromatous arteries inhibited collagen expression in the arterial media.
Nguyen, Thanh; Bui, Vy; Lam, Van; Raub, Christopher B; Chang, Lin-Ching; Nehmetallah, George
2017-06-26
We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. Traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This is a drawback for real-time implementation and for dynamic processes such as cell migration phenomena. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion. However, it corrects spherical/elliptical aberration only and disregards the higher-order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations. Ideally, automatic image segmentation techniques make real-time measurement possible. However, existing automatic unsupervised segmentation techniques perform poorly when applied to DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique using a convolutional neural network (CNN) with Zernike polynomial fitting (ZPF). The deep learning CNN is implemented to perform automatic background region detection, which allows ZPF to compute the self-conjugated phase to compensate for most aberrations.
On the importance of FIB-SEM specific segmentation algorithms for porous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salzer, Martin, E-mail: martin.salzer@uni-ulm.de; Thiele, Simon, E-mail: simon.thiele@imtek.uni-freiburg.de; Zengerle, Roland, E-mail: zengerle@imtek.uni-freiburg.de
2014-09-15
A new algorithmic approach to the segmentation of highly porous three-dimensional image data gained by focused ion beam tomography is described, which extends the key principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis on the corresponding microstructure, a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them, and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that pays respect to the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three-dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly fewer artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.
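The highlighted idea of detecting the first and last occurrences of structures along z-profiles can be caricatured in one dimension; the published algorithm additionally backpropagates local thresholds between neighbouring profiles, which this sketch (with illustrative names and data) omits:

```python
import numpy as np

def first_last_occurrence(z_profile, threshold):
    """Indices of the first and last samples along a z-profile whose gray value
    exceeds a (locally chosen) threshold, or None if no structure appears.

    A toy stand-in for analysing z-profiles to find where solid material
    starts and ends in FIB-SEM data of porous media.
    """
    above = np.flatnonzero(np.asarray(z_profile) > threshold)
    if above.size == 0:
        return None
    return int(above[0]), int(above[-1])

# Illustrative z-profile: two bright structures separated by pore space.
profile = [10, 12, 11, 80, 85, 82, 15, 90, 13, 12]
first, last = first_last_occurrence(profile, threshold=50)
```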
Tălu, Stefan
2013-07-01
The purpose of this paper is to provide a quantitative assessment of the human retinal vascular network architecture for patients with diabetic macular edema (DME). Multifractal geometry and lacunarity parameters are used in this study. A set of 10 segmented and skeletonized human retinal images, corresponding to both normal (five images) and DME (five images) states of the retina, from the DRIVE database was analyzed using the ImageJ software. Statistical analyses were performed using Microsoft Office Excel 2003 and GraphPad InStat software. The human retinal vascular network architecture has a multifractal geometry. The average of the generalized dimensions (Dq) for q = 0, 1, 2 of the normal images (segmented versions) is similar to that of the DME cases (segmented versions). The average of the generalized dimensions (Dq) for q = 0, 1 of the normal images (skeletonized versions) is slightly greater than that of the DME cases (skeletonized versions). However, the average of D2 for the normal images (skeletonized versions) is similar to that of the DME images. The average of the lacunarity parameter, Λ, for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for the DME images (segmented and skeletonized versions). The multifractal and lacunarity analysis provides a non-invasive predictive complementary tool for the early diagnosis of patients with DME.
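The lacunarity parameter Λ is commonly computed with a gliding box as the ratio of the second moment of box masses to the squared first moment; that this is the exact variant used in the study is an assumption. A sketch on toy binary images:

```python
import numpy as np

def lacunarity(binary, box):
    """Gliding-box lacunarity: second moment of box masses / squared first moment."""
    img = np.asarray(binary, float)
    h, w = img.shape
    masses = np.array([img[i:i + box, j:j + box].sum()
                       for i in range(h - box + 1)
                       for j in range(w - box + 1)])
    return float(np.mean(masses ** 2) / np.mean(masses) ** 2)

# A homogeneous image attains the minimum lacunarity of 1; clustering raises it.
uniform = np.ones((16, 16))
clustered = np.zeros((8, 8))
clustered[:4, :] = 1.0   # all mass concentrated in the top half
lam_uniform = lacunarity(uniform, box=4)
lam_clustered = lacunarity(clustered, box=4)
```

In retinal images, higher lacunarity corresponds to a "gappier" vascular pattern, consistent with the slightly higher Λ reported for the DME cases.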
Dysli, Chantal; Enzmann, Volker; Sznitman, Raphael; Zinkernagel, Martin S.
2015-01-01
Purpose: Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with data from manual segmentation in mice using the Spectralis OCT. Methods: Spectral domain OCT images from 55 mice from three different mouse strains were analyzed in total. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b+Prph2Rd2/J mice were automatically segmented using three commercially available automated retinal segmentation algorithms and compared to manual segmentation. Results: Fully automated segmentation performed well in mice and showed coefficients of variation (CV) of below 5% for the total retinal volume. However, all three automated segmentation algorithms yielded much thicker total retinal thickness values compared to manual segmentation data (P < 0.0001) due to segmentation errors in the basement membrane. Conclusions: Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigment epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. Translational Relevance: The introduction of spectral domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important to study layers of interest under various pathological conditions. PMID:26336634
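The coefficient of variation quoted for total retinal volume is simply the relative standard deviation of repeated measurements. A sketch with hypothetical volumes (the values below are illustrative, not from the study):

```python
import numpy as np

def coefficient_of_variation(values):
    """CV in percent: sample standard deviation relative to the mean."""
    v = np.asarray(values, float)
    return float(100.0 * v.std(ddof=1) / v.mean())

# Hypothetical repeated total-retinal-volume measurements (mm^3) for one eye:
volumes = [0.98, 1.00, 1.02, 0.99, 1.01]
cv = coefficient_of_variation(volumes)  # well under the 5% threshold reported
```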
Lin, Muqing; Chan, Siwa; Chen, Jeon-Hor; Chang, Daniel; Nie, Ke; Chen, Shih-Ting; Lin, Cheng-Ju; Shih, Tzu-Ching; Nalcioglu, Orhan; Su, Min-Ying
2011-01-01
Quantitative breast density is known as a strong risk factor associated with the development of breast cancer. Measurement of breast density based on three-dimensional breast MRI may provide very useful information. One important step for quantitative analysis of breast density on MRI is the correction of field inhomogeneity to allow an accurate segmentation of the fibroglandular tissue (dense tissue). A new bias field correction method combining the nonparametric nonuniformity normalization (N3) algorithm and a fuzzy-C-means (FCM)-based inhomogeneity correction algorithm is developed in this work. The analysis is performed on non-fat-sat T1-weighted images acquired using a 1.5 T MRI scanner. A total of 60 breasts from 30 healthy volunteers were analyzed. N3 is known as a robust correction method, but it cannot correct a strong bias field over a large area. The FCM-based algorithm can correct the bias field over a large area, but it may change the tissue contrast and affect the segmentation quality. The proposed algorithm applies N3 first, followed by FCM, and then the generated bias field is smoothed using a Gaussian kernel and B-spline surface fitting to minimize the problem of mistakenly changed tissue contrast. The segmentation results based on the N3+FCM corrected images were compared to images corrected by N3 alone, by FCM alone, and by another method, coherent local intensity clustering (CLIC). The segmentation quality based on the different correction methods was evaluated by a radiologist and ranked. The authors demonstrated that the iterative N3+FCM correction method brightens the signal intensity of fatty tissues and separates the histogram peaks between the fibroglandular and fatty tissues to allow an accurate segmentation between them.
In the first reading session, the radiologist found the (N3+FCM > N3 > FCM) ranking in 17 breasts, (N3+FCM > N3 = FCM) in 7 breasts, (N3+FCM = N3 > FCM) in 32 breasts, (N3+FCM = N3 = FCM) in 2 breasts, and (N3 > N3+FCM > FCM) in 2 breasts. The results of the second reading session were similar. Each pairwise Wilcoxon signed-rank test was significant, showing N3+FCM superior to both N3 and FCM, and N3 superior to FCM. The performance of the new N3+FCM algorithm was comparable to that of CLIC, showing equivalent quality in 57/60 breasts. Choosing an appropriate bias field correction method is a very important preprocessing step to allow an accurate segmentation of fibroglandular tissue based on breast MRI for quantitative measurement of breast density. The proposed N3+FCM algorithm and CLIC both yield satisfactory results.
NASA Astrophysics Data System (ADS)
Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien
2007-03-01
The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.
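The average point-to-surface error reported above can be approximated by nearest-neighbour distances from evaluation points to a densely sampled surface. A brute-force sketch; the sampled plane and the 0.16 mm offset are a constructed example mirroring the reported figure, not the study's data:

```python
import numpy as np

def mean_point_to_surface(points, surface_points):
    """Average distance from each point to its nearest surface sample point."""
    p = np.asarray(points, float)[:, None, :]
    s = np.asarray(surface_points, float)[None, :, :]
    d = np.linalg.norm(p - s, axis=-1)   # pairwise (n_points, n_surface) distances
    return float(d.min(axis=1).mean())

# Points sitting 0.16 off a densely sampled plane recover that offset exactly.
grid = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
probes = grid + np.array([0.0, 0.0, 0.16])
err = mean_point_to_surface(probes, grid)
```

For real meshes a spatial index (k-d tree) replaces the quadratic distance matrix, but the metric itself is the same.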
Russell, Richard A; Adams, Niall M; Stephens, David A; Batty, Elizabeth; Jensen, Kirsten; Freemont, Paul S
2009-04-22
Considerable advances in microscopy, biophysics, and cell biology have provided a wealth of imaging data describing the functional organization of the cell nucleus. Until recently, cell nuclear architecture has largely been assessed by subjective visual inspection of fluorescently labeled components imaged by the optical microscope. This approach is inadequate to fully quantify spatial associations, especially when the patterns are indistinct, irregular, or highly punctate. Accurate image processing techniques as well as statistical and computational tools are thus necessary to interpret this data if meaningful spatial-function relationships are to be established. Here, we have developed a thresholding algorithm, stable count thresholding (SCT), to segment nuclear compartments in confocal laser scanning microscopy image stacks to facilitate objective and quantitative analysis of the three-dimensional organization of these objects using formal statistical methods. We validate the efficacy and performance of the SCT algorithm using real images of immunofluorescently stained nuclear compartments and fluorescent beads as well as simulated images. In all three cases, the SCT algorithm delivers a segmentation that is far better than standard thresholding methods, and more importantly, is comparable to manual thresholding results. By applying the SCT algorithm and statistical analysis, we quantify the spatial configuration of promyelocytic leukemia nuclear bodies with respect to irregular-shaped SC35 domains. We show that the compartments are closer than expected under a null model for their spatial point distribution, and furthermore that their spatial association varies according to cell state. The methods reported are general and can readily be applied to quantify the spatial interactions of other nuclear compartments.
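Stable count thresholding picks a threshold from a region where the number of segmented objects is insensitive to the threshold value. The paper's formulation for 3-D confocal stacks is not reproduced here, but the core idea can be caricatured in one dimension (function names and the signal are illustrative):

```python
import numpy as np

def count_objects_1d(signal, t):
    """Number of contiguous runs above threshold t in a 1-D signal."""
    above = np.asarray(signal) > t
    # A run starts at index 0 or wherever `above` switches False -> True.
    return int(above[0]) + int(np.sum(~above[:-1] & above[1:]))

def stable_count_threshold(signal, thresholds):
    """Threshold from the middle of the longest plateau of the
    object-count-versus-threshold curve (a 1-D caricature of SCT)."""
    counts = [count_objects_1d(signal, t) for t in thresholds]
    best_start, best_len, start = 0, 0, 0
    for i in range(1, len(counts) + 1):
        if i == len(counts) or counts[i] != counts[start]:
            if i - start > best_len:
                best_start, best_len = start, i - start
            start = i
    return thresholds[best_start + best_len // 2]

signal = [0, 0, 5, 6, 5, 0, 0, 7, 6, 0]   # two "objects" over background
t_star = stable_count_threshold(signal, list(range(0, 8)))
```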
Rogers, Ian S.; Cury, Ricardo C.; Blankstein, Ron; Shapiro, Michael D.; Nieman, Koen; Hoffmann, Udo; Brady, Thomas J.; Abbara, Suhny
2010-01-01
Background Despite rapid advances in cardiac computed tomography (CT), a strategy for optimal visualization of perfusion abnormalities on CT has yet to be validated. Objective To evaluate the performance of several post-processing techniques applied to source data sets to detect and characterize perfusion defects in acute myocardial infarction with cardiac CT. Methods Twenty-one subjects (18 men; 60 ± 13 years) who were successfully treated with percutaneous coronary intervention for ST-segment elevation myocardial infarction underwent 64-slice cardiac CT and 1.5 Tesla cardiac MRI scans following revascularization. Delayed enhancement MRI images were analyzed to identify the location of infarcted myocardium. Contiguous short-axis images of the left ventricular myocardium were created from the CT source images using 0.75 mm multiplanar reconstruction (MPR), 5 mm MPR, 5 mm maximum intensity projection (MIP), and 5 mm minimum intensity projection (MinIP) techniques. Segments already confirmed to contain infarction by MRI were then evaluated qualitatively and quantitatively with CT. Results Overall, 143 myocardial segments were analyzed. On qualitative analysis, the MinIP and thick MPR techniques had greater visibility and definition than the thin MPR and MIP techniques (p < 0.001). On quantitative analysis, the absolute difference in Hounsfield unit (HU) attenuation between normal and infarcted segments was significantly greater for the MinIP (65.4 HU) and thin MPR (61.2 HU) techniques. However, the relative difference in HU attenuation was greatest for the MinIP technique alone (95%, p < 0.001). Contrast-to-noise ratio was greatest for the MinIP (4.2) and thick MPR (4.1) techniques (p < 0.001). Conclusion MinIP and thick MPR detected infarcted myocardium with greater visibility and definition than MIP and thin MPR. PMID:20579617

Spot detection and image segmentation in DNA microarray data.
Qin, Li; Rueda, Luis; Ali, Adnan; Ngom, Alioune
2005-01-01
Following the invention of microarrays in 1994, the development and applications of this technology have grown exponentially. The numerous applications of microarray technology include clinical diagnosis and treatment, drug design and discovery, tumour detection, and environmental health research. One of the key issues in the experimental approaches utilising microarrays is to extract quantitative information from the spots, which represent genes in a given experiment. For this process, the initial stages are important and they influence future steps in the analysis. Identifying the spots and separating the background from the foreground is a fundamental problem in DNA microarray data analysis. In this review, we present an overview of state-of-the-art methods for microarray image segmentation. We discuss the foundations of the circle-shaped approach, adaptive shape segmentation, histogram-based methods and the recently introduced clustering-based techniques. We analytically show that clustering-based techniques are equivalent to the one-dimensional, standard k-means clustering algorithm that utilises the Euclidean distance.
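The review's observation that clustering-based spot segmentation reduces to one-dimensional, standard k-means on pixel intensities can be illustrated directly. The two-cluster split below (bright spot foreground vs. dark background) is a generic sketch; the linspace initialization and the synthetic intensity distributions are assumptions, not any specific paper's settings.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50):
    """Standard k-means on scalar intensities (Euclidean distance in 1-D)."""
    values = np.asarray(values, dtype=float)
    # Spread the initial centroids across the intensity range.
    centroids = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        new = np.array([values[labels == c].mean() if np.any(labels == c)
                        else centroids[c] for c in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Separate a spot's bright foreground pixels from the dark background.
np.random.seed(0)
pixels = np.concatenate([np.random.normal(30, 3, 200),    # background
                         np.random.normal(200, 10, 100)]) # foreground
labels, centroids = kmeans_1d(pixels, k=2)
foreground = pixels[labels == np.argmax(centroids)]
```

With well-separated intensity modes the assignment converges in a few iterations; overlapping modes are where the adaptive-shape and histogram-based alternatives discussed in the review become relevant.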
Multi-scale Gaussian representation and outline-learning based cell image segmentation.
Farhan, Muhammad; Ruusuvuori, Pekka; Emmenlauer, Mario; Rämö, Pauli; Dehio, Christoph; Yli-Harja, Olli
2013-01-01
High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis, as the performance of subsequent steps, for example cell classification and cell tracking, often relies on the results of segmentation. We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach based on image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection, which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step in which the nuclei segmentation is used as contextual information. We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics and cells of varying size, shape, texture and degree of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them with an increase of 4-9% in segmentation accuracy and a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.
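The coefficient-of-variation cue used for the background separation step can be sketched in a few lines: textured cytoplasm changes appreciably as the Gaussian smoothing scale increases, while flat background barely changes. The sigma values below and the idea of thresholding the resulting map are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space_cv(img, sigmas=(1, 2, 4, 8)):
    """Per-pixel coefficient of variation of a Gaussian scale-space.

    Textured regions vary across smoothing scales (high CV), while
    nearly uniform background stays constant (CV close to zero).
    """
    stack = np.stack([gaussian_filter(img.astype(float), s) for s in sigmas])
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    return std / (mean + 1e-9)  # small epsilon avoids division by zero
```

A foreground mask would then be obtained by thresholding the CV map, with the nuclei segmentation used downstream as contextual information, as the abstract describes.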
Multi-scale Gaussian representation and outline-learning based cell image segmentation
2013-01-01
Background High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis, as the performance of subsequent steps, for example cell classification and cell tracking, often relies on the results of segmentation. Methods We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach based on image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection, which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step in which the nuclei segmentation is used as contextual information. Results and conclusions We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics and cells of varying size, shape, texture and degree of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them with an increase of 4-9% in segmentation accuracy and a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks. PMID:24267488
The border-to-border distribution method for analysis of cytoplasmic particles and organelles.
Yacovone, Shalane K; Ornelles, David A; Lyles, Douglas S
2016-02-01
Comparing the distribution of cytoplasmic particles and organelles between different experimental conditions can be challenging due to the heterogeneous nature of cell morphologies. The border-to-border distribution method was created to enable the quantitative analysis of fluorescently labeled cytoplasmic particles and organelles of multiple cells from images obtained by confocal microscopy. The method consists of four steps: (1) imaging of fluorescently labeled cells, (2) division of the image of the cytoplasm into radial segments, (3) selection of segments of interest, and (4) population analysis of fluorescence intensities at the pixel level either as a function of distance along the selected radial segments or as a function of angle around an annulus. The method was validated using the well-characterized effect of brefeldin A (BFA) on the distribution of the vesicular stomatitis virus G protein, in which intensely labeled Golgi membranes are redistributed within the cytoplasm. Surprisingly, in untreated cells, the distribution of fluorescence in Golgi membrane-containing radial segments was similar to the distribution of fluorescence in other G protein-containing segments, indicating that the presence of Golgi membranes did not shift the distribution of G protein towards the nucleus compared to the distribution of G protein in other regions of the cell. Treatment with BFA caused only a slight shift in the distribution of the brightest G protein-containing segments which had a distribution similar to that in untreated cells. Instead, the major effect of BFA was to alter the annular distribution of G protein in the perinuclear region.
Novel methods for parameter-based analysis of myocardial tissue in MR images
NASA Astrophysics Data System (ADS)
Hennemuth, A.; Behrens, S.; Kuehnel, C.; Oeltze, S.; Konrad, O.; Peitgen, H.-O.
2007-03-01
The analysis of myocardial tissue with contrast-enhanced MR yields multiple parameters, which can be used to classify the examined tissue. Perfusion images are often distorted by motion, while late enhancement images are acquired with a different size and resolution. Therefore, it is common to reduce the analysis to a visual inspection, or to the examination of parameters related to the 17-segment-model proposed by the American Heart Association (AHA). As this simplification comes along with a considerable loss of information, our purpose is to provide methods for a more accurate analysis regarding topological and functional tissue features. In order to achieve this, we implemented registration methods for the motion correction of the perfusion sequence and the matching of the late enhancement information onto the perfusion image and vice versa. For the motion corrected perfusion sequence, vector images containing the voxel enhancement curves' semi-quantitative parameters are derived. The resulting vector images are combined with the late enhancement information and form the basis for the tissue examination. For the exploration of data we propose different modes: the inspection of the enhancement curves and parameter distribution in areas automatically segmented using the late enhancement information, the inspection of regions segmented in parameter space by user defined threshold intervals and the topological comparison of regions segmented with different settings. Results showed a more accurate detection of distorted regions in comparison to the AHA-model-based evaluation.
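The vector images described above hold, for each voxel, semi-quantitative parameters of its contrast-enhancement curve. The abstract does not list the exact parameters, so the definitions below (peak enhancement over baseline, time-to-peak, maximum upslope, area under the curve) are common choices offered as an assumption, sketched for a single voxel's curve.

```python
import numpy as np

def semi_quantitative_params(times, curve, baseline_pts=3):
    """Common semi-quantitative perfusion parameters for one voxel's
    contrast-enhancement curve. Definitions are illustrative."""
    times = np.asarray(times, dtype=float)
    curve = np.asarray(curve, dtype=float)
    baseline = curve[:baseline_pts].mean()   # pre-contrast signal level
    enh = curve - baseline                   # enhancement over baseline
    peak_idx = int(np.argmax(enh))
    peak = enh[peak_idx]
    ttp = times[peak_idx] - times[0]         # time-to-peak
    if peak_idx > 0:
        # Steepest rise between consecutive samples up to the peak.
        upslope = float(np.max(np.diff(enh[:peak_idx + 1])
                               / np.diff(times[:peak_idx + 1])))
    else:
        upslope = 0.0
    # Area under the enhancement curve (trapezoidal rule).
    auc = float(np.sum((enh[1:] + enh[:-1]) / 2 * np.diff(times)))
    return {"peak": peak, "ttp": ttp, "upslope": upslope, "auc": auc}
```

Applied voxel-wise to the motion-corrected sequence, such parameters form the vector image that is then combined with the late enhancement information.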
Bae, Kyungsoo; Jeon, Kyung Nyeo; Lee, Seung Jun; Kim, Ho Cheol; Ha, Ji Young; Park, Sung Eun; Baek, Hye Jin; Choi, Bo Hwa; Cho, Soo Buem; Moon, Jin Il
2016-11-01
The aim of this study was to determine the relationship between lobar severity of emphysema and lung cancer using automated lobe segmentation and emphysema quantification methods. This study included 78 patients (74 males and 4 females; mean age of 72 years) with the following conditions: pathologically proven lung cancer, available chest computed tomographic (CT) scans for lobe segmentation, and quantitative scoring of emphysema. The relationship between emphysema and lung cancer was analyzed using quantitative emphysema scoring of each pulmonary lobe. The most common location of cancer was the left upper lobe (LUL) (n = 28), followed by the right upper lobe (RUL) (n = 27), left lower lobe (LLL) (n = 13), right lower lobe (RLL) (n = 9), and right middle lobe (RML) (n = 1). Emphysema ratio was the highest in LUL, followed by that in RUL, LLL, RML, and RLL. Multivariate logistic regression analysis revealed that upper lobes (odds ratio: 1.77; 95% confidence interval: 1.01-3.11, P = 0.048) and lobes with emphysema ratio ranked the 1st or the 2nd (odds ratio: 2.48; 95% confidence interval: 1.48-4.15, P < 0.001) were significantly and independently associated with lung cancer development. In emphysema patients, lung cancer has a tendency to develop in lobes with more severe emphysema.
Severity of pulmonary emphysema and lung cancer: analysis using quantitative lobar emphysema scoring
Bae, Kyungsoo; Jeon, Kyung Nyeo; Lee, Seung Jun; Kim, Ho Cheol; Ha, Ji Young; Park, Sung Eun; Baek, Hye Jin; Choi, Bo Hwa; Cho, Soo Buem; Moon, Jin Il
2016-01-01
Abstract The aim of this study was to determine the relationship between lobar severity of emphysema and lung cancer using automated lobe segmentation and emphysema quantification methods. This study included 78 patients (74 males and 4 females; mean age of 72 years) with the following conditions: pathologically proven lung cancer, available chest computed tomographic (CT) scans for lobe segmentation, and quantitative scoring of emphysema. The relationship between emphysema and lung cancer was analyzed using quantitative emphysema scoring of each pulmonary lobe. The most common location of cancer was the left upper lobe (LUL) (n = 28), followed by the right upper lobe (RUL) (n = 27), left lower lobe (LLL) (n = 13), right lower lobe (RLL) (n = 9), and right middle lobe (RML) (n = 1). Emphysema ratio was the highest in LUL, followed by that in RUL, LLL, RML, and RLL. Multivariate logistic regression analysis revealed that upper lobes (odds ratio: 1.77; 95% confidence interval: 1.01–3.11, P = 0.048) and lobes with emphysema ratio ranked the 1st or the 2nd (odds ratio: 2.48; 95% confidence interval: 1.48–4.15, P < 0.001) were significantly and independently associated with lung cancer development. In emphysema patients, lung cancer has a tendency to develop in lobes with more severe emphysema. PMID:27902611
Bjornsson, Christopher S; Lin, Gang; Al-Kofahi, Yousef; Narayanaswamy, Arunachalam; Smith, Karen L; Shain, William; Roysam, Badrinath
2009-01-01
Brain structural complexity has confounded prior efforts to extract quantitative image-based measurements. We present a systematic ‘divide and conquer’ methodology for analyzing three-dimensional (3D) multi-parameter images of brain tissue to delineate and classify key structures, and compute quantitative associations among them. To demonstrate the method, thick (~100 μm) slices of rat brain tissue were labeled using 3–5 fluorescent signals, and imaged using spectral confocal microscopy and unmixing algorithms. Automated 3D segmentation and tracing algorithms were used to delineate cell nuclei, vasculature, and cell processes. From these segmentations, a set of 23 intrinsic and 8 associative image-based measurements was computed for each cell. These features were used to classify astrocytes, microglia, neurons, and endothelial cells. Associations among cells and between cells and vasculature were computed and represented as graphical networks to enable further analysis. The automated results were validated using a graphical interface that permits investigator inspection and corrective editing of each cell in 3D. Nuclear counting accuracy was >89%, and cell classification accuracy ranged from 81–92% depending on cell type. We present a software system named FARSIGHT implementing our methodology. Its output is a detailed XML file containing measurements that may be used for diverse quantitative hypothesis-driven and exploratory studies of the central nervous system. PMID:18294697
NASA Astrophysics Data System (ADS)
Wörz, Stefan; Hoegen, Philipp; Liao, Wei; Müller-Eschner, Matthias; Kauczor, Hans-Ulrich; von Tengg-Kobligk, Hendrik; Rohr, Karl
2016-03-01
We introduce a framework for quantitative evaluation of 3D vessel segmentation approaches using vascular phantoms. Phantoms are designed using a CAD system and created with a 3D printer, and comprise realistic shapes including branches and pathologies such as abdominal aortic aneurysms (AAA). To transfer ground truth information to the 3D image coordinate system, we use a landmark-based registration scheme utilizing fiducial markers integrated in the phantom design. For accurate 3D localization of the markers we developed a novel 3D parametric intensity model that is directly fitted to the markers in the images. We also performed a quantitative evaluation of different vessel segmentation approaches for a phantom of an AAA.
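Transferring ground truth into the image coordinate system via fiducial landmarks is typically solved with the closed-form least-squares rigid alignment (the Kabsch algorithm); whether the authors used exactly this estimator is not stated in the abstract, so the sketch below is an illustrative assumption.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    estimated from paired landmark coordinates (Kabsch algorithm)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered landmark sets.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if the determinant is -1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Given the CAD coordinates of the phantom markers and their fitted 3D positions in the image, the recovered (R, t) maps the designed geometry onto the image for quantitative comparison.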
An Approach for Reducing the Error Rate in Automated Lung Segmentation
Gill, Gurman; Beichel, Reinhard R.
2016-01-01
Robust lung segmentation is challenging, especially when tens of thousands of lung CT scans need to be processed, as required by large multi-center studies. The goal of this work was to develop and assess a method for the fusion of segmentation results from two different methods to generate lung segmentations that have a lower failure rate than individual input segmentations. As basis for the fusion approach, lung segmentations generated with a region growing and model-based approach were utilized. The fusion result was generated by comparing input segmentations and selectively combining them using a trained classification system. The method was evaluated on a diverse set of 204 CT scans of normal and diseased lungs. The fusion approach resulted in a Dice coefficient of 0.9855 ± 0.0106 and showed a statistically significant improvement compared to both input segmentation methods. In addition, the failure rate at different segmentation accuracy levels was assessed. For example, when requiring that lung segmentations must have a Dice coefficient of better than 0.97, the fusion approach had a failure rate of 6.13%. In contrast, the failure rate for region growing and model-based methods was 18.14% and 15.69%, respectively. Therefore, the proposed method improves the quality of the lung segmentations, which is important for subsequent quantitative analysis of lungs. Also, to enable a comparison with other methods, results on the LOLA11 challenge test set are reported. PMID:27447897
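The Dice coefficient used throughout this evaluation is straightforward to compute from two binary masks. The agreement-based selection below is only a toy stand-in for the trained classification system described in the abstract, included to make the fusion idea concrete; the 0.97 threshold echoes the accuracy level quoted above but is otherwise an assumption.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def agreement_based_fusion(seg1, seg2, agree_thresh=0.97):
    """Toy fusion rule: when the two input segmentations agree closely,
    keep their intersection; otherwise flag the case for the trained
    classifier or manual review."""
    d = dice(seg1, seg2)
    if d >= agree_thresh:
        return np.logical_and(seg1, seg2), d
    return None, d
```

In the paper's actual system, the combination is driven by a trained classifier rather than a fixed threshold, which is what lets the fusion outperform both input methods.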
Zhang, Zhen; Xia, Shumin; Kanchanawong, Pakorn
2017-05-22
Stress fibers are prominent assemblies of actin filaments that perform important functions in cellular processes such as migration, polarization, and traction force generation, and whose collective organization reflects the physiological and mechanical activities of the cells. Easily visualized by fluorescence microscopy, stress fibers are widely used as qualitative descriptors of cell phenotypes. However, due to the complexity of stress fibers and the presence of other actin-containing cellular features, images of stress fibers are relatively challenging to analyze quantitatively with previously developed approaches, which require significant user intervention. This poses a challenge for the automation of their detection, segmentation, and quantitative analysis. Here we describe an open-source software package, SFEX (Stress Fiber Extractor), which is geared for efficient enhancement, segmentation, and analysis of actin stress fibers in adherent tissue culture cells. Our method makes use of a carefully chosen image filtering technique to enhance filamentous structures, effectively facilitating the detection and segmentation of stress fibers by binary thresholding. We subdivide the skeletons of stress fiber traces into piecewise-linear fragments, and use a set of geometric criteria to reconstruct the stress fiber networks by pairing appropriate fiber fragments. Our strategy enables the trajectories of a majority of stress fibers within the cells to be comprehensively extracted. We also present a method for quantifying the dimensions of the stress fibers using an image gradient-based approach. We determine the optimal parameter space using sensitivity analysis, and demonstrate the utility of our approach by analyzing actin stress fibers in cells cultured on various micropattern substrates. In summary, we present an open-source, graphically interfaced computational tool that facilitates the automated extraction and quantification of actin stress fibers in adherent cells with minimal user input. We highlight its potential uses by analyzing images of cells with shapes constrained by fibronectin micropatterns. The method reported here could serve as the first step in the detection and characterization of the spatial properties of actin stress fibers to enable further detailed morphological analysis.
An, Gao; Hong, Li; Zhou, Xiao-Bing; Yang, Qiong; Li, Mei-Qing; Tang, Xiang-Yang
2017-03-01
We investigated and compared the functionality of two 3D visualization software packages, provided by a CT vendor and a third-party vendor, respectively. Using surgical anatomical measurement as a baseline, we evaluated the accuracy of 3D visualization and verified its utility in computer-aided anatomical analysis. The study cohort consisted of 50 adult cadavers fixed with the classical formaldehyde method. The computer-aided anatomical analysis was based on CT images (in DICOM format) acquired by helical scan with contrast enhancement, using a CT vendor-provided 3D visualization workstation (Syngo) and a third-party 3D visualization software package (Mimics) installed on a PC. Automated and semi-automated segmentations were utilized in the 3D visualization workstation and software, respectively. The functionality and efficiency of the automated and semi-automated segmentation methods were compared. Using surgical anatomical measurement as a baseline, the accuracy of 3D visualization based on automated and semi-automated segmentation was quantitatively compared. In semi-automated segmentation, the Mimics 3D visualization software outperformed the Syngo 3D visualization workstation. No significant difference was observed in anatomical data measured by the Syngo 3D visualization workstation and the Mimics 3D visualization software (P>0.05). Both the Syngo 3D visualization workstation provided by a CT vendor and the Mimics 3D visualization software from a third-party vendor possessed the functionality, efficiency and accuracy needed for computer-aided anatomical analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.
Automated tumor volumetry using computer-aided image segmentation.
Gaonkar, Bilwaj; Macyszyn, Luke; Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A; Ali, Zarina S; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M; Davatzikos, Christos
2015-05-01
Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0-5 rating scale where 5 indicated perfect segmentation. The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
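A geodesic distance transform of the kind underlying this method can be sketched as Dijkstra's algorithm on the pixel grid, where stepping across a large intensity difference is expensive: pixels geodesically close to a user-placed seed then form a rough tumor mask. The edge-cost definition and the small spatial regularization term below are assumptions for illustration, not the paper's formulation.

```python
import heapq
import numpy as np

def geodesic_distance(img, seeds):
    """Geodesic distance from seed pixels via Dijkstra (4-neighborhood);
    the step cost is the absolute intensity difference plus a small
    spatial term so distance also grows slowly within flat regions."""
    img = np.asarray(img, float)
    dist = np.full(img.shape, np.inf)
    heap = []
    for r, c in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]:
                nd = d + abs(img[nr, nc] - img[r, c]) + 1e-3
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist

def segment_from_seed(img, seeds, max_dist):
    """Rough mask: pixels geodesically close to the seed(s)."""
    return geodesic_distance(img, seeds) <= max_dist
```

Because the distance barely grows inside homogeneous tissue but jumps sharply at intensity boundaries, a single interior seed plus a distance cutoff can delineate a lesion, which is what makes the approach fast and robust to resolution changes.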
Automated Tumor Volumetry Using Computer-Aided Image Segmentation
Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A.; Ali, Zarina S.; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M.; Davatzikos, Christos
2015-01-01
Rationale and Objectives Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. Materials and Methods A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Results Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0–5 rating scale where 5 indicated perfect segmentation. Conclusions The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. PMID:25770633
NASA Astrophysics Data System (ADS)
Irshad, Mehreen; Muhammad, Nazeer; Sharif, Muhammad; Yasmeen, Mussarat
2018-04-01
Conventionally, cardiac MR image analysis is done manually. Automatic examination can replace the monotonous task of analyzing massive amounts of data to assess the global and regional function of the cardiac left ventricle (LV). This task is performed using MR images to calculate analytic cardiac parameters such as end-systolic volume, end-diastolic volume, ejection fraction, and myocardial mass. These analytic parameters depend upon accurate delineation of the epicardial, endocardial, papillary muscle, and trabeculation contours. In this paper, we propose an automatic segmentation method using the sum of absolute differences technique to localize the left ventricle. Blind morphological operations are proposed to automatically segment and detect the LV contours of the epicardium and endocardium. We evaluate the proposed work on the benchmark Sunnybrook dataset. Contours of the epicardium and endocardium are compared quantitatively to determine contour accuracy, and high matching values are observed. Overlap between the automatic segmentation and the ground truth delineated by an expert is high, with an index value of 91.30%. The proposed method for automatic segmentation gives better performance relative to existing techniques in terms of accuracy.
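Localization by the sum of absolute differences (SAD) is an exhaustive template search: the template is slid over the image and the placement with the smallest total absolute intensity difference wins. A minimal 2-D sketch (the brute-force search, not the paper's optimized pipeline):

```python
import numpy as np

def sad_localize(image, template):
    """Locate a template by minimizing the sum of absolute differences
    over all valid placements (exhaustive search)."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            sad = np.abs(image[r:r + th, c:c + tw] - template).sum()
            if sad < best:
                best, best_pos = sad, (r, c)
    return best_pos, best
```

In an LV-localization setting the template would be a reference patch of the ventricle; here the method is shown recovering an exact sub-image, where the SAD at the true position is zero.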
Three-dimensional segmentation of the tumor mass in computed tomographic images of neuroblastoma
NASA Astrophysics Data System (ADS)
Deglint, Hanford J.; Rangayyan, Rangaraj M.; Boag, Graham S.
2004-05-01
Tumor definition and diagnosis require the analysis of the spatial distribution and Hounsfield unit (HU) values of voxels in computed tomography (CT) images, coupled with a knowledge of normal anatomy. Segmentation of the tumor in neuroblastoma is complicated by the fact that the mass is almost always heterogeneous in nature; furthermore, viable tumor, necrosis, fibrosis, and normal tissue are often intermixed. Rather than attempt to separate these tissue types into distinct regions, we propose to explore methods to delineate the normal structures expected in abdominal CT images, remove them from further consideration, and examine the remaining parts of the images for the tumor mass. We explore the use of fuzzy connectivity for this purpose. Expert knowledge provided by the radiologist in the form of the expected structures and their shapes, HU values, and radiological characteristics are also incorporated in the segmentation algorithm. Segmentation and analysis of the tissue composition of the tumor can assist in quantitative assessment of the response to chemotherapy and in the planning of delayed surgery for resection of the tumor. The performance of the algorithm is evaluated using cases acquired from the Alberta Children's Hospital.
NASA Astrophysics Data System (ADS)
Choi, Yong-Seok; Cho, Jae-Hwan; Namgung, Jang-Sun; Kim, Hyo-Jin; Yoon, Dae-Young; Lee, Han-Joo
2013-05-01
This study performed a comparative analysis of cerebral blood volume (CBV), cerebral blood flow (CBF), mean transit time (MTT), and time-to-peak (TTP) values obtained by changing the anatomical position of the region of interest (ROI) during CT brain perfusion. We acquired axial source images of perfusion CT from 20 patients undergoing CT perfusion exams due to brain trauma. Subsequently, the CBV, CBF, MTT, and TTP values were calculated through data-processing of the perfusion CT images. The color scales for the CBV, CBF, MTT, and TTP maps were obtained using the image data. The anterior cerebral artery (ACA) was taken as the standard ROI for the calculation of the perfusion values. Differences in the average hemodynamic values were compared in a quantitative analysis by placing ROIs and dividing the axial images anatomically into proximal, middle, and distal segments. In a qualitative analysis using a blind test, we observed changes in the visual characteristics of the color scales of the CBV, CBF, and MTT maps in the proximal, middle, and distal segments. According to the qualitative analysis, no differences were found in the CBV, CBF, MTT, and TTP values of the proximal, middle, and distal segments, and no changes were detected in the color scales of the CBV, CBF, MTT, and TTP maps in these segments. We anticipate that the results of this study will be useful in assessing brain trauma patients using perfusion imaging.
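Of the four parameters compared above, TTP is the simplest to derive from the source images: it is the acquisition time at which each voxel's time-attenuation curve peaks. A sketch, with the per-ROI averaging implied by the study design; the array layout and helper names are illustrative assumptions.

```python
import numpy as np

def ttp_map(series, times):
    """Time-to-peak map from a perfusion series shaped (time, rows, cols):
    the acquisition time at which each voxel's attenuation peaks."""
    series = np.asarray(series, float)
    peak_idx = series.argmax(axis=0)          # per-voxel index of the peak
    return np.asarray(times, float)[peak_idx]

def roi_mean(param_map, mask):
    """Average a parameter map over an ROI mask, as done per segment."""
    return float(param_map[mask].mean())
```

CBV, CBF, and MTT require deconvolution against an arterial input function (here taken from the ACA) and are not shown; the same ROI-averaging step applies to their maps.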
Pouch, Alison M.; Tian, Sijie; Takabe, Manabu; Wang, Hongzhi; Yuan, Jiefu; Cheung, Albert T.; Jackson, Benjamin M.; Gorman, Joseph H.; Gorman, Robert C.; Yushkevich, Paul A.
2015-01-01
3D echocardiographic (3DE) imaging is a useful tool for assessing the complex geometry of the aortic valve apparatus. Segmentation of this structure in 3DE images is a challenging task that benefits from shape-guided deformable modeling methods, which enable inter-subject statistical shape comparison. Prior work demonstrates the efficacy of using continuous medial representation (cm-rep) as a shape descriptor for valve leaflets. However, its application to the entire aortic valve apparatus is limited since the structure has a branching medial geometry that cannot be explicitly parameterized in the original cm-rep framework. In this work, we show that the aortic valve apparatus can be accurately segmented using a new branching medial modeling paradigm. The segmentation method achieves a mean boundary displacement of 0.6 ± 0.1 mm (approximately one voxel) relative to manual segmentation on 11 3DE images of normal open aortic valves. This study demonstrates a promising approach for quantitative 3DE analysis of aortic valve morphology. PMID:26247062
Du, Yuncheng; Budman, Hector M; Duever, Thomas A
2016-06-01
Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images in combination with the trained SVM classifier is more efficient in terms of differentiation accuracy as compared with the original deterministic level set method.
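The abstract above uses the curvature and length of the cell boundary as morphological inputs to the SVM classifier, but does not give the exact feature definitions. As a hedged sketch (not the authors' implementation), boundary length and a discrete curvature proxy, here taken as mean absolute turning angle per unit length, for a polygonal cell boundary might be computed as:

```python
import math

def boundary_features(points):
    """Morphological features of a closed 2D boundary given as (x, y) vertices:
    total perimeter length and mean absolute turning angle per unit length
    (a discrete proxy for boundary curvature)."""
    n = len(points)
    length = 0.0
    total_turn = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        x2, y2 = points[(i + 2) % n]
        # length of the edge (x0, y0) -> (x1, y1)
        length += math.hypot(x1 - x0, y1 - y0)
        # exterior (turning) angle at vertex (x1, y1), wrapped to (-pi, pi]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        turn = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi
        total_turn += abs(turn)
    return length, total_turn / length

# For a square of side 2 the perimeter is 8 and the total turning is 2*pi
square = [(0, 0), (2, 0), (2, 2), (0, 2)]
length, curv = boundary_features(square)
```

Features of this kind, evaluated over the probabilistic boundary ensemble rather than a single contour, would then form the SVM input vectors.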
A JOINT FRAMEWORK FOR 4D SEGMENTATION AND ESTIMATION OF SMOOTH TEMPORAL APPEARANCE CHANGES.
Gao, Yang; Prastawa, Marcel; Styner, Martin; Piven, Joseph; Gerig, Guido
2014-04-01
Medical imaging studies increasingly use longitudinal images of individual subjects in order to follow changes due to development, degeneration, disease progression or efficacy of therapeutic intervention. Repeated image data of individuals are highly correlated, and the strong causality of information over time has led to the development of procedures for joint segmentation of a series of scans, called 4D segmentation. A main aim is improved consistency of quantitative analysis, most often achieved via patient-specific atlases. Challenging open problems are contrast changes and the occurrence of subclasses within tissue, as observed in multimodal MRI of infant development, neurodegeneration and disease. This paper proposes a new 4D segmentation framework that enforces continuous dynamic changes of tissue contrast patterns over time as observed in such data. Moreover, our model includes the capability to segment different contrast patterns within a specific tissue class, for example as seen in myelinated and unmyelinated white matter regions in early brain development. Proof of concept is shown with validation on synthetic image data and with 4D segmentation of longitudinal, multimodal pediatric MRI taken at 6, 12 and 24 months of age, but the methodology is generic with respect to different application domains using serial imaging.
NASA Astrophysics Data System (ADS)
Contrella, Benjamin; Tustison, Nicholas J.; Altes, Talissa A.; Avants, Brian B.; Mugler, John P., III; de Lange, Eduard E.
2012-03-01
Although 3He MRI permits compelling visualization of the pulmonary air spaces, quantitation of absolute ventilation is difficult due to confounds such as field inhomogeneity and relative intensity differences between image acquisitions; the latter complicates longitudinal investigations of ventilation variation with respiratory alterations. To address these difficulties, we present a 4-D segmentation and normalization approach for intra-subject quantitative analysis of hyperpolarized 3He lung MRI. After normalization, which combines bias correction and relative intensity scaling between longitudinal data, partitioning of the lung volume time series is performed by iterating between modeling of the combined intensity histogram as a Gaussian mixture model and modulating the spatially heterogeneous tissue class assignments through Markov random field modeling. The algorithm was evaluated retrospectively on a cohort of 10 asthmatics aged 19-25 years in which spirometry and 3He MR ventilation images were acquired both before and after respiratory exacerbation by a bronchoconstricting agent (methacholine). Acquisition was repeated under the same conditions 7 to 467 days (mean +/- standard deviation: 185 +/- 37.2) later. Several techniques were evaluated for matching intensities between the pre- and post-methacholine images, with 95th-percentile histogram matching demonstrating superior correlations with spirometry measures. Subsequent analysis evaluated segmentation parameters for assessing ventilation change in this cohort. Current findings also support previous research that areas of poor ventilation in response to bronchoconstriction are relatively consistent over time.
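The intensity-matching step above scales one acquisition so that a chosen percentile agrees with the reference scan. The paper's implementation is not reproduced here; a minimal sketch of 95th-percentile matching (hypothetical helper names) could look like:

```python
def percentile(values, q):
    """Linearly interpolated percentile (q in [0, 100]) of a sequence of numbers."""
    s = sorted(values)
    if len(s) == 1:
        return s[0]
    pos = (len(s) - 1) * q / 100.0
    lo = int(pos)
    frac = pos - lo
    hi = min(lo + 1, len(s) - 1)
    return s[lo] * (1 - frac) + s[hi] * frac

def match_95th(moving, reference):
    """Scale 'moving' intensities so its 95th percentile equals the reference's."""
    scale = percentile(reference, 95) / percentile(moving, 95)
    return [v * scale for v in moving]
```

A single multiplicative scale keeps relative ventilation differences within the scan intact while making longitudinal intensities comparable.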
Arabidopsis phenotyping through Geometric Morphometrics.
Manacorda, Carlos A; Asurmendi, Sebastian
2018-06-18
Recently, much technical progress was achieved in the field of plant phenotyping. High-throughput platforms and the development of improved algorithms for rosette image segmentation make it now possible to extract shape and size parameters for genetic, physiological and environmental studies on a large scale. The development of low-cost phenotyping platforms and freeware resources make it possible to widely expand phenotypic analysis tools for Arabidopsis. However, objective descriptors of shape parameters that could be used independently of platform and segmentation software used are still lacking and shape descriptions still rely on ad hoc or even sometimes contradictory descriptors, which could make comparisons difficult and perhaps inaccurate. Modern geometric morphometrics is a family of methods in quantitative biology proposed to be the main source of data and analytical tools in the emerging field of phenomics studies. Based on the location of landmarks (corresponding points) over imaged specimens and by combining geometry, multivariate analysis and powerful statistical techniques, these tools offer the possibility to reproducibly and accurately account for shape variations amongst groups and measure them in shape distance units. Here, a particular scheme of landmarks placement on Arabidopsis rosette images is proposed to study shape variation in the case of viral infection processes. Shape differences between controls and infected plants are quantified throughout the infectious process and visualized. Quantitative comparisons between two unrelated ssRNA+ viruses are shown and reproducibility issues are assessed. Combined with the newest automated platforms and plant segmentation procedures, geometric morphometric tools could boost phenotypic features extraction and processing in an objective, reproducible manner.
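Geometric morphometrics, as described above, measures shape differences in shape distance units after removing translation, scale and rotation. As an illustrative sketch (not the authors' code), the full Procrustes distance between two 2D landmark configurations can be computed compactly with complex arithmetic:

```python
import math

def full_procrustes_distance(shape_a, shape_b):
    """Full Procrustes distance between two 2D landmark configurations
    (lists of corresponding (x, y) points), invariant to translation,
    scale and rotation."""
    za = [complex(x, y) for x, y in shape_a]
    zb = [complex(x, y) for x, y in shape_b]
    # remove translation by centering each configuration
    ca = sum(za) / len(za)
    cb = sum(zb) / len(zb)
    za = [z - ca for z in za]
    zb = [z - cb for z in zb]
    # remove scale by normalizing to unit centroid size
    na = math.sqrt(sum(abs(z) ** 2 for z in za))
    nb = math.sqrt(sum(abs(z) ** 2 for z in zb))
    za = [z / na for z in za]
    zb = [z / nb for z in zb]
    # the optimal rotation is absorbed by taking the modulus of the inner product
    inner = abs(sum(a.conjugate() * b for a, b in zip(za, zb)))
    return math.sqrt(max(0.0, 1.0 - inner ** 2))
```

Two rosettes that differ only by a similarity transform yield a distance of zero, while genuinely different shapes yield a positive distance, which is what makes such measures comparable across platforms and segmentation software.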
NASA Astrophysics Data System (ADS)
Vyas, N.; Sammons, R. L.; Addison, O.; Dehghani, H.; Walmsley, A. D.
2016-09-01
Biofilm accumulation on biomaterial surfaces is a major health concern, and significant research efforts are directed towards producing biofilm-resistant surfaces and developing biofilm removal techniques. To evaluate biofilm growth and disruption on surfaces, methods that give accurate quantitative information on biofilm area are needed, as current methods are indirect and imprecise. We demonstrate the use of machine learning algorithms to segment biofilm from scanning electron microscopy images. A case study showing disruption of biofilm from rough dental implant surfaces using cavitation bubbles from an ultrasonic scaler is used to validate the imaging and analysis protocol developed. Streptococcus mutans biofilm was disrupted from sandblasted, acid-etched (SLA) Ti discs and polished Ti discs. Significant biofilm removal occurred due to cavitation from ultrasonic scaling (p < 0.001). The mean sensitivity and specificity values for segmentation were 0.80 ± 0.18 and 0.62 ± 0.20 respectively for the SLA surface images, and 0.74 ± 0.13 and 0.86 ± 0.09 respectively for polished surfaces. Cavitation has potential to be used as a novel way to clean dental implants. This imaging and analysis method will be of value to other researchers and manufacturers wishing to study biofilm growth and removal.
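The sensitivity and specificity figures quoted above are standard pixel-wise measures of a binary segmentation against a reference mask. A minimal sketch of their computation (illustrative, not the authors' pipeline):

```python
def sensitivity_specificity(pred, truth):
    """Pixel-wise sensitivity and specificity of a binary segmentation
    against a ground-truth mask (flat sequences of 0/1 labels)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

Sensitivity measures how much true biofilm is recovered; specificity measures how well the background (the implant surface) is rejected.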
Zhao, Fengjun; Liang, Jimin; Chen, Xueli; Liu, Junting; Chen, Dongmei; Yang, Xiang; Tian, Jie
2016-03-01
Previous studies showed that both morphological and topological vascular parameters are affected by changes in imaging resolution. However, neither the sensitivity of the vascular parameters at multiple resolutions nor the distinguishability of vascular parameters between different data groups has been discussed. In this paper, we proposed a quantitative analysis method for vascular networks at multiple resolutions, analyzing the sensitivity of vascular parameters across resolutions and estimating the distinguishability of vascular parameters between different data groups. Combining sensitivity and distinguishability, we designed a hybrid formulation to estimate the integrated performance of vascular parameters in a multi-resolution framework. Among the vascular parameters, degree of anisotropy and junction degree were two insensitive parameters that were nearly unaffected by resolution degradation; vascular area, connectivity density, vascular length, vascular junction and segment number were five parameters that could better distinguish vascular networks from different groups and agreed with the ground truth. Vascular area, connectivity density, vascular length and segment number were not only insensitive to resolution but could also better distinguish vascular networks from different groups, which provides guidance for the quantification of vascular networks in multi-resolution frameworks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardisty, M.; Gordon, L.; Agarwal, P.
2007-08-15
Quantitative assessment of metastatic disease in bone is often considered immeasurable and, as such, patients with skeletal metastases are often excluded from clinical trials. In order to effectively quantify the impact of metastatic tumor involvement in the spine, accurate segmentation of the vertebra is required. Manual segmentation can be accurate but involves extensive and time-consuming user interaction. Potential solutions for automating segmentation of metastatically involved vertebrae are demons deformable image registration and level set methods. The purpose of this study was to develop a semiautomated method to accurately segment tumor-bearing vertebrae using the aforementioned techniques. By maintaining the morphology of an atlas, the demons-level set composite algorithm was able to accurately differentiate between trans-cortical tumors and surrounding soft tissue of identical intensity. The algorithm successfully segmented both the vertebral body and trabecular centrum of tumor-involved and healthy vertebrae. This work validates our approach as equivalent in accuracy to an experienced user.
Geiger, Daniel; Bae, Won C.; Statum, Sheronda; Du, Jiang; Chung, Christine B.
2014-01-01
Objective Temporomandibular dysfunction involves osteoarthritis of the TMJ, including degeneration and morphologic changes of the mandibular condyle. The purpose of this study was to determine the accuracy of novel 3D-UTE MRI versus micro-CT (μCT) for quantitative evaluation of mandibular condyle morphology. Material & Methods Nine TMJ condyle specimens were harvested from cadavers (2M, 3F; Age 85 ± 10 yrs., mean±SD). 3D-UTE MRI (TR=50 ms, TE=0.05 ms, 104 μm isotropic voxel) was performed using a 3-T MR scanner, and μCT (18 μm isotropic voxel) was performed. MR datasets were spatially registered with the μCT dataset. Two observers segmented bony contours of the condyles. Fibrocartilage was segmented on the MR dataset. Using a custom program, bone and fibrocartilage surface coordinates, Gaussian curvature, volume of segmented regions and fibrocartilage thickness were determined for quantitative evaluation of joint morphology. Agreement between techniques (MRI vs. μCT) and observers (MRI vs. MRI) for Gaussian curvature, mean curvature and segmented volume of the bone was determined using intraclass correlation coefficient (ICC) analyses. Results Between MRI and μCT, the average deviation of surface coordinates was 0.19±0.15 mm, slightly higher than the spatial resolution of MRI. Average deviation of the Gaussian curvature and volume of segmented regions, from MRI to μCT, was 5.7±6.5% and 6.6±6.2%, respectively. ICC coefficients (MRI vs. μCT) for Gaussian curvature, mean curvature and segmented volumes were 0.892, 0.893 and 0.972, respectively. Between observers (MRI vs. MRI), the ICC coefficients were 0.998, 0.999 and 0.997, respectively. Fibrocartilage thickness was 0.55±0.11 mm, as previously described in the literature for grossly normal TMJ samples. Conclusion 3D-UTE MR quantitative evaluation of TMJ condyle morphology ex vivo, including surface, curvature and segmented volume, shows high agreement with μCT and between observers.
In addition, UTE MRI allows quantitative evaluation of the fibrocartilaginous condylar component. PMID:24092237
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Chuan; Chan, H.-P.; Sahiner, Berkman
2007-12-15
The authors are developing a computerized pulmonary vessel segmentation method for a computer-aided pulmonary embolism (PE) detection system on computed tomographic pulmonary angiography (CTPA) images. Because PE only occurs inside pulmonary arteries, an automatic and accurate segmentation of the pulmonary vessels in 3D CTPA images is an essential step for the PE CAD system. To segment the pulmonary vessels within the lung, the lung regions are first extracted using expectation-maximization (EM) analysis and morphological operations. The authors developed a 3D multiscale filtering technique to enhance the pulmonary vascular structures based on the analysis of eigenvalues of the Hessian matrix at multiple scales. A new response function of the filter was designed to enhance all vascular structures including the vessel bifurcations and suppress nonvessel structures such as the lymphoid tissues surrounding the vessels. An EM estimation is then used to segment the vascular structures by extracting the high response voxels at each scale. The vessel tree is finally reconstructed by integrating the segmented vessels at all scales based on a 'connected component' analysis. Two CTPA cases containing PEs were used to evaluate the performance of the system. One of these two cases also contained pleural effusion disease. Two experienced thoracic radiologists provided the gold standard of pulmonary vessels including both arteries and veins by manually tracking the arterial tree and marking the center of the vessels using a computer graphical user interface. The accuracy of vessel tree segmentation was evaluated by the percentage of the 'gold standard' vessel center points overlapping with the segmented vessels. The results show that 96.2% (2398/2494) and 96.3% (1910/1984) of the manually marked center points in the arteries overlapped with segmented vessels for the cases without and with other lung diseases, respectively.
For the manually marked center points in all vessels including arteries and veins, the segmentation accuracies were 97.0% (4546/4689) and 93.8% (4439/4732) for the cases without and with other lung diseases, respectively. Because of the lack of ground truth for the vessels, in addition to quantitative evaluation of the vessel segmentation performance, visual inspection was conducted to evaluate the segmentation. The results demonstrate that vessel segmentation using our method can extract the pulmonary vessels accurately and is not degraded by PE occlusion of the vessels in these test cases.
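The multiscale enhancement above scores each voxel from the eigenvalues of the local Hessian. The paper's own response function is not reproduced here; as a hedged baseline, the classical Frangi vesselness for bright tubular structures on a dark background can be written as:

```python
import math

def frangi_vesselness(eigvals, alpha=0.5, beta=0.5, c=15.0):
    """Classical Frangi vesselness for one voxel, given the three Hessian
    eigenvalues. Bright tubes on a dark background have lambda1 ~ 0 and
    lambda2, lambda3 strongly negative. alpha, beta, c are the usual
    plate/blob/noise sensitivity parameters."""
    l1, l2, l3 = sorted(eigvals, key=abs)  # |l1| <= |l2| <= |l3|
    if l2 >= 0 or l3 >= 0:
        return 0.0  # not a bright tubular structure
    ra = abs(l2) / abs(l3)                      # distinguishes plates from lines
    rb = abs(l1) / math.sqrt(abs(l2 * l3))      # deviation from a blob
    s = math.sqrt(l1 * l1 + l2 * l2 + l3 * l3)  # second-order structure strength
    return ((1 - math.exp(-ra * ra / (2 * alpha ** 2)))
            * math.exp(-rb * rb / (2 * beta ** 2))
            * (1 - math.exp(-s * s / (2 * c ** 2))))
```

A tube-like eigenvalue pattern (one eigenvalue near zero, two large and negative) scores much higher than a plate-like one, which is the property the filter exploits at each scale.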
Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin
2013-01-01
Summary Background Automated analysis of imaged histopathology specimens could potentially provide support for improved reliability in detection and classification in a range of investigative and clinical cancer applications. Automated segmentation of cells in the digitized tissue microarray (TMA) is often the prerequisite for quantitative analysis. However, overlapping cells pose significant challenges for traditional segmentation algorithms. Objectives In this paper, we propose a novel, automatic algorithm to separate overlapping cells in stained histology specimens acquired using bright-field RGB imaging. Methods It starts by systematically identifying salient regions of interest throughout the image based upon their underlying visual content. The segmentation algorithm subsequently performs a quick, voting-based seed detection. Finally, the contour of each cell is obtained using a repulsive level set deformable model with the seeds generated in the previous step. We compared the experimental results with the most current literature, and computed the pixel-wise accuracy between human experts' annotations and those generated using the automatic segmentation algorithm. Results The method was tested on 100 image patches containing more than 1000 overlapping cells. The overall precision and recall of the developed algorithm are 90% and 78%, respectively. We also implemented the algorithm on a GPU. The parallel implementation is 22 times faster than its C/C++ sequential implementation. Conclusion The proposed overlapping cell segmentation algorithm can accurately detect the center of each overlapping cell and effectively separate each of the overlapping cells. The GPU is shown to be an efficient parallel platform for overlapping cell segmentation. PMID:22526139
Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.
2014-01-01
Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953
Schlaeger, Sarah; Freitag, Friedemann; Klupp, Elisabeth; Dieckmeyer, Michael; Weidlich, Dominik; Inhuber, Stephanie; Deschauer, Marcus; Schoser, Benedikt; Bublitz, Sarah; Montagnese, Federica; Zimmer, Claus; Rummeny, Ernst J; Karampinos, Dimitrios C; Kirschke, Jan S; Baum, Thomas
2018-01-01
Magnetic resonance imaging (MRI) can non-invasively assess muscle anatomy, exercise effects and pathologies with different underlying causes such as neuromuscular diseases (NMD). Quantitative MRI including fat fraction mapping using chemical shift encoding-based water-fat MRI has emerged for reliable determination of muscle volume and fat composition. The data analysis of water-fat images requires segmentation of the different muscles, which has mainly been performed manually in the past and is a very time-consuming process, currently limiting clinical applicability. Automation of the segmentation process would enable a more time-efficient analysis. In the present work, the manually segmented thigh magnetic resonance imaging database MyoSegmenTUM is presented. It hosts water-fat MR images of both thighs of 15 healthy subjects and 4 patients with NMD, with a voxel size of 3.2×2×4 mm³, together with the corresponding segmentation masks for four functional muscle groups: quadriceps femoris, sartorius, gracilis and hamstrings. The database is freely accessible online at https://osf.io/svwa7/?view_only=c2c980c17b3a40fca35d088a3cdd83e2. The database is mainly meant as ground truth for use as a training and test dataset for automatic muscle segmentation algorithms. The segmentation allows extraction of muscle cross-sectional area (CSA) and volume. Proton density fat fraction (PDFF) of the defined muscle groups from the corresponding images and quadriceps muscle strength measurements/neurological muscle strength ratings can be used for benchmarking purposes.
Segmentation of cortical bone using fast level sets
NASA Astrophysics Data System (ADS)
Chowdhury, Manish; Jörgens, Daniel; Wang, Chunliang; Smedby, Örjan; Moreno, Rodrigo
2017-02-01
Cortical bone plays a major role in the mechanical competence of bone, and its analysis requires accurate segmentation methods. Level set methods are among the state of the art for segmenting medical images, but traditional implementations are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through High Resolution peripheral Quantitative Computed Tomography (HR-pQCT). The obtained segmentations are used to estimate the cortical thickness and cortical porosity of the investigated images; cortical thickness is computed using sphere fitting and cortical porosity using mathematical morphological operations. Qualitative comparison between the segmentations of our proposed algorithm and a previously published approach on six image volumes reveals superior smoothness of the level set approach. While the proposed method yields results similar to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions, which results in more stable estimation of cortical bone parameters. The proposed technique takes a few seconds to compute, which makes it suitable for clinical settings.
Rudyanto, Rina D.; Kerkstra, Sjoerd; van Rikxoort, Eva M.; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, İlkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C.; Washko, George R.; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C.; Fabijanska, Anna; Smistad, Erik; Elster, Anne C.; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J.; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G.H.; Campo, Arantza; Prokop, Mathias; de Jong, Pim A.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram
2016-01-01
The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer-aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real-world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at the International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases. PMID:25113321
NASA Astrophysics Data System (ADS)
Luo, Yun-Gang; Ko, Jacky Kl; Shi, Lin; Guan, Yuefeng; Li, Linong; Qin, Jing; Heng, Pheng-Ann; Chu, Winnie Cw; Wang, Defeng
2015-07-01
Myocardial iron loading in thalassemia patients can be identified using T2* magnetic resonance imaging (MRI). To quantitatively assess cardiac iron loading, we proposed an effective algorithm to segment the myocardium in aligned free induction decay sequential images based on morphological operations and the geodesic active contour (GAC). Nine patients with thalassemia major (10 male and 16 female) were recruited to undergo a thoracic MRI scan in the short axis view. Free induction decay images were registered for T2* mapping. The GAC was utilized to segment the aligned MR images with a robust initialization. Segmented myocardium regions were divided into sectors for a region-based quantification of cardiac iron loading. Our proposed automatic segmentation approach achieved a true positive rate of 84.6% and a false positive rate of 53.8%. The area difference between manual and automatic segmentation was 25.5% after 1000 iterations. T2* analysis indicated that regions with T2* values lower than 20 ms suffered from heavy iron loading in thalassemia major patients. The proposed method benefits from the abundant edge information of the free induction decay sequential MRI. Experimental results demonstrate that the proposed method is feasible for myocardium segmentation and clinically applicable for measuring myocardial iron loading.
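The T2* mapping mentioned above is conventionally obtained by fitting a monoexponential decay S(TE) = S0 · exp(-TE/T2*) to the multi-echo signal at each region. The paper's fitting details are not given; a minimal log-linear least-squares sketch (hypothetical function name, synthetic data) is:

```python
import math

def fit_t2star(te_ms, signal):
    """Estimate T2* (in ms) from multi-echo magnitudes by a log-linear
    least-squares fit of S(TE) = S0 * exp(-TE / T2*)."""
    xs = te_ms
    ys = [math.log(s) for s in signal]  # log turns the decay into a line
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -1.0 / slope  # slope = -1/T2*

# Synthetic decay with T2* = 15 ms, i.e. below the 20 ms iron-loading threshold
tes = [2.0, 4.0, 8.0, 12.0, 16.0]
sig = [100.0 * math.exp(-te / 15.0) for te in tes]
```

In practice the fit is applied per sector of the segmented myocardium, and sectors with T2* below 20 ms are flagged as heavily iron loaded.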
Object Segmentation and Ground Truth in 3D Embryonic Imaging.
Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C
2016-01-01
Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.
Prognostic Value of Quantitative Stress Perfusion Cardiac Magnetic Resonance.
Sammut, Eva C; Villa, Adriana D M; Di Giovine, Gabriella; Dancy, Luke; Bosio, Filippo; Gibbs, Thomas; Jeyabraba, Swarna; Schwenke, Susanne; Williams, Steven E; Marber, Michael; Alfakih, Khaled; Ismail, Tevfik F; Razavi, Reza; Chiribiri, Amedeo
2018-05-01
This study sought to evaluate the prognostic usefulness of visual and quantitative perfusion cardiac magnetic resonance (CMR) ischemic burden in an unselected group of patients and to assess the validity of consensus-based ischemic burden thresholds extrapolated from nuclear studies. There are limited data on the prognostic value of assessing myocardial ischemic burden by CMR, and there are none using quantitative perfusion analysis. Patients with suspected coronary artery disease referred for adenosine-stress perfusion CMR were included (n = 395; 70% male; age 58 ± 13 years). The primary endpoint was a composite of cardiovascular death, nonfatal myocardial infarction, aborted sudden death, and revascularization after 90 days. Perfusion scans were assessed visually and with quantitative analysis. Cross-validated Cox regression analysis and net reclassification improvement were used to assess the incremental prognostic value of visual or quantitative perfusion analysis over a baseline clinical model, initially as continuous covariates, then using accepted thresholds of ≥2 segments or ≥10% myocardium. After a median 460 days (interquartile range: 190 to 869 days) follow-up, 52 patients reached the primary endpoint. At 2 years, the addition of ischemic burden was found to increase prognostic value over a baseline model of age, sex, and late gadolinium enhancement (baseline model area under the curve [AUC]: 0.75; visual AUC: 0.84; quantitative AUC: 0.85). Dichotomized quantitative ischemic burden performed better than visual assessment (net reclassification improvement 0.043 vs. 0.003 against baseline model). This study was the first to address the prognostic benefit of quantitative analysis of perfusion CMR and to support the use of consensus-based ischemic burden thresholds by perfusion CMR for prognostic evaluation of patients with suspected coronary artery disease. 
Quantitative analysis provided incremental prognostic value to visual assessment and established risk factors, potentially representing an important step forward in the translation of quantitative CMR perfusion analysis to the clinical setting. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Habas, Piotr A.; Kim, Kio; Chandramohan, Dharshan; Rousseau, Francois; Glenn, Orit A.; Studholme, Colin
2009-02-01
Recent advances in MR and image analysis allow for reconstruction of high-resolution 3D images from clinical in utero scans of the human fetal brain. Automated segmentation of tissue types from MR images (MRI) is a key step in the quantitative analysis of brain development. Conventional atlas-based methods for adult brain segmentation are limited in their ability to accurately delineate complex structures of developing tissues from fetal MRI. In this paper, we formulate a novel geometric representation of the fetal brain aimed at capturing the laminar structure of developing anatomy. The proposed model uses a depth-based encoding of tissue occurrence within the fetal brain and provides an additional anatomical constraint in the form of a laminar prior that can be incorporated into conventional atlas-based EM segmentation. Validation experiments are performed using clinical in utero scans of 5 fetal subjects at gestational ages ranging from 20.5 to 22.5 weeks. Experimental results are evaluated against reference manual segmentations and quantified in terms of the Dice similarity coefficient (DSC). The study demonstrates that the use of laminar depth-encoded tissue priors improves both the overall accuracy and precision of fetal brain segmentation. Particular refinement is observed in regions of the parietal and occipital lobes, where the DSC index is improved from 0.81 to 0.82 for cortical grey matter, from 0.71 to 0.73 for the germinal matrix, and from 0.81 to 0.87 for white matter.
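The Dice similarity coefficient used for this evaluation has a standard definition that can be sketched as follows (a minimal illustration over sets of foreground voxels, not the authors' implementation):

```python
def dice_coefficient(seg, ref):
    """Dice similarity coefficient between two binary segmentations.

    seg, ref: iterables of voxel coordinates labeled as foreground.
    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.
    """
    a, b = set(seg), set(ref)
    if not a and not b:
        return 1.0  # both empty: define as perfect agreement
    return 2.0 * len(a & b) / (len(a) + len(b))
```

For example, two masks that share one of their two voxels each score 2·1/(2+2) = 0.5.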
Quantitative imaging of aggregated emulsions.
Penfold, Robert; Watson, Andrew D; Mackie, Alan R; Hibberd, David J
2006-02-28
Noise reduction, restoration, and segmentation methods are developed for the quantitative structural analysis in three dimensions of aggregated oil-in-water emulsion systems imaged by fluorescence confocal laser scanning microscopy. Mindful of typical industrial formulations, the methods are demonstrated for concentrated (30% volume fraction) and polydisperse emulsions. Following a regularized deconvolution step using an analytic optical transfer function and appropriate binary thresholding, novel application of the Euclidean distance map provides effective discrimination of closely clustered emulsion droplets with size variation over at least 1 order of magnitude. The a priori assumption of spherical nonintersecting objects provides crucial information to combat the ill-posed inverse problem presented by locating individual particles. Position coordinates and size estimates are recovered with sufficient precision to permit quantitative study of static geometrical features. In particular, aggregate morphology is characterized by a novel void distribution measure based on the generalized Apollonius problem. This is also compared with conventional Voronoi/Delaunay analysis.
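The Euclidean distance map idea can be illustrated with a brute-force sketch: each foreground pixel is assigned its distance to the nearest background pixel, so droplet centres emerge as local maxima even when droplets touch. This is for illustration only; real analyses would use an optimized transform such as `scipy.ndimage.distance_transform_edt`.

```python
import numpy as np

def euclidean_distance_map(mask):
    """Distance from each foreground pixel to the nearest background pixel.

    mask: 2D boolean array (True = droplet phase). Brute-force O(N*M)
    sketch; local maxima of the returned map mark candidate droplet
    centres, which discriminates closely clustered objects.
    """
    bg = np.argwhere(~mask)
    dm = np.zeros(mask.shape)
    for p in np.argwhere(mask):
        dm[tuple(p)] = np.sqrt(((bg - p) ** 2).sum(axis=1)).min()
    return dm
```

On a 5x5 image with a 3x3 foreground square, the centre pixel lies at distance 2 from the nearest background pixel, while the corner pixels of the square lie at distance 1.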
Compatibility of segmented thermoelectric generators
NASA Technical Reports Server (NTRS)
Snyder, J.; Ursell, T.
2002-01-01
It is well known that power generation efficiency improves when materials with appropriate properties are combined either in a cascaded or segmented fashion across a temperature gradient. Past methods for determining materials used in segmentation were mainly concerned with materials that have the highest figure of merit in the temperature range. However, the example of SiGe segmented with Bi2Te3 and/or various skutterudites shows a marked decline in device efficiency even though SiGe has the highest figure of merit in the temperature range. The origin of the incompatibility of SiGe with other thermoelectric materials leads to a general definition of compatibility and intrinsic efficiency. The compatibility factor, derived as s = (√(1 + zT) − 1)/(αT), is a function of only intrinsic material properties and temperature, and represents the ratio of current to conduction heat. For maximum efficiency the compatibility factor should not change with temperature, both within a single material and in the segmented leg as a whole. This leads to a measure of compatibility not only between segments, but also within a segment. General temperature trends show that materials are more self-compatible at higher temperatures, and segmentation is more difficult across a larger ΔT. The compatibility factor can be used as a quantitative guide for deciding whether a material is better suited for segmentation or cascading. Analysis of compatibility factors and intrinsic efficiency for optimal segmentation are discussed, with intent to predict optimal material properties, temperature interfaces, and/or current/heat ratios.
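Using the standard notation (figure of merit z, Seebeck coefficient α, absolute temperature T), the compatibility factor can be evaluated directly; the sketch below only restates the formula, with illustrative input values:

```python
import math

def compatibility_factor(z, alpha, T):
    """Thermoelectric compatibility factor s = (sqrt(1 + zT) - 1) / (alpha * T).

    z: figure of merit (1/K), alpha: Seebeck coefficient (V/K),
    T: absolute temperature (K). Returns s in A/W, the ratio of
    electrical current to conduction heat at maximum efficiency.
    """
    return (math.sqrt(1.0 + z * T) - 1.0) / (alpha * T)
```

With zT = 3 and α = 200 μV/K at T = 1000 K, for instance, s = (2 − 1)/(2·10⁻⁴ · 1000) = 5 A/W. Materials whose s values differ by more than roughly a factor of two are poor candidates for segmentation with each other.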
NASA Astrophysics Data System (ADS)
Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.
2017-02-01
Computerized assessments for diagnosis or malignancy grading of cyto-histopathological specimens have drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation is still a challenging task due to noise, nonuniform illumination, and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching aggregated nuclei or merging over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process. The proposed framework is a region-based segmentation method, which consists of three major modules: i) the image is passed through a color deconvolution step to extract the desired stains; ii) the generalized fast radial symmetry (GFRS) transform is then applied to the image, followed by non-maxima suppression, to specify the initial seed points for nuclei and their corresponding GFRS ellipses, which are interpreted as the initial nuclei borders for segmentation; iii) finally, these initial nuclei border curves are evolved through the use of a statistical level-set approach along with topology-preserving criteria for simultaneous segmentation and separation of nuclei. The proposed method is evaluated using Hematoxylin and Eosin, and fluorescent stained images, performing qualitative and quantitative analysis, showing that the method outperforms thresholding and watershed segmentation approaches.
Villanueva Campos, A M; Tardáguila de la Fuente, G; Utrera Pérez, E; Jurado Basildo, C; Mera Fernández, D; Martínez Rodríguez, C
To analyze whether there are significant differences in the objective quantitative parameters obtained in the postprocessing of dual-energy CT enterography studies between bowel segments with radiologic signs of Crohn's disease and radiologically normal segments. This retrospective study analyzed 33 patients (16 men and 17 women; mean age 54 years) with known Crohn's disease who underwent CT enterography on a dual-energy scanner with oral sorbitol and intravenous contrast material in the portal phase. Images obtained with dual energy were postprocessed to obtain color maps (iodine maps). For each patient, regions of interest were traced on these color maps and the density of iodine (mg/ml) and the fat fraction (%) were calculated for the wall of a pathologic bowel segment with radiologic signs of Crohn's disease and for the wall of a healthy bowel segment; the differences in these parameters between the two segments were analyzed. The density of iodine was lower in the radiologically normal segments than in the pathologic segments (1.8 ± 0.4 mg/ml vs. 3.7 ± 0.9 mg/ml; p < 0.05). The fat fraction was higher in the radiologically normal segments than in the pathologic segments (32.42 ± 6.5% vs. 22.23 ± 9.4%; p < 0.05). There are significant differences in the iodine density and fat fraction between bowel segments with radiologic signs of Crohn's disease and radiologically normal segments. Copyright © 2018 SERAM. Published by Elsevier España, S.L.U. All rights reserved.
Evaluation of thresholding techniques for segmenting scaffold images in tissue engineering
NASA Astrophysics Data System (ADS)
Rajagopalan, Srinivasan; Yaszemski, Michael J.; Robb, Richard A.
2004-05-01
Tissue engineering attempts to address the ever widening gap between the demand and supply of organ and tissue transplants using natural and biomimetic scaffolds. The regeneration of specific tissues aided by synthetic materials is dependent on the structural and morphometric properties of the scaffold. These properties can be derived non-destructively using quantitative analysis of high resolution microCT scans of scaffolds. Thresholding of the scanned images into polymeric and porous phase is central to the outcome of the subsequent structural and morphometric analysis. Visual thresholding of scaffolds produced using stochastic processes is inaccurate. Depending on the algorithmic assumptions made, automatic thresholding might also be inaccurate. Hence there is a need to analyze the performance of different techniques and propose alternate ones, if needed. This paper provides a quantitative comparison of different thresholding techniques for segmenting scaffold images. The thresholding algorithms examined include those that exploit spatial information, locally adaptive characteristics, histogram entropy information, histogram shape information, and clustering of gray-level information. The performance of different techniques was evaluated using established criteria, including misclassification error, edge mismatch, relative foreground error, and region non-uniformity. Algorithms that exploit local image characteristics seem to perform much better than those using global information.
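One of the established evaluation criteria named above, the misclassification error, compares a thresholded result against a reference partition of pixels into porous and polymeric phases. A minimal sketch (not the paper's exact implementation):

```python
def misclassification_error(reference, result):
    """Misclassification error between two binary segmentations.

    reference, result: equal-length sequences of phase labels,
    e.g. 0 = porous phase, 1 = polymeric phase.
    ME = 1 - (correctly labeled pixels / total pixels); 0.0 is a
    perfect match with the reference, 1.0 total disagreement.
    """
    assert len(reference) == len(result)
    correct = sum(r == s for r, s in zip(reference, result))
    return 1.0 - correct / len(reference)
```

For example, a result that flips one of four reference labels scores an error of 0.25. The other criteria (edge mismatch, relative foreground error, region non-uniformity) follow the same pattern of scoring a candidate threshold against reference information.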
Quantitative and Qualitative Changes in V-J α Rearrangements During Mouse Thymocytes Differentiation
Pasqual, Nicolas; Gallagher, Maighréad; Aude-Garcia, Catherine; Loiodice, Mélanie; Thuderoz, Florence; Demongeot, Jacques; Ceredig, Rod; Marche, Patrice Noël; Jouvin-Marche, Evelyne
2002-01-01
Knowledge of the complete nucleotide sequence of the mouse TCRAD locus allows an accurate determination of V-J rearrangement status. Using multiplex genomic PCR assays and real-time PCR analysis, we report a comprehensive and systematic analysis of the V-J recombination of the TCR α chain in normal mouse thymocytes during development. These respective qualitative and quantitative approaches give rise to four major points describing the control of gene rearrangements. (a) The V-J recombination pattern is not random during ontogeny and generates a limited TCR α repertoire; (b) V-J rearrangement control is intrinsic to the thymus; (c) each V gene rearranges to a set of contiguous J segments with a Gaussian-like frequency; (d) there are more rearrangements involving V genes at the 3′ side than at the 5′ end of the V region. Taken together, this reflects a preferential association of V and J gene segments according to their respective positions in the locus, indicating that accessibility of both V and J regions is coordinately regulated, but in different ways. These results provide a new insight into TCR α repertoire size and suggest a scenario for V usage during differentiation. PMID:12417627
Suresh, Niraj; Stephens, Sean A; Adams, Lexor; Beck, Anthon N; McKinney, Adriana L; Varga, Tamas
2016-04-26
Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere, as well as processes with important implications to climate change and crop management. Quantitative size information on roots in their native environment is invaluable for studying root growth and environmental processes involving plants. X-ray computed tomography (XCT) has been demonstrated to be an effective tool for in situ root scanning and analysis. We aimed to develop a cost-free and efficient tool that approximates the surface and volume of the root, regardless of its shape, from three-dimensional (3D) tomography data. The root structure of a Prairie dropseed (Sporobolus heterolepis) specimen was imaged using XCT. The root was reconstructed, and the primary root structure was extracted from the data using a combination of licensed and open-source software. An isosurface polygonal mesh was then created for ease of analysis. We have developed the standalone application imeshJ, written in MATLAB, to calculate root volume and surface area from the mesh. The outputs of imeshJ are surface area (in mm²) and volume (in mm³). The process, utilizing a unique combination of tools from imaging to quantitative root analysis, is described. A combination of XCT and open-source software proved to be a powerful combination to noninvasively image plant root samples, segment root data, and extract quantitative information from the 3D data. This methodology of processing 3D data should be applicable to other material/sample systems where there is connectivity between components of similar X-ray attenuation and difficulties arise with segmentation.
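Computing surface area and enclosed volume from a triangle mesh is a standard exercise: triangle areas are summed directly, and the volume follows from the divergence theorem over signed tetrahedra. The sketch below is an illustrative Python re-implementation of that idea, not the authors' imeshJ MATLAB code:

```python
import numpy as np

def mesh_surface_and_volume(vertices, faces):
    """Surface area and enclosed volume of a closed triangle mesh.

    vertices: (n, 3) coordinates; faces: (m, 3) vertex indices with
    consistent winding. Area sums individual triangle areas; volume
    sums signed tetrahedra against the origin (divergence theorem)
    and takes the absolute value at the end.
    """
    v = np.asarray(vertices, dtype=float)
    area = 0.0
    volume = 0.0
    for i, j, k in faces:
        a, b, c = v[i], v[j], v[k]
        area += 0.5 * np.linalg.norm(np.cross(b - a, c - a))
        volume += np.dot(a, np.cross(b, c)) / 6.0
    return area, abs(volume)
```

A quick sanity check: the unit right tetrahedron has volume 1/6 and surface area 3/2 + √3/2.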
Wan, Yong; Otsuna, Hideo; Holman, Holly A; Bagley, Brig; Ito, Masayoshi; Lewis, A Kelsey; Colasanto, Mary; Kardon, Gabrielle; Ito, Kei; Hansen, Charles
2017-05-26
Image segmentation and registration techniques have enabled biologists to place large amounts of volume data from fluorescence microscopy, morphed three-dimensionally, onto a common spatial frame. Existing tools built on volume visualization pipelines for single channel or red-green-blue (RGB) channels have become inadequate for the new challenges of fluorescence microscopy. For a three-dimensional atlas of the insect nervous system, hundreds of volume channels are rendered simultaneously, whereas fluorescence intensity values from each channel need to be preserved for versatile adjustment and analysis. Although several existing tools have incorporated support of multichannel data using various strategies, the lack of a flexible design has made true many-channel visualization and analysis unavailable. The most common practice for many-channel volume data presentation is still converting and rendering pseudosurfaces, which are inaccurate for both qualitative and quantitative evaluations. Here, we present an alternative design strategy that accommodates the visualization and analysis of about 100 volume channels, each of which can be interactively adjusted, selected, and segmented using freehand tools. Our multichannel visualization includes a multilevel streaming pipeline plus a triple-buffer compositing technique. Our method also preserves original fluorescence intensity values on graphics hardware, a crucial feature that allows graphics-processing-unit (GPU)-based processing for interactive data analysis, such as freehand segmentation. We have implemented the design strategies as a thorough restructuring of our original tool, FluoRender. The redesign of FluoRender not only maintains the existing multichannel capabilities for a greatly extended number of volume channels, but also enables new analysis functions for many-channel data from emerging biomedical-imaging techniques.
2012-01-01
Background The short inversion time inversion recovery (STIR) black-blood technique has been used to visualize myocardial edema, and thus to differentiate acute from chronic myocardial lesions. However, some cardiovascular magnetic resonance (CMR) groups have reported variable image quality, and hence the diagnostic value of STIR in routine clinical practice has been put into question. The aim of our study was to analyze image quality and diagnostic performance of STIR using a set of pulse sequence parameters dedicated to edema detection, and to discuss possible factors that influence image quality. We hypothesized that STIR imaging is an accurate and robust way of detecting myocardial edema in non-selected patients with acute myocardial infarction. Methods Forty-six consecutive patients with acute myocardial infarction underwent CMR (day 4.5 ± 1.6) including STIR for the assessment of myocardial edema and late gadolinium enhancement (LGE) for quantification of myocardial necrosis. Thirty of these patients underwent a follow-up CMR at approximately six months (195 ± 39 days). Both STIR and LGE images were evaluated separately on a segmental basis for image quality as well as for presence and extent of myocardial hyper-intensity, with both visual and semi-quantitative (threshold-based) analysis. LGE was used as a reference standard for localization and extent of myocardial necrosis (acute) or scar (chronic). Results Image quality of STIR images was rated as diagnostic in 99.5% of cases. At the acute stage, the sensitivity and specificity of STIR to detect infarcted segments on visual assessment was 95% and 78%, respectively, and on semi-quantitative assessment was 99% and 83%, respectively. STIR differentiated acutely from chronically infarcted segments with a sensitivity of 95% by both methods and with a specificity of 99% by visual assessment and 97% by semi-quantitative assessment.
The extent of hyper-intense areas on acute STIR images was 85% larger than those on LGE images, with a larger myocardial salvage index in reperfused than in non-reperfused infarcts (p = 0.035). Conclusions STIR with appropriate pulse sequence settings is accurate in detecting acute myocardial infarction (MI) and distinguishing acute from chronic MI with both visual and semi-quantitative analysis. Due to its unique technical characteristics, STIR should be regarded as an edema-weighted rather than a purely T2-weighted technique. PMID:22455461
Ahberg, Christian D.; Manz, Andreas; Neuzil, Pavel
2015-01-01
Since its invention in 1985 the polymerase chain reaction (PCR) has become a well-established method for amplification and detection of segments of double-stranded DNA. Incorporation of a fluorogenic probe or DNA intercalating dyes (such as SYBR Green) into the PCR mixture allowed real-time reaction monitoring and extraction of quantitative information (qPCR). Probes with different excitation spectra enable multiplex qPCR of several DNA segments using multi-channel optical detection systems. Here we show multiplex qPCR using an economical EvaGreen-based system with single optical channel detection. Previously reported non-quantitative multiplex real-time PCR techniques based on intercalating dyes relied on melting curve analysis (MCA) performed once the PCR was completed. The technique presented in this paper is both qualitative and quantitative, as it provides information about the presence of multiple DNA strands as well as the number of starting copies in the tested sample. Besides providing an important internal control, multiplex qPCR also allows detecting concentrations of more than one DNA strand within the same sample. Detection of the avian influenza virus H7N9 by PCR is a well-established method. Multiplex qPCR greatly enhances its specificity, as it is capable of distinguishing both haemagglutinin (HA) and neuraminidase (NA) genes as well as their ratio. PMID:26088868
NASA Astrophysics Data System (ADS)
Garg, Ishita; Karwoski, Ronald A.; Camp, Jon J.; Bartholmai, Brian J.; Robb, Richard A.
2005-04-01
Chronic obstructive pulmonary diseases (COPD) are debilitating conditions of the lung and are the fourth leading cause of death in the United States. Early diagnosis is critical for timely intervention and effective treatment. The ability to quantify particular imaging features of specific pathology and accurately assess progression or response to treatment with current imaging tools is relatively poor. The goal of this project was to develop automated segmentation techniques that would be clinically useful as computer assisted diagnostic tools for COPD. The lungs were segmented using an optimized segmentation threshold and the trachea was segmented using a fixed threshold characteristic of air. The segmented images were smoothed by a morphological close operation using spherical elements of different sizes. The results were compared to other segmentation approaches using an optimized threshold to segment the trachea. Comparison of the segmentation results from 10 datasets showed that the method of trachea segmentation using a fixed air threshold followed by morphological closing with a spherical element of size 23x23x5 yielded the best results. Inclusion of a greater number of pulmonary vessels in the lung volume is important for the development of computer assisted diagnostic tools because the physiological changes of COPD can result in quantifiable anatomic changes in pulmonary vessels. Using a fixed threshold to segment the trachea removed airways from the lungs to a better extent as compared to using an optimized threshold. Preliminary measurements gathered from patients' CT scans suggest that segmented images can be used for accurate analysis of total lung volume and volumes of regional lung parenchyma. Additionally, reproducible segmentation allows for quantification of specific pathologic features, such as lower intensity pixels, which are characteristic of abnormal air spaces in diseases like emphysema.
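The morphological close operation described above (dilation followed by erosion with a disc- or sphere-shaped structuring element) can be sketched for the 2D case; production code would use an optimized routine such as `scipy.ndimage.binary_closing` with a 3D element:

```python
import numpy as np

def _disc(r):
    # offsets of a disc-shaped structuring element of radius r
    return [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)
            if dy * dy + dx * dx <= r * r]

def dilate(mask, r):
    """Binary dilation of a 2D boolean mask by a disc of radius r."""
    out = np.zeros_like(mask)
    h, w = mask.shape
    for y, x in np.argwhere(mask):
        for dy, dx in _disc(r):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                out[yy, xx] = True
    return out

def close(mask, r):
    # morphological closing: dilation then erosion; erosion is written
    # as the complement of dilating the complement (valid because the
    # disc element is symmetric). Closing fills holes and smooths gaps.
    return ~dilate(~dilate(mask, r), r)
```

Closing a lung mask with a one-pixel hole fills the hole, which is exactly the smoothing behavior exploited to recover pulmonary vessels inside the segmented lung volume.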
Automated diagnosis of Alzheimer's disease with multi-atlas based whole brain segmentations
NASA Astrophysics Data System (ADS)
Luo, Yuan; Tang, Xiaoying
2017-03-01
Voxel-based analysis is widely used in quantitative analysis of structural brain magnetic resonance imaging (MRI) and automated disease detection, such as Alzheimer's disease (AD). However, noise at the voxel level may cause low sensitivity to AD-induced structural abnormalities. This can be addressed with the use of a whole brain structural segmentation approach, which greatly reduces the dimension of features (the number of voxels). In this paper, we propose an automatic AD diagnosis system that combines such whole brain segmentations with advanced machine learning methods. We used a multi-atlas segmentation technique to parcellate T1-weighted images into 54 distinct brain regions and extract their structural volumes to serve as the features for principal-component-analysis-based dimension reduction and support-vector-machine-based classification. The relationship between the number of retained principal components (PCs) and the diagnosis accuracy was systematically evaluated, in a leave-one-out fashion, based on 28 AD subjects and 23 age-matched healthy subjects. Our approach yielded strong classification results, with 96.08% overall accuracy achieved using the three foremost PCs. In addition, our approach yielded 96.43% specificity, 100% sensitivity, and 0.9891 area under the receiver operating characteristic curve.
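The PCA-based dimension reduction step over regional volumes can be sketched with a plain SVD; this is an illustrative stand-in for the paper's pipeline (the support-vector-machine classifier and leave-one-out loop are omitted):

```python
import numpy as np

def pca_reduce(X, k):
    """Project a feature matrix X (subjects x regional volumes, e.g.
    n x 54) onto its first k principal components.

    Features are centred, then the right singular vectors of the
    centred matrix give the PC directions; the returned scores have
    monotonically non-increasing variance across columns.
    """
    Xc = X - X.mean(axis=0)               # centre each regional volume
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                  # scores on the leading k PCs
```

In the paper's setting, the 54 regional volumes per subject would be reduced to the three foremost PCs before classification.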
NASA Astrophysics Data System (ADS)
Sheppard, Adrian; Latham, Shane; Middleton, Jill; Kingston, Andrew; Myers, Glenn; Varslot, Trond; Fogden, Andrew; Sawkins, Tim; Cruikshank, Ron; Saadatfar, Mohammad; Francois, Nicolas; Arns, Christoph; Senden, Tim
2014-04-01
This paper reports on recent advances at the micro-computed tomography facility at the Australian National University. Since 2000 this facility has been a significant centre for developments in imaging hardware and associated software for image reconstruction, image analysis and image-based modelling. In 2010 a new instrument was constructed that utilises theoretically-exact image reconstruction based on helical scanning trajectories, allowing higher cone angles and thus better utilisation of the available X-ray flux. We discuss the technical hurdles that needed to be overcome to allow imaging with cone angles in excess of 60°. We also present dynamic tomography algorithms that enable the changes between one moment and the next to be reconstructed from a sparse set of projections, allowing higher speed imaging of time-varying samples. Researchers at the facility have also created a sizeable distributed-memory image analysis toolkit with capabilities ranging from tomographic image reconstruction to 3D shape characterisation. We show results from image registration and present some of the new imaging and experimental techniques that it enables. Finally, we discuss the crucial question of image segmentation and evaluate some recently proposed techniques for automated segmentation.
Burgess, Sloane; Audet, Lisa; Harjusola-Webb, Sanna
2013-01-01
The purpose of this research was to begin to characterize and compare the school and home language environments of 10 preschool-aged children with Autism Spectrum Disorders (ASD). Naturalistic language samples were collected from each child, utilizing Language ENvironment Analysis (LENA) digital voice recorder technology, at 3-month intervals over the course of one year. LENA software was used to identify 15-min segments of each sample that represented the highest number of adult words used during interactions with each child for all school and home language samples. Selected segments were transcribed and analyzed using Systematic Analysis of Language Transcripts (SALT). LENA data were utilized to evaluate quantitative characteristics of the school and home language environments, and SALT data were utilized to evaluate quantitative and qualitative characteristics of the language environment. Results revealed many similarities in home and school language environments, including the degree of semantic richness and complexity of adult language, the types of utterances, and the pragmatic functions of utterances used by adults during interactions with child participants. Study implications and recommendations for future research are discussed. The reader will be able to (1) describe how two language sampling technologies can be utilized together to collect and analyze language samples, (2) describe characteristics of the school and home language environments of young children with ASD, and (3) identify environmental factors that may lead to more positive expressive language outcomes of young children with ASD. Copyright © 2013 Elsevier Inc. All rights reserved.
Falcon: A Temporal Visual Analysis System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A.
2016-09-05
Flexible visual exploration of long, high-resolution time series from multiple sensor streams is a challenge in several domains. Falcon is a visual analytics approach that helps researchers acquire a deep understanding of patterns in log and imagery data. Falcon allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations with multiple levels of detail. These capabilities are applicable to the analysis of any quantitative time series.
Subcellular object quantification with Squassh3C and SquasshAnalyst.
Rizk, Aurélien; Mansouri, Maysam; Ballmer-Hofer, Kurt; Berger, Philipp
2015-11-01
Quantitative image analysis plays an important role in contemporary biomedical research. Squassh is a method for automatic detection, segmentation, and quantification of subcellular structures and analysis of their colocalization. Here we present the applications Squassh3C and SquasshAnalyst. Squassh3C extends the functionality of Squassh to three fluorescence channels and live-cell movie analysis. SquasshAnalyst is an interactive web interface for the analysis of Squassh3C object data. It provides segmentation image overview and data exploration, figure generation, object and image filtering, and a statistical significance test in an easy-to-use interface. The overall procedure combines the Squassh3C plug-in for the free biological image processing program ImageJ and a web application working in conjunction with the free statistical environment R, and it is compatible with Linux, MacOS X, or Microsoft Windows. Squassh3C and SquasshAnalyst are available for download at www.psi.ch/lbr/SquasshAnalystEN/SquasshAnalyst.zip.
NASA Technical Reports Server (NTRS)
Groleau, Nicolas; Frainier, Richard; Colombano, Silvano; Hazelton, Lyman; Szolovits, Peter
1993-01-01
This paper describes portions of a novel system called MARIKA (Model Analysis and Revision of Implicit Key Assumptions) to automatically revise a model of the normal human orientation system. The revision is based on analysis of discrepancies between experimental results and computer simulations. The discrepancies are calculated from qualitative analysis of quantitative simulations. The experimental and simulated time series are first discretized in time segments. Each segment is then approximated by linear combinations of simple shapes. The domain theory and knowledge are represented as a constraint network. Incompatibilities detected during constraint propagation within the network yield both parameter and structural model alterations. Interestingly, MARIKA diagnosed a data set from the Massachusetts Eye and Ear Infirmary Vestibular Laboratory as abnormal though the data was tagged as normal. Published results from other laboratories confirmed the finding. These encouraging results could lead to a useful clinical vestibular tool and to a scientific discovery system for space vestibular adaptation.
A workflow for the automatic segmentation of organelles in electron microscopy image stacks
Perez, Alex J.; Seyedhosseini, Mojtaba; Deerinck, Thomas J.; Bushong, Eric A.; Panda, Satchidananda; Tasdizen, Tolga; Ellisman, Mark H.
2014-01-01
Electron microscopy (EM) facilitates analysis of the form, distribution, and functional status of key organelle systems in various pathological processes, including those associated with neurodegenerative disease. Such EM data often provide important new insights into the underlying disease mechanisms. The development of more accurate and efficient methods to quantify changes in subcellular microanatomy has already proven key to understanding the pathogenesis of Parkinson's and Alzheimer's diseases, as well as glaucoma. While our ability to acquire large volumes of 3D EM data is progressing rapidly, more advanced analysis tools are needed to assist in measuring precise three-dimensional morphologies of organelles within data sets that can include hundreds to thousands of whole cells. Although new imaging instrument throughputs can exceed teravoxels of data per day, image segmentation and analysis remain significant bottlenecks to achieving quantitative descriptions of whole cell structural organellomes. Here, we present a novel method for the automatic segmentation of organelles in 3D EM image stacks. Segmentations are generated using only 2D image information, making the method suitable for anisotropic imaging techniques such as serial block-face scanning electron microscopy (SBEM). Additionally, no assumptions about 3D organelle morphology are made, ensuring the method can be easily expanded to any number of structurally and functionally diverse organelles. Following the presentation of our algorithm, we validate its performance by assessing the segmentation accuracy of different organelle targets in an example SBEM dataset and demonstrate that it can be efficiently parallelized on supercomputing resources, resulting in a dramatic reduction in runtime. PMID:25426032
Falcão, João L. A. A.; Falcão, Breno A. A.; Gurudevan, Swaminatha V.; Campos, Carlos M.; Silva, Expedito R.; Kalil-Filho, Roberto; Rochitte, Carlos E.; Shiozaki, Afonso A.; Coelho-Filho, Otavio R.; Lemos, Pedro A.
2015-01-01
Background The diagnostic accuracy of 64-slice MDCT in comparison with IVUS has been poorly described and is mainly restricted to reports analyzing segments with documented atherosclerotic plaques. Objectives We compared 64-slice multidetector computed tomography (MDCT) with gray scale intravascular ultrasound (IVUS) for the evaluation of coronary lumen dimensions in the context of a comprehensive analysis, including segments with absent or mild disease. Methods The 64-slice MDCT was performed within 72 h before the IVUS imaging, which was obtained for at least one coronary, regardless of the presence of luminal stenosis at angiography. A total of 21 patients were included, with 70 imaged vessels (total length 114.6 ± 38.3 mm per patient). A coronary plaque was diagnosed in segments with plaque burden > 40%. Results At patient, vessel, and segment levels, average lumen area, minimal lumen area, and minimal lumen diameter were highly correlated between IVUS and 64-slice MDCT (p < 0.01). However, 64-slice MDCT tended to underestimate the lumen size with a relatively wide dispersion of the differences. The comparison between 64-slice MDCT and IVUS lumen measurements was not substantially affected by the presence or absence of an underlying plaque. In addition, 64-slice MDCT showed good global accuracy for the detection of IVUS parameters associated with flow-limiting lesions. Conclusions In a comprehensive, multi-territory, and whole-artery analysis, the assessment of coronary lumen by 64-slice MDCT compared with coronary IVUS showed a good overall diagnostic ability, regardless of the presence or absence of underlying atherosclerotic plaques. PMID:25993595
Automated measurements of metabolic tumor volume and metabolic parameters in lung PET/CT imaging
NASA Astrophysics Data System (ADS)
Orologas, F.; Saitis, P.; Kallergi, M.
2017-11-01
Patients with lung tumors or inflammatory lung disease could greatly benefit in terms of treatment and follow-up from PET/CT quantitative imaging, namely measurements of metabolic tumor volume (MTV), standardized uptake values (SUVs), and total lesion glycolysis (TLG). The purpose of this study was the development of an unsupervised or partially supervised algorithm using standard image processing tools for measuring MTV, SUV, and TLG from lung PET/CT scans. Automated metabolic lesion volume and metabolic parameter measurements were achieved through a five-step algorithm: (i) segmentation of the lung areas on the CT slices, (ii) registration of the segmented lung regions onto the PET images to define the anatomical boundaries of the lungs on the functional data, (iii) segmentation of the regions of interest (ROIs) on the PET images based on adaptive thresholding and clinical criteria, (iv) estimation of the number of pixels and pixel intensities in the PET slices of the segmented ROIs, and (v) estimation of MTV, SUVs, and TLG from the previous step and DICOM header data. Whole-body PET/CT scans of patients with sarcoidosis were used for training and testing the algorithm. Lung area segmentation on the CT slices was better achieved with semi-supervised techniques, which reduced false-positive detections significantly. Lung segmentation results agreed with the lung volumes published in the literature, while the agreement between experts and algorithm in the segmentation of the lesions was around 88%. Segmentation results depended on the image resolution selected for processing. The clinical parameters, SUV (mean, max, or peak) and TLG, estimated from the segmented ROIs and DICOM header data provided a way to correlate imaging data with clinical and demographic data. In conclusion, automated MTV, SUV, and TLG measurements offer powerful analysis tools in PET/CT imaging of the lungs. Custom-made algorithms are often a better and much less expensive approach than the manufacturer's general-purpose analysis software. Relatively simple processing techniques can lead to customized, unsupervised or partially supervised methods that successfully perform the desired analysis and adapt to the specific disease requirements.
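The quantities measured by the algorithm above follow from standard definitions: body-weight SUV normalizes tissue activity concentration by injected dose per body weight, MTV sums the volume of metabolically active voxels, and TLG is SUVmean times MTV. A minimal sketch, not the authors' code; the fixed 41%-of-SUVmax threshold used here is one common convention, whereas the paper used adaptive thresholding:

```python
import numpy as np

def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight-normalized standardized uptake value (SUVbw)."""
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

def mtv_and_tlg(suv_voxels, voxel_volume_ml, threshold=0.41):
    """Metabolic tumor volume and total lesion glycolysis for one lesion.

    Voxels at or above threshold * SUVmax count as metabolically active;
    TLG = SUVmean (over active voxels) * MTV.
    """
    suv_voxels = np.asarray(suv_voxels, dtype=float)
    mask = suv_voxels >= threshold * suv_voxels.max()
    mtv = mask.sum() * voxel_volume_ml      # in mL
    tlg = suv_voxels[mask].mean() * mtv     # SUVmean * MTV
    return mtv, tlg

# SUV of a voxel with 5000 Bq/mL, 370 MBq injected, 74 kg patient -> 1.0
print(round(suv(5000.0, 3.7e8, 7.4e4), 2))

# Toy lesion: four voxels of 2 mL each, SUVs 2, 4, 8, 10
# threshold = 0.41 * 10 = 4.1, so the active voxels are 8 and 10
mtv, tlg = mtv_and_tlg([2, 4, 8, 10], voxel_volume_ml=2.0)
print(mtv, tlg)  # MTV = 4.0 mL, TLG = 9.0 * 4.0 = 36.0
```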
Understanding the optics to aid microscopy image segmentation.
Yin, Zhaozheng; Li, Kang; Kanade, Takeo; Chen, Mei
2010-01-01
Image segmentation is essential for many automated microscopy image analysis systems. Rather than treating microscopy images as general natural images and rushing into the image processing warehouse for solutions, we propose to first study a microscope's optical properties to model its image formation process, using phase contrast microscopy as an exemplar. It turns out that the phase contrast imaging system can be relatively well explained by a linear imaging model. Using this model, we formulate a quadratic optimization function with sparseness and smoothness regularizations to restore the "authentic" phase contrast images that directly correspond to the specimen's optical path length, without phase contrast artifacts such as halo and shade-off. With artifacts removed, high-quality segmentation can be achieved by simply thresholding the restored images. The imaging model and restoration method are quantitatively evaluated on two sequences with thousands of cells captured over several days.
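The restoration step described above amounts to a regularized least-squares inversion of a linear imaging model. A minimal sketch under simplifying assumptions: a smoothness-only (Tikhonov) closed-form solve in the Fourier domain with a toy blur kernel standing in for the phase-contrast PSF, omitting the paper's sparseness term, followed by the simple thresholding the abstract mentions:

```python
import numpy as np

def restore(observed, psf, lam=1e-3):
    """Solve min_f ||h * f - g||^2 + lam * ||laplacian(f)||^2 in closed form.

    A simplified stand-in for the paper's quadratic objective (which also
    includes a sparseness term); '*' is circular convolution, done via FFT.
    """
    H = np.fft.fft2(psf, s=observed.shape)
    G = np.fft.fft2(observed)
    # Transfer function of the periodic 5-point Laplacian
    lap = np.zeros(observed.shape)
    lap[0, 0] = -4
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = 1
    L = np.fft.fft2(lap)
    F = np.conj(H) * G / (np.abs(H) ** 2 + lam * np.abs(L) ** 2)
    return np.real(np.fft.ifft2(F))

truth = np.zeros((32, 32))
truth[10:20, 12:22] = 1.0                 # a bright "cell" on dark background
psf = np.zeros((32, 32))
psf[:3, :3] = 1 / 9.0                     # toy 3x3 averaging blur
observed = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf)))

restored = restore(observed, psf)
segmented = restored > 0.5                # simple thresholding suffices
```

With the artifacts (here, blur) inverted, a single global threshold recovers the cell region.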
Variability of manual ciliary muscle segmentation in optical coherence tomography images.
Chang, Yu-Cherng; Liu, Keke; Cabot, Florence; Yoo, Sonia H; Ruggeri, Marco; Ho, Arthur; Parel, Jean-Marie; Manns, Fabrice
2018-02-01
Optical coherence tomography (OCT) offers new options for imaging the ciliary muscle, allowing direct in vivo visualization. However, variation in image quality along the length of the muscle prevents accurate delineation and quantification of the muscle. Quantitative analyses of the muscle are accompanied by variability in segmentation between examiners and between sessions for the same examiner. In processes such as accommodation, where changes in muscle thickness may be tens of microns (the equivalent of a small number of image pixels), differences in segmentation can influence the magnitude and potentially the direction of the measured thickness change. A detailed analysis of variability in ciliary muscle thickness measurements was performed to serve as a benchmark for the extent of this variability in studies on the ciliary muscle. Variations between sessions and between examiners were found to be insignificant, but their magnitude should be considered when interpreting ciliary muscle results.
PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics
Poeschl, Yvonne; Plötner, Romina
2017-01-01
Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell junctions and three-cell junctions, respectively. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. PMID:28931626
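Several of the contour-based descriptors a tool like PaCeQuant extracts reduce to simple geometry on the traced cell boundary. As an illustration (not PaCeQuant's implementation), circularity computed from the shoelace area and perimeter of a polygonal outline drops sharply for lobed, jigsaw-puzzle-shaped cells:

```python
import numpy as np

def polygon_area_perimeter(xy):
    """Shoelace area and perimeter of a closed polygon (n x 2 vertex array)."""
    xy = np.asarray(xy, dtype=float)
    x, y = xy[:, 0], xy[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perim = np.sum(np.hypot(*(np.roll(xy, -1, axis=0) - xy).T))
    return area, perim

def circularity(xy):
    """4*pi*A / P^2 -- 1.0 for a circle, lower for lobed outlines."""
    a, p = polygon_area_perimeter(xy)
    return 4 * np.pi * a / p ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
r = 1 + 0.4 * np.sin(8 * theta)   # a lobed, pavement-cell-like outline
lobed = np.c_[r * np.cos(theta), r * np.sin(theta)]

print(round(circularity(square), 3))             # pi/4 ≈ 0.785
print(circularity(lobed) < circularity(square))  # lobes reduce circularity
```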
ERIC Educational Resources Information Center
Kucharsky Hiess, R.; Alter, R.; Sojoudi, S.; Ardekani, B. A.; Kuzniecky, R.; Pardoe, H. R.
2015-01-01
Reduced corpus callosum area and increased brain volume are two commonly reported findings in autism spectrum disorder (ASD). We investigated these two correlates in ASD and healthy controls using T1-weighted MRI scans from the Autism Brain Imaging Data Exchange (ABIDE). Automated methods were used to segment the corpus callosum and intracranial…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Yi-Yang; Lee, Rheun-Chuan, E-mail: rclee@vghtpe.gov.tw; Guo, Wan-Yuo, E-mail: wyguo@vghtpe.gov.tw
Purpose: To quantify the arterial flow change during transcatheter arterial chemoembolization (TACE) for hepatocellular carcinoma (HCC) using digital subtraction angiography quantitative color-coding analysis (d-QCA) and real-time subtraction fluoroscopy QCA (f-QCA). Materials and Methods: This prospective study enrolled 20 consecutive patients with HCC who had undergone TACE via a subsegmental approach between February 2014 and April 2015. The TACE endpoint was a sluggish antegrade tumor-feeding arterial flow. d-QCA and f-QCA were used to determine the relative maximal density time (rTmax) of the selected arteries. The rTmax of the selected arteries was analyzed with d-QCA and f-QCA before and after TACE, and the correlation between the two analyses was evaluated. Results: The pre- and post-TACE rTmax of the embolized segmental artery were 1.59 ± 0.81 and 2.97 ± 1.80 s (P < 0.001) for d-QCA, and 1.44 ± 0.52 and 2.28 ± 1.02 s (P < 0.01) for f-QCA, respectively. The rTmax of the proximal hepatic artery did not change significantly during TACE in either d-QCA or f-QCA. The Spearman correlation coefficients of the pre- and post-TACE rTmax of the embolized segmental artery between d-QCA and f-QCA were 0.46 (P < 0.05) and 0.80 (P < 0.001), respectively. Radiation doses in one series of d-QCA and f-QCA were 140.7 ± 51.5 milligray (mGy) and 2.5 ± 0.7 mGy, respectively. Conclusions: f-QCA can quantify arterial flow changes with a higher temporal resolution and a lower radiation dose. Flow quantification of the embolized segmental artery using f-QCA and d-QCA is highly correlated.
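The rTmax used in both d-QCA and f-QCA is, in essence, the peak time of a contrast time-density curve measured relative to a reference vessel; slower flow after embolization shifts the target's peak later. A minimal sketch with synthetic Gaussian bolus curves (all timings hypothetical, not patient data):

```python
import numpy as np

def tmax(times, density):
    """Time at which a sampled contrast time-density curve peaks."""
    return times[np.argmax(density)]

def relative_tmax(times, target, reference):
    """rTmax: target peak time relative to a reference ROI's peak time."""
    return tmax(times, target) - tmax(times, reference)

t = np.linspace(0, 6, 61)                 # seconds, sampled at 10 Hz
ref = np.exp(-((t - 1.0) / 0.5) ** 2)     # reference artery peaks at 1.0 s
pre = np.exp(-((t - 2.5) / 0.8) ** 2)     # feeder before embolization
post = np.exp(-((t - 4.0) / 0.8) ** 2)    # sluggish flow after embolization

print(relative_tmax(t, pre, ref))   # rTmax before TACE (~1.5 s)
print(relative_tmax(t, post, ref))  # rTmax after TACE (~3.0 s, prolonged)
```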
Semi-Automatic Segmentation Software for Quantitative Clinical Brain Glioblastoma Evaluation
Zhu, Y; Young, G; Xue, Z; Huang, R; You, H; Setayesh, K; Hatabu, H; Cao, F; Wong, S.T.
2012-01-01
Rationale and Objectives Quantitative measurement provides essential information about disease progression and treatment response in patients with glioblastoma multiforme (GBM). The goal of this paper is to present and validate a software pipeline for semi-automatic GBM segmentation, called AFINITI (Assisted Follow-up in NeuroImaging of Therapeutic Intervention), using clinical data from GBM patients. Materials and Methods Our software adopts current state-of-the-art tumor segmentation algorithms and combines them into one clinically usable pipeline. The advantages of both traditional voxel-based and deformable shape-based segmentation are embedded into the software pipeline. The former provides an automatic tumor segmentation scheme based on T1- and T2-weighted MR brain data, and the latter refines the segmentation results with minimal manual input. Results Twenty-six clinical MR brain images of GBM patients were processed and compared with manual results. The results can be visualized using the embedded graphical user interface (GUI). Conclusion Validation using clinical GBM data showed high correlation between the AFINITI results and manual annotation. Compared with voxel-wise segmentation, AFINITI yielded more accurate results in segmenting enhanced GBM from multimodality MRI data. The proposed pipeline could be used as additional information to interpret MR brain images in neuroradiology. PMID:22591720
The Structure of Segmental Errors in the Speech of Deaf Children.
ERIC Educational Resources Information Center
Levitt, H.; And Others
1980-01-01
A quantitative description of the segmental errors occurring in the speech of deaf children is developed. Journal availability: Elsevier North Holland, Inc., 52 Vanderbilt Avenue, New York, NY 10017. (Author)
van Velsen, Evert F S; Niessen, Wiro J; de Weert, Thomas T; de Monyé, Cécile; van der Lugt, Aad; Meijering, Erik; Stokking, Rik
2007-07-01
Vessel image analysis is crucial when considering therapeutic options for (cardio)vascular diseases. Our method, VAMPIRE (Vascular Analysis using Multiscale Paths Inferred from Ridges and Edges), involves two parts: first, the user defines a start- and endpoint, from which a lumen path is automatically derived and used for initialization; second, the vessel lumen is automatically segmented on computed tomographic angiography (CTA) images. Both parts are based on the detection of vessel-like structures by analyzing intensity, edge, and ridge information. A multi-observer evaluation study was performed to compare VAMPIRE with a conventional method on the CTA data of 15 patients with carotid artery stenosis. In addition to the start- and endpoint, the two radiologists required on average 2.5 (SD: 1.9) additional points to define a lumen path when using the conventional method, and 0.1 (SD: 0.3) when using VAMPIRE. The segmentation results were quantitatively evaluated using Similarity Indices, which were slightly lower between VAMPIRE and the two radiologists (0.90 and 0.88, respectively) than between the radiologists themselves (0.92). The evaluation shows that the improved definition of a lumen path requires minimal user interaction, and that using this path as initialization leads to good automatic lumen segmentation results.
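The Similarity Index reported above is the Dice coefficient between two binary segmentation masks: twice the overlap divided by the sum of the two mask sizes. A minimal sketch:

```python
import numpy as np

def similarity_index(a, b):
    """Dice similarity index between two binary segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two observers' lumen masks on a toy 1-D profile
obs1 = np.array([0, 1, 1, 1, 1, 0, 0])
obs2 = np.array([0, 0, 1, 1, 1, 1, 0])
print(similarity_index(obs1, obs2))  # 2*3 / (4+4) = 0.75
```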
Timp, Sheila; Karssemeijer, Nico
2004-05-01
Mass segmentation plays a crucial role in computer-aided diagnosis (CAD) systems for classification of suspicious regions as normal, benign, or malignant. In this article we present a robust and automated segmentation technique--based on dynamic programming--to segment mass lesions from surrounding tissue. In addition, we propose an efficient algorithm to guarantee resulting contours to be closed. The segmentation method based on dynamic programming was quantitatively compared with two other automated segmentation methods (region growing and the discrete contour model) on a dataset of 1210 masses. For each mass an overlap criterion was calculated to determine the similarity with manual segmentation. The mean overlap percentage for dynamic programming was 0.69, for the other two methods 0.60 and 0.59, respectively. The difference in overlap percentage was statistically significant. To study the influence of the segmentation method on the performance of a CAD system two additional experiments were carried out. The first experiment studied the detection performance of the CAD system for the different segmentation methods. Free-response receiver operating characteristics analysis showed that the detection performance was nearly identical for the three segmentation methods. In the second experiment the ability of the classifier to discriminate between malignant and benign lesions was studied. For region based evaluation the area Az under the receiver operating characteristics curve was 0.74 for dynamic programming, 0.72 for the discrete contour model, and 0.67 for region growing. The difference in Az values obtained by the dynamic programming method and region growing was statistically significant. The differences between other methods were not significant.
Almasi, Sepideh; Ben-Zvi, Ayal; Lacoste, Baptiste; Gu, Chenghua; Miller, Eric L; Xu, Xiaoyin
2017-03-01
To simultaneously overcome the challenges imposed by the nature of optical imaging, characterized by a range of artifacts including space-varying signal-to-noise ratio (SNR), scattered light, and non-uniform illumination, we developed a novel method that segments the 3-D vasculature directly from original fluorescence microscopy images, eliminating the need for pre- and post-processing steps such as noise removal and segmentation refinement used by the majority of segmentation techniques. Our method comprises two stages: initialization, and constrained recovery and enhancement. The initialization approach is fully automated using features derived from bi-scale statistical measures and produces seed points robust to non-uniform illumination, low SNR, and local structural variations. The algorithm achieves segmentation via an iterative approach that extracts the structure through voting of feature vectors formed by distance, local intensity gradient, and median measures. Qualitative and quantitative analysis of the experimental results obtained from synthetic and real data proves the efficacy of this method in comparison to state-of-the-art enhancing-segmenting methods. The algorithmic simplicity and the freedom from a priori probabilistic noise models and structural definitions give this algorithm a wide potential range of applications, for example where structural complexity significantly complicates the segmentation problem.
Alzheimer's disease detection using 11C-PiB with improved partial volume effect correction
NASA Astrophysics Data System (ADS)
Raniga, Parnesh; Bourgeat, Pierrick; Fripp, Jurgen; Acosta, Oscar; Ourselin, Sebastien; Rowe, Christopher; Villemagne, Victor L.; Salvado, Olivier
2009-02-01
Despite the increasing use of 11C-PiB in research into Alzheimer's disease (AD), there are few standardized analysis procedures that have been reported or published. This is especially true with regards to partial volume effects (PVE) and partial volume correction. Due to the nature of PET physics and acquisition, PET images exhibit relatively low spatial resolution compared to other modalities, resulting in bias of quantitative results. Although previous studies have applied PVE correction techniques on 11C-PiB data, the results have not been quantitatively evaluated and compared against uncorrected data. The aim of this study is threefold. Firstly, a realistic synthetic phantom was created to quantify PVE. Secondly, MRI partial volume estimate segmentations were used to improve voxel-based PVE correction instead of using hard segmentations. Thirdly, quantification of PVE correction was evaluated on 34 subjects (AD=10, Normal Controls (NC)=24), including 12 PiB positive NC. Regional analysis was performed using the Anatomical Automatic Labeling (AAL) template, which was registered to each patient. Regions of interest were restricted to the gray matter (GM) defined by the MR segmentation. Average normalized intensity of the neocortex and selected regions were used to evaluate the discrimination power between AD and NC both with and without PVE correction. Receiver Operating Characteristic (ROC) curves were computed for the binary discrimination task. The phantom study revealed signal losses due to PVE between 10 to 40 % which were mostly recovered to within 5% after correction. Better classification was achieved after PVE correction, resulting in higher areas under ROC curves.
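The ROC analysis used above for the AD-vs-NC discrimination task can be computed nonparametrically: the area under the ROC curve equals the Mann-Whitney probability that a randomly chosen AD value exceeds a randomly chosen NC value, with ties counted one half. A minimal sketch (the retention values below are made up for illustration):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: P(random positive score >
    random negative score), counting ties as one half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical normalized neocortical retention values
ad = [2.1, 1.9, 2.4, 1.7]   # AD group (higher uptake)
nc = [1.2, 1.5, 1.1, 1.9]   # NC group
print(roc_auc(ad, nc))      # (14 wins + 1 tie/2) / 16 pairs = 0.90625
```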
Breast histopathology image segmentation using spatio-colour-texture based graph partition method.
Belsare, A D; Mushrif, M M; Pangarkar, M A; Meshram, N
2016-06-01
This paper proposes a novel integrated spatio-colour-texture based graph partitioning method for segmentation of nuclear arrangement in tubules with a lumen, or in solid islands without a lumen, from digitized Hematoxylin-Eosin stained breast histology images, in order to automate histology breast image analysis and assist pathologists. We propose a new similarity-based superpixel generation method and integrate it with texton representation to form a spatio-colour-texture map of the breast histology image. A new weighted-distance-based similarity measure is then used for graph generation, and the final segmentation is obtained using the normalized cuts method. Extensive experiments show that the proposed algorithm can segment nuclear arrangements in normal as well as malignant ducts in breast histology tissue images. For evaluation of the proposed method, a ground-truth image database of 100 malignant and nonmalignant breast histology images was created with the help of two expert pathologists, and a quantitative evaluation of the proposed breast histology image segmentation was performed. It shows that the proposed method outperforms other methods. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
Sprengers, Andre M J; Caan, Matthan W A; Moerman, Kevin M; Nederveen, Aart J; Lamerichs, Rolf M; Stoker, Jaap
2013-04-01
This study proposes a scale space based algorithm for automated segmentation of single-shot tagged images of modest SNR. Furthermore the algorithm was designed for analysis of discontinuous or shearing types of motion, i.e. segmentation of broken tag patterns. The proposed algorithm utilises non-linear scale space for automatic segmentation of single-shot tagged images. The algorithm's ability to automatically segment tagged shearing motion was evaluated in a numerical simulation and in vivo. A typical shearing deformation was simulated in a Shepp-Logan phantom allowing for quantitative evaluation of the algorithm's success rate as a function of both SNR and the amount of deformation. For a qualitative in vivo evaluation tagged images showing deformations in the calf muscles and eye movement in a healthy volunteer were acquired. Both the numerical simulation and the in vivo tagged data demonstrated the algorithm's ability for automated segmentation of single-shot tagged MR provided that SNR of the images is above 10 and the amount of deformation does not exceed the tag spacing. The latter constraint can be met by adjusting the tag delay or the tag spacing. The scale space based algorithm for automatic segmentation of single-shot tagged MR enables the application of tagged MR to complex (shearing) deformation and the processing of datasets with relatively low SNR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yee, S; Wloch, J; Pirkola, M
Purpose: Quantitative fat-water segmentation is important not only because of the clinical utility of fat-suppressed MRI images in better detecting lesions of clinical significance (in the midst of bright fat signal) but also because of the possible physical need to generate CT-like images, based on the materials' photon attenuation properties, from MR images; particularly in an MR-only radiation oncology environment, to obtain radiation dose calculations, or in hybrid PET/MR, to obtain an attenuation correction map for quantitative PET reconstruction. The majority of such quantitative fat-water segmentations have been performed by utilizing the Dixon method and its variations, which must enforce proper (often predefined) echo time (TE) settings in the pulse sequences. Therefore, such methods cannot be directly combined with ultrashort TE (UTE) sequences that, taking advantage of very low TE values (tens of microseconds), might be beneficial for directly detecting bone. Recently, an RF pulse-based method (http://dx.doi.org/10.1016/j.mri.2015.11.006), termed the PROD pulse method, was introduced as a quantitative fat-water segmentation method that does not depend on predefined TE settings. Here, the clinical feasibility of this method is verified in brain tumor patients by combining the PROD pulse with several sequences. Methods: In a clinical 3T MRI, the PROD pulse was combined with turbo spin echo (e.g. TR=1500, TE=16 or 60, ETL=15) or turbo field echo (e.g. TR=5.6, TE=2.8, ETL=12) sequences without specifying TE values. Results: The fat-water segmentation was possible without having to set specific TE values. Conclusion: The PROD pulse method is clinically feasible. Although not yet combined with UTE sequences in our laboratory, the method is potentially compatible with UTE sequences, and thus might be useful to directly segment fat, water, bone and air.
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Sun, Hongliu; Chan, Heang-Ping; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir; Kazerooni, Ella
2018-02-01
We are developing an automated radiopathomics method for diagnosis of lung nodule subtypes. In this study, we investigated the feasibility of using quantitative methods to analyze the tumor nuclei and cytoplasm in pathologic whole-slide images for the classification of pathologic subtypes of invasive nodules and pre-invasive nodules. We developed a multiscale blob detection method with watershed transform (MBD-WT) to segment the tumor cells. Pathomic features were extracted to characterize the size, morphology, sharpness, and gray-level variation in each segmented nucleus and the heterogeneity patterns of tumor nuclei and cytoplasm. With permission of the National Lung Screening Trial (NLST) project, a data set containing 90 digital haematoxylin and eosin (HE) whole-slide images from 48 cases was used in this study. The 48 cases contain 77 regions of invasive subtypes and 43 regions of pre-invasive subtypes outlined by a pathologist on the HE images, using the pathological tumor region description provided by NLST as reference. A logistic regression model (LRM) was built using leave-one-case-out resampling and receiver operating characteristic (ROC) analysis for classification of invasive and pre-invasive subtypes. With 11 selected features, the LRM achieved a test area under the ROC curve (AUC) value of 0.91 ± 0.03. The results demonstrate that the pathologic invasiveness of lung adenocarcinomas can be categorized with high accuracy using pathomics analysis.
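The evaluation scheme above, leave-one-out resampling of a logistic regression model scored by ROC AUC, can be sketched as follows. This is a simplified stand-in, not the authors' pipeline: plain gradient-descent logistic regression on synthetic two-feature data, leaving out one sample per fold rather than one case:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=1000):
    """Plain gradient-descent logistic regression (weights incl. bias)."""
    Xb = np.c_[np.ones(len(X)), X]
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.c_[np.ones(len(X)), X]
    return 1.0 / (1.0 + np.exp(-Xb @ w))

rng = np.random.default_rng(1)
# Hypothetical per-region pathomic features: invasive (1) vs pre-invasive (0)
X = np.r_[rng.normal(1.0, 0.5, (30, 2)), rng.normal(-1.0, 0.5, (30, 2))]
y = np.r_[np.ones(30), np.zeros(30)]

# Leave-one-out resampling: score each case with a model it never trained on
scores = np.array([
    predict(fit_logistic(np.delete(X, i, 0), np.delete(y, i)), X[i:i+1])[0]
    for i in range(len(y))
])
# Test AUC via the Mann-Whitney pairwise comparison
auc = np.mean([(sp > sn) + 0.5 * (sp == sn)
               for sp in scores[y == 1] for sn in scores[y == 0]])
print(auc)
```

Scoring each sample only with models trained on the remaining data gives an (approximately) unbiased test AUC despite the small dataset.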
Rapid analysis and exploration of fluorescence microscopy images.
Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason M; Steininger, Robert J; Wu, Lani F; Altschuler, Steven J
2014-03-19
Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard. Here we present an alternate, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, ensure response to perturbations and check reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just as a first pass analysis for quality control, but also may be used as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image based screens.
FFDM image quality assessment using computerized image texture analysis
NASA Astrophysics Data System (ADS)
Berger, Rachelle; Carton, Ann-Katherine; Maidment, Andrew D. A.; Kontos, Despina
2010-04-01
Quantitative measures of image quality (IQ) are routinely obtained during the evaluation of imaging systems. These measures, however, do not necessarily correlate with the IQ of the actual clinical images, which can also be affected by factors such as patient positioning. No quantitative method currently exists to evaluate clinical IQ. Therefore, we investigated the potential of using computerized image texture analysis to quantitatively assess IQ. Our hypothesis is that image texture features can be used to assess IQ as a measure of the image signal-to-noise ratio (SNR). To test feasibility, the "Rachel" anthropomorphic breast phantom (Model 169, Gammex RMI) was imaged with a Senographe 2000D FFDM system (GE Healthcare) using 220 unique exposure settings (target/filter, kV, and mAs combinations). The mAs were varied from 10%-300% of that required for an average glandular dose (AGD) of 1.8 mGy. A 2.5 cm² retroareolar region of interest (ROI) was segmented from each image. The SNR was computed from the ROIs segmented from images linear with dose (i.e., raw images) after flat-field and offset correction. Image texture features of skewness, coarseness, contrast, energy, homogeneity, and fractal dimension were computed from the Premium View™ postprocessed image ROIs. Multiple linear regression demonstrated a strong association between the computed image texture features and SNR (R² = 0.92, p ≤ 0.001). When including kV, target, and filter as additional predictor variables, a stronger association with SNR was observed (R² = 0.95, p ≤ 0.001). The strong associations indicate that computerized image texture analysis can be used to measure image SNR and potentially aid in automating IQ assessment as a component of the clinical workflow. Further work is underway to validate our findings in larger clinical datasets.
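The association test above is an ordinary multiple linear regression of SNR on the texture features, summarized by the coefficient of determination R². A minimal sketch on synthetic data (the feature choice and effect sizes below are made up for illustration):

```python
import numpy as np

def r_squared(X, y):
    """R² of an ordinary least-squares fit of y on X (with intercept)."""
    Xb = np.c_[np.ones(len(X)), X]
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    resid = y - Xb @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 220                                # one row per exposure setting
texture = rng.normal(size=(n, 3))      # e.g. coarseness, contrast, energy
# Synthetic SNR that depends linearly on two of the features, plus noise
snr = 2.0 * texture[:, 0] - 1.0 * texture[:, 1] + rng.normal(0, 0.3, n)

print(round(r_squared(texture, snr), 2))
```

A high R² here, as in the study, indicates the features jointly track SNR well; adding further predictors (kV, target, filter in the paper) can only increase in-sample R².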
Continuum theory of gene expression waves during vertebrate segmentation.
Jörg, David J; Morelli, Luis G; Soroldoni, Daniele; Oates, Andrew C; Jülicher, Frank
2015-09-01
The segmentation of the vertebrate body plan during embryonic development is a rhythmic and sequential process governed by genetic oscillations. These genetic oscillations give rise to traveling waves of gene expression in the segmenting tissue. Here we present a minimal continuum theory of vertebrate segmentation that captures the key principles governing the dynamic patterns of gene expression including the effects of shortening of the oscillating tissue. We show that our theory can quantitatively account for the key features of segmentation observed in zebrafish, in particular the shape of the wave patterns, the period of segmentation and the segment length as a function of time.
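The core mechanism in the abstract above, a spatial gradient of oscillation frequency turning genetic oscillations into traveling stripes of gene expression, can be illustrated with a toy uncoupled phase-oscillator model. This is a caricature of the continuum theory, with a hypothetical linear frequency profile and no coupling, shortening, or delay terms:

```python
import numpy as np

x = np.linspace(0, 1, 200)            # position along the segmenting tissue
# Hypothetical frequency gradient: oscillations slow toward the anterior end
omega = 2 * np.pi * (1.0 - 0.5 * x)

def expression(t):
    """Oscillatory gene-expression level at time t: sin(omega(x) * t)."""
    return np.sin(omega * t)

# Phase lags accumulate along the gradient, so an expression crest drifts
# toward the slower (anterior, large-x) end: a traveling wave.
crest_t1 = x[np.argmax(expression(3.0))]
crest_t2 = x[np.argmax(expression(3.2))]
print(crest_t1, crest_t2)
```

Even without coupling, the frequency profile alone produces wave patterns whose shape and speed depend on the gradient, which is the kind of feature the full theory fits quantitatively in zebrafish.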
Kainz, Philipp; Pfeiffer, Michael; Urschler, Martin
2017-01-01
Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNN) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the other approaches developed simultaneously for the same challenge. On two test sets, we demonstrate our segmentation performance and achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses. PMID:29018612
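As a rough illustration of the two-classifier fusion step described above, the sketch below combines a hypothetical gland-probability map with a separator-probability map and applies iterative neighbor averaging as a crude stand-in for the weighted total-variation regularizer (the actual challenge entry used CNN predictions and a true TV solver; the maps and parameters here are synthetic).

```python
import numpy as np

def regularized_segmentation(p_gland, p_sep, n_iter=20, lam=0.25):
    """Sketch of the two-map fusion: gland probability suppressed by the
    gland-separating structures, then smoothed by iterative neighbor
    averaging as a crude stand-in for weighted TV regularization."""
    u = p_gland * (1.0 - p_sep)                  # suppress separating structures
    for _ in range(n_iter):
        pad = np.pad(u, 1, mode='edge')
        nbr = (pad[:-2, 1:-1] + pad[2:, 1:-1]
               + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        u = (1.0 - lam) * u + lam * nbr          # diffusion step (regularization)
    return u > 0.5                               # figure-ground by thresholding

# toy example: a bright square "gland" with a thin vertical separator line
p_g = np.zeros((32, 32)); p_g[8:24, 8:24] = 0.9
p_s = np.zeros((32, 32)); p_s[8:24, 16] = 0.8
mask = regularized_segmentation(p_g, p_s)
```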
Tonar, Zbyněk; Kubíková, Tereza; Prior, Claudia; Demjén, Erna; Liška, Václav; Králíčková, Milena; Witter, Kirsti
2015-09-01
The porcine aorta is often used in studies on morphology, pathology, transplantation surgery, vascular and endovascular surgery, and biomechanics of the large arteries. Using quantitative histology and stereology, we estimated the area fraction of elastin, collagen, alpha-smooth muscle actin, vimentin, and desmin within the tunica media in 123 tissue samples collected from five segments (thoracic ascending aorta; aortic arch; thoracic descending aorta; suprarenal abdominal aorta; and infrarenal abdominal aorta) of porcine aortae from growing domestic pigs (n=25), ranging in age from 0 to 230 days. The descending thoracic aorta had the greatest elastin fraction, which decreased proximally toward the aortic arch as well as distally toward the abdominal aorta. Abdominal aortic segments had the highest fraction of actin, desmin, and vimentin positivity, and all of these vascular smooth muscle markers were lower in the thoracic aortic segments. No quantitative differences were found when comparing the suprarenal abdominal segments with the infrarenal abdominal segments. The area fraction of actin within the media was comparable in all age groups and was proportional to postnatal growth. Thicker aortic segments had more elastin and collagen with fewer contractile cells. The collagen fraction decreased from the ascending aorta and aortic arch toward the descending aorta. By revealing the variability of the quantitative composition of the porcine aorta, the results are suitable for planning experiments with the porcine aorta as a model, i.e., power test analyses and estimating the number of samples necessary to achieve a desirable level of precision. The complete primary morphometric data, in the form of continuous variables, are made publicly available for biomechanical modeling of site-dependent distensibility and compliance of the porcine aorta. Copyright © 2015 Elsevier GmbH. All rights reserved.
Liu, Fang; Zhou, Zhaoye; Jang, Hyungseok; Samsonov, Alexey; Zhao, Gengyan; Kijowski, Richard
2018-04-01
To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirable smooth surface for musculoskeletal structures. The fully automated segmentation method was tested using a publicly available knee image data set and compared with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. The proposed fully automated segmentation method provided good segmentation performance, with accuracy superior to that of most state-of-the-art methods on the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Garcia-Allende, P. Beatriz; Amygdalos, Iakovos; Dhanapala, Hiruni; Goldin, Robert D.; Hanna, George B.; Elson, Daniel S.
2012-01-01
Computer-aided diagnosis of ophthalmic diseases using optical coherence tomography (OCT) relies on the extraction of thickness and size measures from the OCT images, but such defined layers are usually not observed in emerging OCT applications aimed at "optical biopsy", such as pulmonology or gastroenterology. Mathematical methods such as Principal Component Analysis (PCA), textural analyses (both spatial textural analysis derived from the two-dimensional discrete Fourier transform (DFT) and statistical texture analysis obtained independently from center-symmetric auto-correlation (CSAC) and spatial grey-level dependency matrices (SGLDM)), and quantitative measurements of the attenuation coefficient have previously been proposed to overcome this problem. We recently proposed an alternative approach consisting of a region segmentation according to the intensity variation along the vertical axis and a purely statistical technique for feature quantification. OCT images were first segmented in the axial direction in an automated manner according to intensity. Afterwards, a morphological analysis of the segmented OCT images was employed to quantify the features that served for tissue classification. In this study, a PCA of the extracted features is performed to combine their discriminative power in a lower number of dimensions. Ready discrimination of gastrointestinal surgical specimens is attained, demonstrating that the approach surpasses the algorithms previously reported and is feasible for tissue classification in the clinical setting.
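The PCA step described above can be sketched in plain NumPy: center the morphological feature matrix and project it onto its leading principal axes via the SVD. The feature matrix below is random stand-in data, not the authors' measurements.

```python
import numpy as np

def pca(features, n_components=2):
    """Plain-NumPy PCA: project feature vectors onto the directions of
    largest variance, combining their discriminative power in fewer
    dimensions (the dimensionality-reduction step described above)."""
    X = features - features.mean(axis=0)
    # SVD of the centered data matrix; rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 6))      # 50 segments x 6 hypothetical features
proj = pca(feats)
```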
Van Valen, David A; Kudo, Takamasa; Lane, Keara M; Macklin, Derek N; Quach, Nicolas T; DeFelice, Mialy M; Maayan, Inbal; Tanouchi, Yu; Ashley, Euan A; Covert, Markus W
2016-11-01
Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that requires less curation time, generalizes to a multiplicity of cell types, from bacteria to mammalian cells, and expands live-cell imaging capabilities to include multi-cell type systems.
Preparation of Segmented Microtubules to Study Motions Driven by the Disassembling Microtubule Ends
Volkov, Vladimir A.; Zaytsev, Anatoly V.; Grishchuk, Ekaterina L.
2014-01-01
Microtubule depolymerization can provide force to transport different protein complexes and protein-coated beads in vitro. The underlying mechanisms are thought to play a vital role in the microtubule-dependent chromosome motions during cell division, but the relevant proteins and their exact roles are ill-defined. Thus, there is a growing need to develop assays with which to study such motility in vitro using purified components and defined biochemical milieu. Microtubules, however, are inherently unstable polymers; their switching between growth and shortening is stochastic and difficult to control. The protocols we describe here take advantage of the segmented microtubules that are made with the photoablatable stabilizing caps. Depolymerization of such segmented microtubules can be triggered with high temporal and spatial resolution, thereby assisting studies of motility at the disassembling microtubule ends. This technique can be used to carry out a quantitative analysis of the number of molecules in the fluorescently-labeled protein complexes, which move processively with dynamic microtubule ends. To optimize a signal-to-noise ratio in this and other quantitative fluorescent assays, coverslips should be treated to reduce nonspecific adsorption of soluble fluorescently-labeled proteins. Detailed protocols are provided to take into account the unevenness of fluorescent illumination, and determine the intensity of a single fluorophore using equidistant Gaussian fit. Finally, we describe the use of segmented microtubules to study microtubule-dependent motions of the protein-coated microbeads, providing insights into the ability of different motor and nonmotor proteins to couple microtubule depolymerization to processive cargo motion. PMID:24686554
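Single-fluorophore intensity estimation of the kind mentioned above is commonly done by least-squares fitting of a 2D Gaussian to the spot image. The sketch below (illustrative, not the authors' protocol) recovers the amplitude, center, and width of a synthetic noise-free spot with `scipy.optimize.curve_fit`.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sigma, offset):
    """Symmetric 2D Gaussian model of a diffraction-limited spot."""
    x, y = xy
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
            + offset).ravel()

def fit_spot(img):
    """Estimate single-fluorophore intensity by least-squares Gaussian fit."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    # crude initial guess: peak height, image center, ~2 px width, baseline
    p0 = (img.max() - img.min(), img.shape[1] / 2, img.shape[0] / 2, 2.0, img.min())
    popt, _ = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
    return popt  # amp, x0, y0, sigma, offset

# synthetic spot with known parameters (amplitude 100, center (7, 7), sigma 1.5)
yy, xx = np.mgrid[:15, :15]
spot = 100.0 * np.exp(-((xx - 7) ** 2 + (yy - 7) ** 2) / (2 * 1.5 ** 2)) + 10.0
amp, x0, y0, sigma, off = fit_spot(spot)
```

In practice the fitted amplitude (or the integral of the Gaussian) serves as the brightness of one fluorophore, against which multi-molecule complexes are calibrated.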
Metzinger, Matthew N; Miramontes, Bernadette; Zhou, Peng; Liu, Yueying; Chapman, Sarah; Sun, Lucy; Sasser, Todd A; Duffield, Giles E; Stack, M Sharon; Leevy, W Matthew
2014-10-08
Numerous obesity studies have coupled murine models with non-invasive methods to quantify body composition in longitudinal experiments, including X-ray computed tomography (CT) or quantitative nuclear magnetic resonance (QMR). Both microCT and QMR have been separately validated with invasive techniques of adipose tissue quantification, like post-mortem fat extraction and measurement. Here we report a head-to-head study of both protocols using oil phantoms and mouse populations to determine the parameters that best align CT data with that from QMR. First, an in vitro analysis of oil/water mixtures was used to calibrate and assess the overall accuracy of microCT vs. QMR data. Next, experiments were conducted with two cohorts of living mice (either homogenous or heterogeneous by sex, age and genetic backgrounds) to assess the microCT imaging technique for adipose tissue segmentation and quantification relative to QMR. Adipose mass values were obtained from microCT data with three different resolutions, after which the data were analyzed with different filter and segmentation settings. Strong linearity was noted between the adipose mass values obtained with microCT and QMR, with optimal parameters and scan conditions reported herein. Lean tissue (muscle, internal organs) was also segmented and quantified using the microCT method relative to the analogous QMR values. Overall, the rigorous calibration and validation of the microCT method for murine body composition, relative to QMR, ensures its validity for segmentation, quantification and visualization of both adipose and lean tissues.
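The microCT-to-QMR alignment described above rests on a linear calibration of paired adipose-mass measurements. A minimal sketch with synthetic (hypothetical) values, assuming the strong linearity the study reports:

```python
import numpy as np

# Hypothetical paired adipose-mass measurements (grams) from the two modalities;
# a linear fit maps microCT-derived values onto the QMR scale.
ct = np.array([1.0, 2.1, 3.9, 6.2, 8.0, 10.1])
qmr = 0.95 * ct + 0.3            # synthetic "truth": strong linearity, as reported

slope, intercept = np.polyfit(ct, qmr, 1)
r = np.corrcoef(ct, qmr)[0, 1]   # linearity check (Pearson r)

def calibrate(ct_value):
    """Apply the fitted calibration to a new microCT adipose-mass value."""
    return slope * ct_value + intercept
```

The same fit, done per resolution and filter setting, is how one would select the scan parameters that best align the two methods.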
NASA Astrophysics Data System (ADS)
Zhen, Xin; Chen, Haibin; Yan, Hao; Zhou, Linghong; Mell, Loren K.; Yashar, Catheryn M.; Jiang, Steve; Jia, Xun; Gu, Xuejun; Cervino, Laura
2015-04-01
Deformable image registration (DIR) of fractional high-dose-rate (HDR) CT images is challenging due to the presence of applicators in the brachytherapy image. Point-to-point correspondence fails because of the undesired deformation vector fields (DVF) propagated from the applicator region (AR) to the surrounding tissues, which can potentially introduce significant DIR errors in dose mapping. This paper proposes a novel segmentation and point-matching enhanced efficient DIR (named SPEED) scheme to facilitate dose accumulation among HDR treatment fractions. In SPEED, a semi-automatic seed point generation approach is developed to obtain the incremented fore/background point sets to feed the random walks algorithm, which is used to segment and remove the AR, leaving empty AR cavities in the HDR CT images. A feature-based ‘thin-plate-spline robust point matching’ algorithm is then employed for AR cavity surface points matching. With the resulting mapping, a DVF defining on each voxel is estimated by B-spline approximation, which serves as the initial DVF for the subsequent Demons-based DIR between the AR-free HDR CT images. The calculated DVF via Demons combined with the initial one serve as the final DVF to map doses between HDR fractions. The segmentation and registration accuracy are quantitatively assessed by nine clinical HDR cases from three gynecological cancer patients. The quantitative analysis and visual inspection of the DIR results indicate that SPEED can suppress the impact of applicator on DIR, and accurately register HDR CT images as well as deform and add interfractional HDR doses.
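The seeded random-walks segmentation used above to remove the applicator region can be sketched with scikit-image's `random_walker` on a toy image. The disk "applicator", seed locations, and `beta` value are illustrative, not from the paper:

```python
import numpy as np
from skimage.segmentation import random_walker

# Toy 2D stand-in for an HDR CT slice: a bright "applicator" disk in soft tissue.
img = np.full((64, 64), 0.2)
yy, xx = np.mgrid[:64, :64]
applicator = (xx - 32) ** 2 + (yy - 32) ** 2 < 8 ** 2
img[applicator] = 1.0
img += np.random.default_rng(0).normal(0, 0.02, img.shape)

# Seed points mimicking the semi-automatic fore/background point sets:
# label 1 = applicator (foreground), label 2 = background, 0 = unlabeled.
seeds = np.zeros(img.shape, dtype=int)
seeds[32, 32] = 1
seeds[2, 2] = 2

labels = random_walker(img, seeds, beta=250)
mask = labels == 1        # applicator region to remove before DIR
```

In the SPEED pipeline this mask would be cut out of both fraction images, leaving matched cavities whose surfaces are then registered by thin-plate-spline point matching.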
NASA Astrophysics Data System (ADS)
Wang, Lei; Schnurr, Alena-Kathrin; Zidowitz, Stephan; Georgii, Joachim; Zhao, Yue; Razavi, Mohammad; Schwier, Michael; Hahn, Horst K.; Hansen, Christian
2016-03-01
Segmentation of hepatic arteries in multi-phase computed tomography (CT) images is indispensable in liver surgery planning. During image acquisition, the hepatic artery is enhanced by the injection of contrast agent. The enhanced signals are often not stably acquired due to non-optimal contrast timing. Other vascular structures, such as the hepatic vein or portal vein, can be enhanced as well in the arterial phase, which can adversely affect the segmentation results. Furthermore, the arteries might suffer from partial volume effects due to their small diameter. To overcome these difficulties, we propose a framework for robust hepatic artery segmentation requiring a minimal amount of user interaction. First, an efficient multi-scale Hessian-based vesselness filter is applied on the artery phase CT image, aiming to enhance vessel structures within a specified diameter range. Second, the vesselness response is processed using a Bayesian classifier to identify the most probable vessel structures. Because the vesselness filter does not normally perform ideally at vessel bifurcations or on segments corrupted by noise, two vessel-reconnection techniques are proposed. The first technique uses a directional morphological operator to dilate vessel segments along their centerline directions, attempting to fill the gap between broken vascular segments. The second technique analyzes the connectivity of vessel segments and reconnects disconnected segments and branches. Finally, a 3D vessel tree is reconstructed. The algorithm has been evaluated using 18 CT images of the liver. To quantitatively measure the similarities between segmented and reference vessel trees, the skeleton coverage and mean symmetric distance are calculated to quantify the agreement between reference and segmented vessel skeletons, resulting in an average of 0.55 ± 0.27 and 12.7 ± 7.9 mm (mean ± standard deviation), respectively.
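The multi-scale Hessian-based vesselness enhancement named above corresponds to the classic Frangi filter, which is available in scikit-image. A toy example on a synthetic bright tube (the image and scale range are illustrative, not the paper's data):

```python
import numpy as np
from skimage.filters import frangi

# Toy slice: a bright horizontal "vessel" of ~3 px diameter on dark tissue.
img = np.zeros((64, 64))
img[30:33, :] = 1.0

# Multi-scale Hessian vesselness (Frangi), probing the expected diameter range;
# black_ridges=False selects bright tubular structures on a dark background.
response = frangi(img, sigmas=[1, 2, 3], black_ridges=False)
```

The response is high along the vessel centerline and near zero in flat tissue, which is what makes it a useful input to the subsequent Bayesian vessel/background classification.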
Drooger, Jan C; Jager, Agnes; Lam, Mei-Ho; den Boer, Mathilda D; Sleijfer, Stefan; Mathijssen, Ron H J; de Bruijn, Peter
2015-10-10
The aim of this study was to validate a previously developed highly sensitive ultra-performance liquid chromatography/tandem mass spectrometry (UPLC-MS/MS) method for quantification of tamoxifen and its three main metabolites (N-desmethyl-tamoxifen, 4-hydroxy-tamoxifen and 4-hydroxy-N-desmethyl-tamoxifen) in scalp hair. This non-invasive method might, by segmental analysis of hair, be useful in determining the concentration of drugs and their metabolites over time, which can be used to study a wide variety of clinically relevant questions. Hair samples (150-300 hair strands, cut as close to the scalp as possible from the posterior vertex region of the head) were collected from female patients taking tamoxifen 20mg daily (n=19). The analytes were extracted using a liquid-liquid extraction procedure with carbonate buffer at pH 8.8 and a mixture of n-hexane/isopropanol, followed by UPLC-MS/MS chromatography, based on an earlier validated method. The calibration curves were linear in the range of 1.00-200 pmol for tamoxifen and N-desmethyl-tamoxifen, with a lower limit of quantitation of 1.00 pmol, and 0.100-20.0 pmol for endoxifen and 4-hydroxy-tamoxifen, with a lower limit of quantitation of 0.100 pmol. Assay performance was fair, with within-run and between-run variability less than 9.24 at the three quality control samples and less than 15.7 at the lower limit of quantitation. Importantly, a steep linear decline was observed from distal to proximal hair segments. This is probably due to UV exposure, as we showed degradation of tamoxifen and its metabolites after exposure to UV light. Furthermore, higher concentrations of tamoxifen were found in black hair samples compared to blond and brown hair samples. We conclude that measurement of the concentration of tamoxifen and its main metabolites in hair is possible with this selective, sensitive, accurate and precise UPLC-MS/MS method. However, for tamoxifen it does not seem possible to determine exposure over time with segmental analysis of hair, probably largely due to the effect of UV irradiation. Further research should therefore focus on the quantification, in segmented scalp hair, of other anticancer drugs that are less sensitive to UV irradiation. Copyright © 2015 Elsevier B.V. All rights reserved.
Left ventricle segmentation via graph cut distribution matching.
Ben Ayed, Ismail; Punithakumar, Kumaradevan; Li, Shuo; Islam, Ali; Chong, Jaron
2009-01-01
We present a discrete kernel density matching energy for segmenting the left ventricle cavity in cardiac magnetic resonance sequences. The energy and its graph cut optimization based on an original first-order approximation of the Bhattacharyya measure have not been proposed previously, and yield competitive results in nearly real-time. The algorithm seeks a region within each frame by optimization of two priors, one geometric (distance-based) and the other photometric, each measuring a distribution similarity between the region and a model learned from the first frame. Based on global rather than pixelwise information, the proposed algorithm does not require complex training and optimization with respect to geometric transformations. Unlike related active contour methods, it does not compute iterative updates of computationally expensive kernel densities. Furthermore, the proposed first-order analysis can be used for other intractable energies and, therefore, can lead to segmentation algorithms which share the flexibility of active contours and computational advantages of graph cuts. Quantitative evaluations over 2280 images acquired from 20 subjects demonstrated that the results correlate well with independent manual segmentations by an expert.
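The distribution-matching prior above is built on the Bhattacharyya coefficient between the candidate region's intensity histogram and the model histogram learned from the first frame. A minimal sketch with synthetic histograms (the graph-cut optimization itself is not shown):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two discrete distributions:
    1.0 for identical distributions, approaching 0 for disjoint ones.
    This is the similarity the graph-cut energy matches between the
    segmented region and the model distribution."""
    p = p / p.sum()
    q = q / q.sum()
    return np.sum(np.sqrt(p * q))

rng = np.random.default_rng(1)
# Hypothetical 32-bin intensity histograms: a model, a sample from the same
# distribution (a good candidate region), and one from a different tissue.
model = np.histogram(rng.normal(100, 10, 5000), bins=32, range=(0, 255))[0].astype(float)
same  = np.histogram(rng.normal(100, 10, 5000), bins=32, range=(0, 255))[0].astype(float)
other = np.histogram(rng.normal(180, 10, 5000), bins=32, range=(0, 255))[0].astype(float)
```

A segmentation energy would reward regions whose coefficient against the model is close to 1, for both the photometric and the distance-based distributions.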
Collaborative SDOCT Segmentation and Analysis Software.
Yun, Yeyi; Carass, Aaron; Lang, Andrew; Prince, Jerry L; Antony, Bhavna J
2017-02-01
Spectral domain optical coherence tomography (SDOCT) is routinely used in the management and diagnosis of a variety of ocular diseases. This imaging modality also finds widespread use in research, where quantitative measurements obtained from the images are used to track disease progression. In recent years, the number of available scanners and imaging protocols has grown, and there is a distinct absence of a unified tool capable of visualizing, segmenting, and analyzing the data. This is especially noteworthy in longitudinal studies, where data from older scanners and/or protocols may need to be analyzed. Here, we present a graphical user interface (GUI) that allows users to visualize and analyze SDOCT images obtained from two commonly used scanners. The retinal surfaces in the scans can be segmented using a previously described method, and the retinal layer thicknesses can be compared to a normative database. If necessary, the segmented surfaces can also be corrected and the changes applied. The interface also allows users to import and export retinal layer thickness data to an SQL database, thereby allowing for the collation of data from a number of collaborating sites.
Hierarchical image segmentation via recursive superpixel with adaptive regularity
NASA Astrophysics Data System (ADS)
Nakamura, Kensuke; Hong, Byung-Woo
2017-11-01
We present a fast and accurate hierarchical segmentation algorithm based on a recursive superpixel technique. We propose a superpixel energy formulation in which the trade-off between data fidelity and regularization is dynamically determined based on the local residual in the energy optimization procedure. We also present an energy optimization algorithm that allows a pixel to be shared by multiple regions, improving accuracy and yielding an appropriate number of segments. Qualitative and quantitative evaluations demonstrate that our algorithm, combining the proposed energy and optimization, outperforms the conventional k-means algorithm by up to 29.10% in F-measure. We also perform comparative analysis with state-of-the-art algorithms in hierarchical segmentation. Our algorithm yields smooth regions throughout the hierarchy, as opposed to the others, which include insignificant details. Our algorithm also achieves the best balance between accuracy and computational time: it runs 36.48% faster than the region-merging approach, the fastest of the compared algorithms, while achieving comparable accuracy.
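For reference, the conventional k-means superpixel baseline that the method above is compared against can be sketched as clustering in an (x, y, weighted-intensity) feature space, where the intensity weight plays the role of the fixed regularity trade-off that the proposed method instead adapts locally. All parameters below are illustrative.

```python
import numpy as np

def kmeans_superpixels(img, n_side=4, weight=10.0, n_iter=5):
    """Baseline k-means superpixels in (y, x, intensity) feature space.
    Larger `weight` favors data fidelity (intensity-coherent segments);
    smaller values favor compact, grid-like segments."""
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    feats = np.stack([yy.ravel(), xx.ravel(), weight * img.ravel()], axis=1).astype(float)
    # initialize cluster centers on a regular grid
    cy = np.linspace(h / (2 * n_side), h - h / (2 * n_side), n_side)
    cx = np.linspace(w / (2 * n_side), w - w / (2 * n_side), n_side)
    centers = np.array([[y, x, weight * img[int(y), int(x)]] for y in cy for x in cx])
    for _ in range(n_iter):
        # assign every pixel to its nearest center, then recompute centers
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        for k in range(len(centers)):
            sel = feats[assign == k]
            if len(sel):
                centers[k] = sel.mean(axis=0)
    return assign.reshape(h, w)

img = np.zeros((32, 32)); img[:, 16:] = 1.0   # toy two-region image
labels = kmeans_superpixels(img)
```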
NASA Astrophysics Data System (ADS)
He, Youmin; Qu, Yueqiao; Zhang, Yi; Ma, Teng; Zhu, Jiang; Miao, Yusi; Humayun, Mark; Zhou, Qifa; Chen, Zhongping
2017-02-01
Age-related macular degeneration (AMD) is an eye condition that is considered to be one of the leading causes of blindness among people over 50. Recent studies suggest that the mechanical properties of the retinal layers are affected during the early onset of the disease. Therefore, it is necessary to identify such changes in the individual layers of the retina so as to provide useful information for disease diagnosis. In this study, we propose using an acoustic radiation force optical coherence elastography (ARF-OCE) system to dynamically excite the porcine retina and detect the vibrational displacement with phase-resolved Doppler optical coherence tomography. Due to the vibrational mechanism of the tissue response, the image quality is compromised during elastogram acquisition. In order to properly analyze the images, all signals, including the trigger and control signals for excitation, as well as detection and scanning signals, are synchronized within the OCE software and kept consistent between frames, enabling straightforward phase unwrapping and elasticity analysis. In addition, a combination of segmentation algorithms is used to accommodate the compromised image quality. An automatic 3D segmentation method has been developed to isolate and measure the relative elasticity of every individual retinal layer. Two different segmentation schemes, based on random walker and dynamic programming, are implemented. The algorithm has been validated using a 3D region of the porcine retina, where individual layers have been isolated and analyzed using statistical methods. Errors relative to manual segmentation are then calculated.
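Dynamic-programming layer segmentation of the kind mentioned above is typically a minimum-cost path across the B-scan: each column contributes a cost (e.g., an intensity-gradient term), and the boundary is the cheapest path with a bounded row step between adjacent columns. A generic sketch (not the authors' implementation):

```python
import numpy as np

def dp_boundary(cost):
    """Find the minimum-cost left-to-right path through `cost` (rows x cols)
    with |row step| <= 1 between adjacent columns, a standard
    dynamic-programming scheme for OCT layer boundaries."""
    h, w = cost.shape
    acc = cost.copy()
    for j in range(1, w):
        prev = acc[:, j - 1]
        # cheapest predecessor among rows i-1, i, i+1
        best = np.minimum(np.minimum(np.roll(prev, 1), prev), np.roll(prev, -1))
        best[0] = min(prev[0], prev[1])      # fix roll wrap-around at borders
        best[-1] = min(prev[-1], prev[-2])
        acc[:, j] = cost[:, j] + best
    # backtrack from the cheapest end point
    path = [int(acc[:, -1].argmin())]
    for j in range(w - 2, -1, -1):
        i = path[-1]
        lo, hi = max(0, i - 1), min(h - 1, i + 1)
        path.append(lo + int(acc[lo:hi + 1, j].argmin()))
    return np.array(path[::-1])

# synthetic B-scan cost: a low-cost boundary along row 20
cost = np.ones((40, 60))
cost[20, :] = 0.0
boundary = dp_boundary(cost)
```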
NASA Astrophysics Data System (ADS)
Guldner, Ian H.; Yang, Lin; Cowdrick, Kyle R.; Wang, Qingfei; Alvarez Barrios, Wendy V.; Zellmer, Victoria R.; Zhang, Yizhe; Host, Misha; Liu, Fang; Chen, Danny Z.; Zhang, Siyuan
2016-04-01
Metastatic microenvironments are spatially and compositionally heterogeneous. This seemingly stochastic heterogeneity provides researchers great challenges in elucidating factors that determine metastatic outgrowth. Herein, we develop and implement an integrative platform that will enable researchers to obtain novel insights from intricate metastatic landscapes. Our two-segment platform begins with whole tissue clearing, staining, and imaging to globally delineate metastatic landscape heterogeneity with spatial and molecular resolution. The second segment of our platform applies our custom-developed SMART 3D (Spatial filtering-based background removal and Multi-chAnnel forest classifiers-based 3D ReconsTruction), a multi-faceted image analysis pipeline, permitting quantitative interrogation of functional implications of heterogeneous metastatic landscape constituents, from subcellular features to multicellular structures, within our large three-dimensional (3D) image datasets. Coupling whole tissue imaging of brain metastasis animal models with SMART 3D, we demonstrate the capability of our integrative pipeline to reveal and quantify volumetric and spatial aspects of brain metastasis landscapes, including diverse tumor morphology, heterogeneous proliferative indices, metastasis-associated astrogliosis, and vasculature spatial distribution. Collectively, our study demonstrates the utility of our novel integrative platform to reveal and quantify the global spatial and volumetric characteristics of the 3D metastatic landscape with unparalleled accuracy, opening new opportunities for unbiased investigation of novel biological phenomena in situ.
ELITE S2 - A Facility for Quantitative Human Movement Analysis on Board the ISS
NASA Astrophysics Data System (ADS)
Neri, Gianluca; Mascetti, Gabriele; Zolesi, Valfredo
2014-11-01
This paper describes the activities for utilization and control of ELITE S2 on board the International Space Station (ISS). ELITE S2 is a payload of the Italian Space Agency (ASI) for quantitative human movement analysis in weightlessness. Within the frame of a bilateral agreement with NASA, ASI has funded a number of facilities enabling different scientific experiments on board the ISS. ELITE S2 was developed by the ASI contractor Kayser Italia, delivered to the Kennedy Space Center in 2006 for pre-flight processing, launched in 2007 by the Space Shuttle Endeavour (STS-118), integrated in the U.S. lab, and used during Increments 16/17 (2008) and 33/34 (2012/2013). The ELITE S2 flight segment comprises equipment mounted in an Express Rack and a number of stowed items to be deployed for experiment performance (video cameras and accessories). The ground segment consists of a User Support Operations Center (based at Kayser Italia) enabling real-time payload control and a number of User Home Bases (located at the ASI and PIs' premises) for the scientific assessment of experiment performance. Two scientific protocols on reaching and cognitive processing, IMAGINE 2 and MOVE, have been successfully performed in eight sessions involving three ISS crewmembers.
3D OCT imaging in clinical settings: toward quantitative measurements of retinal structures
NASA Astrophysics Data System (ADS)
Zawadzki, Robert J.; Fuller, Alfred R.; Zhao, Mingtao; Wiley, David F.; Choi, Stacey S.; Bower, Bradley A.; Hamann, Bernd; Izatt, Joseph A.; Werner, John S.
2006-02-01
The acquisition speed of current FD-OCT (Fourier Domain - Optical Coherence Tomography) instruments allows rapid screening of three-dimensional (3D) volumes of human retinas in clinical settings. To take advantage of this ability requires software used by physicians to be capable of displaying and accessing volumetric data as well as supporting post processing in order to access important quantitative information such as thickness maps and segmented volumes. We describe our clinical FD-OCT system used to acquire 3D data from the human retina over the macula and optic nerve head. B-scans are registered to remove motion artifacts and post-processed with customized 3D visualization and analysis software. Our analysis software includes standard 3D visualization techniques along with a machine learning support vector machine (SVM) algorithm that allows a user to semi-automatically segment different retinal structures and layers. Our program makes possible measurements of the retinal layer thickness as well as volumes of structures of interest, despite the presence of noise and structural deformations associated with retinal pathology. Our software has been tested successfully in clinical settings for its efficacy in assessing 3D retinal structures in healthy as well as diseased cases. Our tool facilitates diagnosis and treatment monitoring of retinal diseases.
NASA Astrophysics Data System (ADS)
Cabrera Fernandez, Delia; Salinas, Harry M.; Somfai, Gabor; Puliafito, Carmen A.
2006-03-01
Optical coherence tomography (OCT) is a rapidly emerging medical imaging technology. In ophthalmology, OCT is a powerful tool because it enables visualization of the cross-sectional structure of the retina and anterior eye with higher resolution than any other non-invasive imaging modality. Furthermore, OCT image information can be quantitatively analyzed, enabling objective assessment of features such as macular edema and diabetic retinopathy. We present specific improvements in the quantitative analysis of OCT images, obtained by combining the diffusion equation with the free Schrödinger equation. In this formulation, important features of the image can be extracted by extending the analysis from the real axis to the complex domain. Experimental results indicate that our proposed novel approach performs well in speckle-noise removal and in the enhancement and segmentation of the various cellular layers of the retina imaged by OCT.
Zhang, Zhijun; Zhu, Meihua; Ashraf, Muhammad; Broberg, Craig S; Sahn, David J; Song, Xubo
2014-12-01
Quantitative analysis of right ventricle (RV) motion is important for studying the mechanisms of congenital and acquired diseases. Unlike the left ventricle (LV), motion estimation of the RV is more difficult because of its complex shape and thin myocardium. Although finite element models on MR images and speckle tracking on echocardiography have shown promising results for RV strain analysis, these methods can be improved, since the temporal smoothness of the motion is not considered. In earlier work, the authors proposed a temporally diffeomorphic motion estimation method in which a spatiotemporal transformation is estimated by optimizing a registration energy functional of the velocity field. The proposed motion estimation method is a fully automatic process for general image sequences. The authors apply the method, combined with a semiautomatic myocardium segmentation method, to the RV strain analysis of three-dimensional (3D) echocardiographic sequences of five open-chest pigs under different steady states. The authors compare the peak two-point strains derived by their method with those estimated from sonomicrometry; the results show that the two are highly correlated. The motion of the right ventricular free wall is studied using segmental strains. The baseline sequence results show that the segmental strains in their method are consistent with results obtained by other image modalities such as MRI. The image sequences of pacing steady states show that the segments with the largest strain variation coincide with the pacing sites. The high correlation between the peak two-point strains of their method and sonomicrometry under different steady states demonstrates that their RV motion estimation has high accuracy. The closeness of the segmental strains of their method to those from MRI shows the feasibility of their method for studying RV function using 3D echocardiography.
The strain analysis of the pacing steady states shows the potential utility of their method in the study of RV diseases.
Caetano, Fabiana A; Dirk, Brennan S; Tam, Joshua H K; Cavanagh, P Craig; Goiko, Maria; Ferguson, Stephen S G; Pasternak, Stephen H; Dikeakos, Jimmy D; de Bruyn, John R; Heit, Bryan
2015-12-01
Our current understanding of the molecular mechanisms which regulate cellular processes such as vesicular trafficking has been enabled by conventional biochemical and microscopy techniques. However, these methods often obscure the heterogeneity of the cellular environment, thus precluding a quantitative assessment of the molecular interactions regulating these processes. Herein, we present Molecular Interactions in Super Resolution (MIiSR) software which provides quantitative analysis tools for use with super-resolution images. MIiSR combines multiple tools for analyzing intermolecular interactions, molecular clustering and image segmentation. These tools enable quantification, in the native environment of the cell, of molecular interactions and the formation of higher-order molecular complexes. The capabilities and limitations of these analytical tools are demonstrated using both modeled data and examples derived from the vesicular trafficking system, thereby providing an established and validated experimental workflow capable of quantitatively assessing molecular interactions and molecular complex formation within the heterogeneous environment of the cell.
Learning a cost function for microscope image segmentation.
Nilufar, Sharmin; Perkins, Theodore J
2014-01-01
Quantitative analysis of microscopy images is increasingly important in clinical researchers' efforts to unravel the cellular and molecular determinants of disease, and for pathological analysis of tissue samples. Yet, manual segmentation and measurement of cells or other features in images remains the norm in many fields. We report on a new system that aims for robust and accurate semi-automated analysis of microscope images. A user interactively outlines one or more examples of a target object in a training image. We then learn a cost function for detecting more objects of the same type, either in the same or different images. The cost function is incorporated into an active contour model, which can efficiently determine optimal boundaries by dynamic programming. We validate our approach and compare it to some standard alternatives on three different types of microscopic images: light microscopy of blood cells, light microscopy of muscle tissue sections, and electron microscopy cross-sections of axons and their myelin sheaths.
Reliability of Semi-Automated Segmentations in Glioblastoma.
Huber, T; Alber, G; Bette, S; Boeckh-Behrens, T; Gempt, J; Ringel, F; Alberts, E; Zimmer, C; Bauer, J S
2017-06-01
In glioblastoma, quantitative volumetric measurements of contrast-enhancing or fluid-attenuated inversion recovery (FLAIR) hyperintense tumor compartments are needed for an objective assessment of therapy response. The aim of this study was to evaluate the reliability of a semi-automated, region-growing segmentation tool for determining tumor volume in patients with glioblastoma among different users of the software. A total of 320 segmentations of tumor-associated FLAIR changes and contrast-enhancing tumor tissue were performed by different raters (neuroradiologists, medical students, and volunteers). All patients underwent high-resolution magnetic resonance imaging including a 3D-FLAIR and a 3D-MPRage sequence. Segmentations were done using a semi-automated, region-growing segmentation tool. Intra- and inter-rater reliability were addressed by intra-class correlation (ICC). Root-mean-square error (RMSE) was used to determine the precision error. The Dice score was calculated to measure the overlap between segmentations. Semi-automated segmentation showed a high ICC (> 0.985) for all groups, indicating excellent intra- and inter-rater reliability. Significantly smaller precision errors and higher Dice scores were observed for FLAIR segmentations compared with segmentations of contrast enhancement. Single-rater segmentations showed the lowest RMSE for FLAIR, 3.3% (MPRage: 8.2%). Both single raters and neuroradiologists had the lowest precision error for longitudinal evaluation of FLAIR changes. Semi-automated volumetry of glioblastoma was reliably performed by all groups of raters, even those without neuroradiologic expertise. Interestingly, segmentations of tumor-associated FLAIR changes were more reliable than segmentations of contrast enhancement. In longitudinal evaluations, an experienced rater can reliably detect progressive FLAIR changes of less than 15% in a quantitative way, which could help detect progressive disease earlier.
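The Dice score used here to measure overlap between two raters' segmentations can be computed in a few lines; this is a generic sketch of the standard definition, not the study's software.

```python
import numpy as np

def dice_score(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks:
    2*|A intersect B| / (|A| + |B|); 1.0 means perfect agreement."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / denom)
```

For example, masks that agree on one of three labeled voxels score 2/3.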
NASA Astrophysics Data System (ADS)
Wierts, R.; Jentzen, W.; Quick, H. H.; Wisselink, H. J.; Pooters, I. N. A.; Wildberger, J. E.; Herrmann, K.; Kemerink, G. J.; Backes, W. H.; Mottaghy, F. M.
2018-01-01
The aim was to investigate the quantitative performance of 124I PET/MRI for pre-therapy lesion dosimetry in differentiated thyroid cancer (DTC). Phantom measurements were performed on a PET/MRI system (Biograph mMR, Siemens Healthcare) using 124I and 18F. The PET calibration factor and the influence of radiofrequency coil attenuation were determined using a cylindrical phantom homogeneously filled with radioactivity. The calibration factor was 1.00 ± 0.02 for 18F and 0.88 ± 0.02 for 124I. Near the radiofrequency surface coil, an underestimation of less than 5% in radioactivity concentration was observed. Soft-tissue sphere recovery coefficients were determined using the NEMA IEC body phantom. Recovery coefficients were systematically higher for 18F than for 124I. In addition, the six spheres of the phantom were segmented using a PET-based iterative segmentation algorithm. For all 124I measurements, the deviations in segmented lesion volume and mean radioactivity concentration relative to the actual values were smaller than 15% and 25%, respectively. The effect of MR-based attenuation correction (three- and four-segment µ-maps) on bone lesion quantification was assessed using radioactive spheres filled with a K2HPO4 solution mimicking bone lesions. The four-segment µ-map resulted in an underestimation of the imaged radioactivity concentration of up to 15%, whereas the three-segment µ-map resulted in an overestimation of up to 10%. For twenty lesions identified in six patients, a comparison of 124I PET/MRI to PET/CT was performed with respect to segmented lesion volume and radioactivity concentration. The interclass correlation coefficients showed excellent agreement in segmented lesion volume and radioactivity concentration (0.999 and 0.95, respectively). In conclusion, accurate quantitative 124I PET/MRI is feasible and could be used to perform radioiodine pre-therapy lesion dosimetry in DTC.
Chen, Cheng; Wang, Wei; Ozolek, John A.; Rohde, Gustavo K.
2013-01-01
We describe a new supervised learning-based template matching approach for segmenting cell nuclei from microscopy images. The method uses examples selected by a user to build a statistical model which captures the texture and shape variations of the nuclear structures in a given dataset to be segmented. Segmentation of subsequent, unlabeled images is then performed by finding the model instance that best matches (in the normalized cross correlation sense) the local neighborhood in the input image. We demonstrate the application of our method to segmenting nuclei from a variety of imaging modalities and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software is able to obtain high accuracy across the several imaging modalities studied. Results also demonstrate that, relative to several existing methods, the template-based method we propose is more robust: it better handles variations in illumination and in texture from different imaging modalities, provides smoother and more accurate segmentation borders, and better handles cluttered nuclei. PMID:23568787
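The normalized cross correlation criterion mentioned above can be sketched as follows; this is a generic zero-mean NCC with a brute-force sliding-window search, not the authors' statistical-model-based matcher, and all names are illustrative.

```python
import numpy as np

def normalized_cross_correlation(patch, template):
    """Zero-mean normalized cross correlation between an image patch
    and a template of the same shape; ranges in [-1, 1]."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    if denom == 0:
        return 0.0  # flat patch or template: no correlation defined
    return float((p * t).sum() / denom)

def best_match(image, template):
    """Slide the template over the image and return the top-left
    corner and score of the highest-NCC location."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = -2.0, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            score = normalized_cross_correlation(image[i:i + h, j:j + w],
                                                 template)
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best
```

An exact copy of the template embedded in the image scores 1.0 at its location.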
El-Merhi, Fadi; Mohamad, May; Haydar, Ali; Naffaa, Lena; Nasr, Rami; Deeb, Ibrahim Al-Sheikh; Hamieh, Nadine; Tayara, Ziad; Saade, Charbel
2018-04-01
To evaluate the performance of non-contrast computed tomography (CT) by reporting the difference in attenuation between normal and inflamed renal parenchyma in patients clinically diagnosed with acute pyelonephritis (APN). This is a retrospective study of the non-contrast CT evaluation of 74 patients admitted with a clinical diagnosis of APN who failed to respond to 48 h of antibiotic treatment. Mean attenuation values in Hounsfield units (HU) were measured in the upper, middle, and lower segments of the inflamed and the normal kidney of the same patient. An independent t-test was performed for statistical analysis. Image evaluation included receiver operating characteristic (ROC), visual grading characteristic (VGC), and kappa analyses. The mean attenuation in the upper, middle, and lower segments of the inflamed renal cortex was 32%, 25%, and 29% lower than the mean attenuation of the corresponding cortical segments of the contralateral normal kidney, respectively (p<0.01). The mean attenuation in the upper, middle, and lower segments of the inflamed renal medulla was 48%, 21%, and 30% lower than the mean attenuation of the corresponding medullary segments of the contralateral normal kidney (p<0.02). The mean attenuation of the inflamed renal cortex and medulla was 29% and 30% lower, respectively, than that of the non-inflamed kidney (p<0.001). The AUC-ROC analysis (p<0.001) demonstrated significantly higher scores for pathology detection, irrespective of image quality, compared to clinical and laboratory results, with an increase in inter-reader agreement from poor to substantial. Non-contrast CT showed a significant decrease in the parenchymal density of the kidney affected by APN in comparison to the contralateral normal kidney of the same patient. This can be incorporated into the diagnostic criteria for APN on non-contrast CT in the emergency setting. Copyright © 2017 Elsevier Inc. All rights reserved.
Computer-aided pulmonary image analysis in small animal models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Ziyue; Mansoor, Awais; Mollura, Daniel J.
Purpose: To develop an automated pulmonary image analysis framework for infectious lung diseases in small animal models. Methods: The authors describe a novel pathological lung and airway segmentation method for small animals. The proposed framework includes identification of abnormal imaging patterns pertaining to infectious lung diseases. First, the authors’ system estimates an expected lung volume by utilizing a regression function between total lung capacity and approximated rib cage volume. A significant difference between the expected lung volume and the initial lung segmentation indicates the presence of severe pathology, and invokes a machine learning based abnormal imaging pattern detection system next. The final stage of the proposed framework is the automatic extraction of the airway tree, for which new affinity relationships within the fuzzy connectedness image segmentation framework are proposed by combining Hessian and gray-scale morphological reconstruction filters. Results: 133 CT scans were collected from four different studies encompassing a wide spectrum of pulmonary abnormalities pertaining to two commonly used small animal models (ferret and rabbit). Sensitivity and specificity were greater than 90% for pathological lung segmentation (average dice similarity coefficient > 0.9). While qualitative visual assessments of airway tree extraction were performed by the participating expert radiologists, for quantitative evaluation the authors validated the proposed airway extraction method by using the publicly available EXACT’09 data set. Conclusions: The authors developed a comprehensive computer-aided pulmonary image analysis framework for preclinical research applications. The proposed framework consists of automatic pathological lung segmentation and accurate airway tree extraction. The framework has high sensitivity and specificity; therefore, it can contribute advances in preclinical research in pulmonary diseases.
PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics.
Möller, Birgit; Poeschl, Yvonne; Plötner, Romina; Bürstenbinder, Katharina
2017-11-01
Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell junctions and three-cell junctions, respectively. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. © 2017 American Society of Plant Biologists. All Rights Reserved.
Venhuizen, Freerk G; van Ginneken, Bram; Liefers, Bart; van Asten, Freekje; Schreur, Vivian; Fauser, Sascha; Hoyng, Carel; Theelen, Thomas; Sánchez, Clara I
2018-04-01
We developed a deep learning algorithm for the automatic segmentation and quantification of intraretinal cystoid fluid (IRC) in spectral domain optical coherence tomography (SD-OCT) volumes independent of the device used for acquisition. A cascade of neural networks was introduced to include prior information on the retinal anatomy, boosting performance significantly. The proposed algorithm approached human performance reaching an overall Dice coefficient of 0.754 ± 0.136 and an intraclass correlation coefficient of 0.936, for the task of IRC segmentation and quantification, respectively. The proposed method allows for fast quantitative IRC volume measurements that can be used to improve patient care, reduce costs, and allow fast and reliable analysis in large population studies.
A review of automatic mass detection and segmentation in mammographic images.
Oliver, Arnau; Freixenet, Jordi; Martí, Joan; Pérez, Elsa; Pont, Josep; Denton, Erika R E; Zwiggelaar, Reyer
2010-04-01
The aim of this paper is to review existing approaches to the automatic detection and segmentation of masses in mammographic images, highlighting the key points and main differences between the strategies used. The key objective is to point out the advantages and disadvantages of the various approaches. In contrast with other reviews, which only describe and compare different approaches qualitatively, this review also provides a quantitative comparison. The performance of seven mass detection methods is compared using two different mammographic databases: a public digitised database and a local full-field digital database. The results are given in terms of Receiver Operating Characteristic (ROC) and Free-response Receiver Operating Characteristic (FROC) analysis. Copyright 2009 Elsevier B.V. All rights reserved.
Detection of Focal Cortical Dysplasia Lesions in MRI Using Textural Features
NASA Astrophysics Data System (ADS)
Loyek, Christian; Woermann, Friedrich G.; Nattkemper, Tim W.
Focal cortical dysplasia (FCD) is a frequent cause of medically refractory partial epilepsy. The visual identification of FCD lesions on magnetic resonance images (MRI) is a challenging task in standard radiological analysis. Quantitative image analysis that tries to assist in the diagnosis of FCD lesions is an active field of research. In this work we investigate the potential of different texture features in order to explore to what extent they are suitable for detecting lesional tissue. We show promising first results based on segmentation and texture classification.
Glial brain tumor detection by using symmetry analysis
NASA Astrophysics Data System (ADS)
Pedoia, Valentina; Binaghi, Elisabetta; Balbi, Sergio; De Benedictis, Alessandro; Monti, Emanuele; Minotto, Renzo
2012-02-01
In this work a fully automatic algorithm to detect brain tumors using symmetry analysis is proposed. In recent years a great deal of research effort in the field of medical imaging has focused on brain tumor segmentation. The quantitative analysis of MRI brain tumors yields useful key indicators of disease progression. The complex problem of segmenting tumors in MRI can be successfully addressed by considering modular and multi-step approaches mimicking the human visual inspection process. Tumor detection is often an essential preliminary phase for solving the segmentation problem successfully. In visual analysis of MRI, the first step of the expert's cognitive process is the detection of an anomaly with respect to normal tissue, whatever its nature. A healthy brain has a strong sagittal symmetry that is weakened by the presence of a tumor. The comparison between the healthy and ill hemispheres, considering that tumors are generally not symmetrically placed in both hemispheres, is used to detect the anomaly. A clustering method based on energy minimization through graph cuts is applied to the volume computed as the difference between the left hemisphere and the right hemisphere mirrored across the symmetry plane. Differential analysis involves the loss of knowledge of the tumor side; the ill hemisphere is then recognized through a histogram analysis. Many experiments were performed to assess the performance of the detection strategy on MRI volumes containing tumors varied in terms of shape, position, and intensity level. The experiments showed good results even in complex situations.
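The core idea, comparing one hemisphere with the mirror of the other across the sagittal plane, can be sketched as follows. This assumes the volume is already aligned so the symmetry plane is the central slice along one axis; the hemisphere-selection heuristic is a simplified stand-in for the histogram analysis described in the abstract, and all names are illustrative.

```python
import numpy as np

def asymmetry_map(volume, axis=2):
    """Absolute difference between one hemisphere and the mirror of
    the other across the midplane along `axis` (volume assumed
    pre-aligned so the sagittal plane is the central slice)."""
    mirrored = np.flip(volume, axis=axis)
    half = volume.shape[axis] // 2
    idx = np.arange(half)
    left = np.take(volume, idx, axis=axis)
    mirrored_right = np.take(mirrored, idx, axis=axis)
    return np.abs(left - mirrored_right)

def ill_hemisphere(volume, axis=2):
    """Simplified stand-in for the histogram analysis: flag the
    hemisphere with higher mean intensity inside the asymmetric
    region (an illustrative assumption)."""
    diff = asymmetry_map(volume, axis)
    half = volume.shape[axis] // 2
    left = np.take(volume, np.arange(half), axis=axis)
    right = np.flip(np.take(volume, np.arange(half, 2 * half), axis=axis),
                    axis=axis)
    mask = diff > diff.mean()
    if not mask.any():
        return "none"
    return "left" if left[mask].mean() > right[mask].mean() else "right"
```

A bright anomaly in one hemisphere produces a high asymmetry value at its mirrored location, which a clustering step can then isolate.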
Assessing the Effects of Software Platforms on Volumetric Segmentation of Glioblastoma
Dunn, William D.; Aerts, Hugo J.W.L.; Cooper, Lee A.; Holder, Chad A.; Hwang, Scott N.; Jaffe, Carle C.; Brat, Daniel J.; Jain, Rajan; Flanders, Adam E.; Zinn, Pascal O.; Colen, Rivka R.; Gutman, David A.
2017-01-01
Background Radiological assessments of biologically relevant regions in glioblastoma have been associated with genotypic characteristics, implying a potential role in personalized medicine. Here, we assess the reproducibility and association with survival of two volumetric segmentation platforms and explore how methodology could impact subsequent interpretation and analysis. Methods Post-contrast T1- and T2-weighted FLAIR MR images of 67 TCGA patients were segmented into five distinct compartments (necrosis, contrast-enhancement, FLAIR, post contrast abnormal, and total abnormal tumor volumes) by two quantitative image segmentation platforms: 3D Slicer and a method based on Velocity AI and FSL. We investigated the internal consistency of each platform by correlation statistics, association with survival, and concordance with consensus neuroradiologist ratings using ordinal logistic regression. Results We found high correlations between the two platforms for FLAIR, post contrast abnormal, and total abnormal tumor volumes (Spearman's r(67) = 0.952, 0.959, and 0.969, respectively). Only modest agreement was observed for necrosis and contrast-enhancement volumes (r(67) = 0.693 and 0.773, respectively), likely arising from differences in the manual and automated segmentation methods applied to these regions by 3D Slicer and Velocity AI/FSL, respectively. Survival analysis based on AUC revealed significant predictive power of both platforms for the following volumes: contrast-enhancement, post contrast abnormal, and total abnormal tumor volumes. Finally, ordinal logistic regression demonstrated correspondence to manual ratings for several features. Conclusion Tumor volume measurements from both volumetric platforms produced highly concordant and reproducible estimates across platforms for general features.
As automated or semi-automated volumetric measurements replace manual linear or area measurements, it will become increasingly important to keep in mind that measurement differences between segmentation platforms for more detailed features could influence downstream survival or radiogenomic analyses. PMID:29600296
Segmentation of pomegranate MR images using spatial fuzzy c-means (SFCM) algorithm
NASA Astrophysics Data System (ADS)
Moradi, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.
2011-10-01
Segmentation is one of the fundamental issues of image processing and machine vision. It plays a prominent role in a variety of image processing applications. In this paper, one such application, the segmentation of pomegranate MR images, is explored. Pomegranate is a fruit with pharmacological properties such as anti-viral and anti-cancer activity. Having a high-quality product in hand is a critical factor in its marketing, and the internal quality of the product is of central importance in the sorting process. The determination of qualitative features cannot be made manually; therefore, the segmentation of the internal structures of the fruit needs to be performed as accurately as possible in the presence of noise. The fuzzy c-means (FCM) algorithm is noise-sensitive, and noisy pixels are misclassified. As a solution, this paper proposes the spatial FCM (SFCM) algorithm for pomegranate MR image segmentation. The algorithm incorporates spatial neighborhood information into FCM and modifies the fuzzy membership function for each class. Segmentation results on original pomegranate MR images and on images corrupted by Gaussian, salt-and-pepper, and speckle noise show that the SFCM algorithm performs considerably better than the FCM algorithm. Also, after several steps of qualitative and quantitative analysis, we conclude that the SFCM algorithm with a 5×5 window performs better than with other window sizes.
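A minimal sketch of spatial FCM of this general kind: standard intensity-based FCM memberships are reweighted by the summed memberships in each pixel's neighborhood, so isolated noisy pixels are pulled toward the label of their surroundings. The parameter names (`m`, `p`, `q`, window size) follow common SFCM formulations and are assumptions, not the paper's exact implementation.

```python
import numpy as np

def fcm_memberships(image, centers, m=2.0):
    """Standard FCM memberships of each pixel to each class center,
    based on intensity distance only; returns shape (n_classes, H, W)."""
    img = np.asarray(image, dtype=float)
    d = np.stack([np.abs(img - c) + 1e-9 for c in centers])
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=0, keepdims=True)

def window_sum(a, size=5):
    """Sum of `a` over a size x size neighborhood (zero-padded)."""
    r = size // 2
    padded = np.pad(a, r, mode="constant")
    out = np.zeros_like(a, dtype=float)
    for di in range(size):
        for dj in range(size):
            out += padded[di:di + a.shape[0], dj:dj + a.shape[1]]
    return out

def sfcm_memberships(image, centers, m=2.0, p=1.0, q=1.0, size=5):
    """Spatial FCM sketch: reweight each membership by the summed
    memberships of the pixel's neighbors, then renormalize, so
    isolated noisy pixels follow the label of their neighborhood."""
    u = fcm_memberships(image, centers, m)
    h = np.stack([window_sum(u[c], size) for c in range(len(centers))])
    new_u = (u ** p) * (h ** q)
    return new_u / new_u.sum(axis=0, keepdims=True)
```

On a flat image with a single noisy pixel, plain FCM assigns the outlier to the wrong class, while the spatial term corrects it.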
NASA Astrophysics Data System (ADS)
Brekhna, Brekhna; Mahmood, Arif; Zhou, Yuanfeng; Zhang, Caiming
2017-11-01
Superpixels have gradually become popular in computer vision and image processing applications. However, no comprehensive study has been performed to evaluate the robustness of superpixel algorithms in regard to common forms of noise in natural images. We evaluated the robustness of 11 recently proposed algorithms to different types of noise. The images were corrupted with various degrees of Gaussian blur, additive white Gaussian noise, and impulse noise that either made the object boundaries weak or added extra information to them. We performed a robustness analysis of simple linear iterative clustering (SLIC), Voronoi Cells (VCells), flooding-based superpixel generation (FCCS), bilateral geodesic distance (Bilateral-G), superpixel via geodesic distance (SSS-G), manifold SLIC (M-SLIC), Turbopixels, superpixels extracted via energy-driven sampling (SEEDS), lazy random walk (LRW), real-time superpixel segmentation by DBSCAN clustering, and video supervoxels using partially absorbing random walks (PARW) algorithms. The evaluation process was carried out both qualitatively and quantitatively. For quantitative performance comparison, we used achievable segmentation accuracy (ASA), compactness, under-segmentation error (USE), and boundary recall (BR) on the Berkeley image database. The results demonstrated that all algorithms suffered performance degradation due to noise. For Gaussian blur, Bilateral-G exhibited optimal results for ASA and USE measures, SLIC yielded optimal compactness, whereas FCCS and DBSCAN remained optimal for BR. For the case of additive Gaussian and impulse noises, FCCS exhibited optimal results for ASA, USE, and BR, whereas Bilateral-G remained a close competitor in ASA and USE for Gaussian noise only. Additionally, Turbopixels demonstrated optimal performance for compactness for both types of noise. Thus, no single algorithm was able to yield optimal results for all three types of noise across all performance measures.
In conclusion, more robust superpixel algorithms must be developed to solve real-world problems effectively.
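Among the quantitative measures used, under-segmentation error (USE) is straightforward to state; the sketch below uses one common min(inside, outside) "leakage" formulation, which may differ in detail from the exact definition used in the evaluation:

```python
import numpy as np

def undersegmentation_error(sp, gt):
    """USE: for every ground-truth segment, penalize each overlapping
    superpixel by min(pixels inside, pixels leaking outside), normalized
    by the image size. 0 means superpixels perfectly respect boundaries."""
    err = 0
    for g in np.unique(gt):
        gmask = gt == g
        for s in np.unique(sp[gmask]):
            smask = sp == s
            err += min((smask & gmask).sum(), (smask & ~gmask).sum())
    return err / sp.size

# Two ground-truth segments on a 4x4 image.
gt = np.zeros((4, 4), dtype=int)
gt[:, 2:] = 1
use_perfect = undersegmentation_error(gt.copy(), gt)            # aligned superpixels
use_single = undersegmentation_error(np.zeros((4, 4), dtype=int), gt)  # one giant superpixel
print(use_perfect, use_single)   # 0.0 1.0
```

A superpixel oversegmentation that never straddles a ground-truth boundary scores 0; the more a superpixel leaks across segments, the higher the error, which is why boundary-weakening noise degrades USE.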
NASA Astrophysics Data System (ADS)
Johri, Ansh; Schimel, Daniel; Noguchi, Audrey; Hsu, Lewis L.
2010-03-01
Imaging is a crucial clinical tool for diagnosis and assessment of pneumonia, but quantitative methods are lacking. Micro-computed tomography (micro CT), designed for lab animals, provides opportunities for non-invasive radiographic endpoints for pneumonia studies. HYPOTHESIS: In vivo micro CT scans of mice with early bacterial pneumonia can be scored quantitatively by semiautomated imaging methods, with good reproducibility and correlation with bacterial dose inoculated, pneumonia survival outcome, and radiologists' scores. METHODS: Healthy mice had intratracheal inoculation of E. coli bacteria (n=24) or saline control (n=11). In vivo micro CT scans were performed 24 hours later with microCAT II (Siemens). Two independent radiologists scored the extent of airspace abnormality, on a scale of 0 (normal) to 24 (completely abnormal). Using the Amira 5.2 software (Mercury Computer Systems), a histogram distribution of voxel counts within the Hounsfield range of -510 to 0 was created and analyzed, and a segmentation procedure was devised. RESULTS: A t-test was performed to determine whether there was a significant difference in the mean voxel value of each mouse in the three experimental groups: Saline Survivors, Pneumonia Survivors, and Pneumonia Non-survivors. The voxel count method statistically distinguished the Saline Survivors from the Pneumonia Survivors and from the Pneumonia Non-survivors, but not the Pneumonia Survivors from the Pneumonia Non-survivors. The segmentation method, however, successfully distinguished the two Pneumonia groups. CONCLUSION: We have pilot-tested an evaluation of early pneumonia in mice using micro CT with a semi-automated method for lung segmentation and scoring. Statistical analysis indicates that the system is reliable and merits further evaluation.
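The histogram-based scoring step can be sketched as a per-mouse fraction of voxels inside the study's -510 to 0 HU band, compared across groups with a t-test (Welch's variant is used here since the study does not state which t-test flavor was applied; the data below are synthetic):

```python
import numpy as np

def airspace_score(volume_hu, lo=-510, hi=0):
    """Fraction of lung voxels inside the HU band the study analyzed
    (-510 to 0): a simple per-mouse consolidation score."""
    volume_hu = np.asarray(volume_hu, float)
    return float(((volume_hu >= lo) & (volume_hu <= hi)).mean())

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return float((a.mean() - b.mean()) / np.sqrt(va + vb))

# Synthetic lungs: healthy tissue mostly near -800 HU; pneumonia adds
# consolidated voxels near -200 HU, which fall inside the scored band.
healthy = np.full(1000, -800.0); healthy[:50] = -200.0
pneumonia = np.full(1000, -800.0); pneumonia[:300] = -200.0
print(airspace_score(healthy), airspace_score(pneumonia))  # 0.05 0.3
```

Given such per-mouse scores for two groups, `welch_t` quantifies the group separation the abstract describes; a large-magnitude statistic corresponds to the significant saline-vs-pneumonia differences reported.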
Liu, Jie; Zhuang, Xiahai; Wu, Lianming; An, Dongaolei; Xu, Jianrong; Peters, Terry; Gu, Lixu
2017-11-01
Objective: In this paper, we propose a fully automatic framework for myocardium segmentation of delayed-enhancement (DE) MRI images without relying on prior patient-specific information. Methods: We employ a multicomponent Gaussian mixture model to deal with the intensity heterogeneity of myocardium caused by the infarcts. To differentiate the myocardium from other tissues with similar intensities, while at the same time maintain spatial continuity, we introduce a coupled level set (CLS) to regularize the posterior probability. The CLS, as a spatial regularization, can be adapted to the image characteristics dynamically. We also introduce an image intensity gradient based term into the CLS, adding an extra force to the posterior probability based framework, to improve the accuracy of myocardium boundary delineation. The prebuilt atlases are propagated to the target image to initialize the framework. Results: The proposed method was tested on datasets of 22 clinical cases, and achieved Dice similarity coefficients of 87.43 ± 5.62% (endocardium), 90.53 ± 3.20% (epicardium) and 73.58 ± 5.58% (myocardium), which have outperformed three variants of the classic segmentation methods. Conclusion: The results can provide a benchmark for the myocardial segmentation in the literature. Significance: DE MRI provides an important tool to assess the viability of myocardium. The accurate segmentation of myocardium, which is a prerequisite for further quantitative analysis of myocardial infarction (MI) region, can provide important support for the diagnosis and treatment management for MI patients.
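The mixture-model step reduces to computing per-pixel component responsibilities; a minimal sketch with illustrative (not paper-derived) component parameters:

```python
import numpy as np

def gmm_responsibilities(x, weights, means, sds):
    """Posterior probability (responsibility) of each Gaussian component
    for every intensity sample: w_k N(x; mu_k, sd_k) / sum_j w_j N(x; mu_j, sd_j)."""
    x = np.asarray(x, float)[:, None]
    w, mu, sd = (np.asarray(v, float) for v in (weights, means, sds))
    pdf = np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))
    joint = w * pdf
    return joint / joint.sum(axis=1, keepdims=True)

# Illustrative 3-component model: healthy myocardium, enhanced infarct,
# and blood pool (the intensity values here are assumptions).
post = gmm_responsibilities([100.0, 300.0], weights=[0.5, 0.3, 0.2],
                            means=[100.0, 300.0, 500.0], sds=[30.0, 40.0, 50.0])
print(post.argmax(axis=1))   # [0 1]
```

In the paper these responsibilities are not used directly as labels; the coupled level set regularizes them spatially before the final decision, which is what prevents intensity-similar tissues from being confused.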
Ochiai, K; Uemura, S; Shimizu, A; Okumoto, Y; Matoh, T
2008-06-01
Boron toxicity tolerance of rice plants was studied. Modern japonica subspecies such as Koshihikari, Nipponbare, and Sasanishiki were tolerant, whereas indica subspecies such as Kasalath and IR36 were intolerant to excessive application of boron (B), even though their shoot B contents under B toxicity were not significantly different. Recombinant inbred lines (RILs) of japonica Nekken-1 and indica IR36 were used for quantitative trait locus (QTL) analysis to identify the gene responsible for B toxicity tolerance. A major QTL that could explain 45% of the phenotypic variation was detected in chromosome 4. The QTL was confirmed using a population derived from a recombinant inbred line which is heterogenic at the QTL region. The QTL was also confirmed in other chromosome segment substitution lines (CSSLs).
Corneal topography with high-speed swept source OCT in clinical examination
Karnowski, Karol; Kaluzny, Bartlomiej J.; Szkulmowski, Maciej; Gora, Michalina; Wojtkowski, Maciej
2011-01-01
We present the applicability of high-speed swept source (SS) optical coherence tomography (OCT) for quantitative evaluation of corneal topography. A high-speed OCT device acquiring 108,000 lines/s permits dense 3D imaging of the anterior segment in less than a quarter of a second, minimizing the influence of motion artifacts on final images and topographic analysis. The swept laser performance was specially adapted to meet imaging depth requirements. For the first time to our knowledge, the results of a quantitative corneal analysis based on SS OCT are presented for clinical pathologies such as keratoconus, a cornea with a superficial postinfectious scar, and a cornea 5 months after penetrating keratoplasty. Additionally, a comparison with widely used commercial systems, a Placido-based topographer and a Scheimpflug imaging-based topographer, is demonstrated. PMID:21991558
Isolation and characterization of cDNA clones for human erythrocyte beta-spectrin.
Prchal, J T; Morley, B J; Yoon, S H; Coetzer, T L; Palek, J; Conboy, J G; Kan, Y W
1987-01-01
Spectrin is an important structural component of the membrane skeleton that underlies and supports the erythrocyte plasma membrane. It is composed of nonidentical alpha (Mr 240,000) and beta (Mr 220,000) subunits, each of which contains multiple homologous 106-amino acid segments. We report here the isolation and characterization of a human erythroid-specific beta-spectrin cDNA clone that encodes parts of the beta-9 through beta-12 repeat segments. This cDNA was used as a hybridization probe to assign the beta-spectrin gene to human chromosome 14 and to begin molecular analysis of the gene and its mRNA transcripts. RNA transfer blot analysis showed that the reticulocyte beta-spectrin mRNA is 7.8 kilobases in length. Southern blot analysis of genomic DNA revealed the presence of restriction fragment length polymorphisms (RFLPs) within the beta-spectrin gene locus. The isolation of human spectrin cDNA probes and the identification of closely linked RFLPs will facilitate analysis of mutant spectrin genes causing congenital hemolytic anemias associated with quantitative and qualitative spectrin abnormalities. PMID:3478706
NASA Astrophysics Data System (ADS)
Win, Khin Yadanar; Choomchuay, Somsak; Hamamoto, Kazuhiko
2017-06-01
The automated segmentation of cell nuclei is an essential stage in the quantitative image analysis of cell nuclei extracted from smear cytology images of pleural fluid. Cell nuclei can indicate cancer, since their size, shape, and staining color are associated with cell proliferation and malignancy. Nevertheless, automatic nuclei segmentation has remained challenging due to artifacts caused by slide preparation, nuclei heterogeneity such as poor contrast and inconsistent staining color, cell variation, and overlapping cells. In this paper, we propose a watershed-based method capable of segmenting the nuclei of a variety of cell types in cytology pleural fluid smear images. First, the original image is preprocessed by converting it to grayscale and enhancing contrast with histogram equalization. Next, the cell nuclei are segmented into a binary image using Otsu thresholding. Undesirable artifacts are then eliminated using morphological operations. Finally, a distance-transform-based watershed method is applied to isolate touching and overlapping cell nuclei. The proposed method was tested on 25 Papanicolaou (Pap)-stained pleural fluid images. The accuracy of our proposed method is 92%. The method is relatively simple, and the results are very promising.
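The Otsu step in this pipeline selects the gray level that maximizes between-class variance; a self-contained sketch, assuming intensities scaled to [0, 1]:

```python
import numpy as np

def otsu_threshold(gray, nbins=256):
    """Otsu's method: score every candidate threshold by the between-class
    variance sigma_b^2 = (mu_T * omega - mu)^2 / (omega * (1 - omega))."""
    hist, edges = np.histogram(gray, bins=nbins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    omega = np.cumsum(p)                 # class-0 probability up to each bin
    mu = np.cumsum(p * centers)          # class-0 intensity mass up to each bin
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return centers[np.nanargmax(sigma_b)]

# Bimodal toy image: a bright "nucleus" (0.8) on a dark background (0.2).
img = np.full((8, 8), 0.2)
img[2:5, 2:5] = 0.8
t = otsu_threshold(img)
binary = img > t
print(binary.sum())   # 9 -- exactly the 3x3 nucleus
```

On real smear images the same threshold separates stained nuclei from background, after which the morphological cleanup and watershed splitting described above take over.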
Zaritsky, Assaf; Natan, Sari; Horev, Judith; Hecht, Inbal; Wolf, Lior; Ben-Jacob, Eshel; Tsarfaty, Ilan
2011-01-01
Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. 
The proposed approach is generic and can be used alone or alongside traditional fluorescence single-cell processing to perform objective, accurate quantitative analyses for various biological applications. PMID:22096600
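The patch-classification idea behind MultiCellSeg can be illustrated with a toy stand-in: multicellular regions carry texture while background is flat, so even a single local-variance feature separates patches. The real method uses an SVM cascade over richer features; the threshold below is illustrative:

```python
import numpy as np

def patch_texture(img, size=4):
    """Local standard deviation per non-overlapping size x size patch --
    a single toy texture feature standing in for MultiCellSeg's feature set."""
    H, W = img.shape
    feats = np.zeros((H // size, W // size))
    for i in range(H // size):
        for j in range(W // size):
            feats[i, j] = img[i*size:(i+1)*size, j*size:(j+1)*size].std()
    return feats

# Toy bright-field image: left half flat background, right half textured cells.
img = np.full((8, 8), 0.5)
img[:, 4::2] = 0.9                  # alternating bright columns -> texture
labels = (patch_texture(img) > 0.05).astype(int)   # 1 = multicellular patch
print(labels)                       # background patches 0, cellular patches 1
```

The published algorithm then reclassifies borderline patches and refines region boundaries with graph cuts, which is what makes it parameter-free in practice.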
Planer, David; Mehran, Roxana; Ohman, E Magnus; White, Harvey D; Newman, Jonathan D; Xu, Ke; Stone, Gregg W
2014-06-01
Troponin elevation is a risk factor for mortality in patients with non-ST-segment-elevation acute coronary syndromes. However, the prognosis of patients with troponin elevation and nonobstructive coronary artery disease (CAD) is unknown. Our objective was therefore to evaluate the impact of nonobstructive CAD in patients with non-ST-segment-elevation acute coronary syndromes and troponin elevation enrolled in the Acute Catheterization and Urgent Intervention Triage Strategy (ACUITY) trial. In the ACUITY trial, 3-vessel quantitative coronary angiography was performed in a formal substudy of 6921 patients presenting with non-ST-segment-elevation acute coronary syndromes. Patients with elevated admission troponin levels were stratified by the presence or absence of obstructive CAD (any lesion with quantitative diameter stenosis >50%). Propensity score matching was performed to adjust for baseline characteristics. Of 2442 patients with elevated troponin, 197 (8.8%) had nonobstructive CAD. Maximum diameter stenosis was 87.4 (73.2, 100.0) versus 22.6 (19.2, 25.7; P<0.0001) in patients with versus without obstructive CAD, respectively. Propensity matching yielded 117 patients with nonobstructive CAD and 331 patients with obstructive CAD, with no significant baseline differences between groups. In the matched cohort, overall 1-year mortality was significantly higher in patients with nonobstructive CAD (5.2% versus 1.6%; hazard ratio [95% confidence interval]=3.44 [1.05, 11.28]; P=0.04), driven by greater noncardiac mortality. Conversely, recurrent myocardial infarction and unplanned revascularization rates were significantly higher in patients with obstructive CAD. Patients with non-ST-segment-elevation acute coronary syndromes and elevated troponin levels but without obstructive CAD, while having low rates of subsequent myocardial infarction and unplanned revascularization, are still at considerable risk for 1-year mortality from noncardiac causes. 
http://www.clinicaltrials.gov. Unique identifier: NCT00093158. © 2014 American Heart Association, Inc.
Localized Charges Control Exciton Energetics and Energy Dissipation in Doped Carbon Nanotubes.
Eckstein, Klaus H; Hartleb, Holger; Achsnich, Melanie M; Schöppler, Friedrich; Hertel, Tobias
2017-10-24
Doping by chemical or physical means is key for the development of future semiconductor technologies. Ideally, charge carriers should be able to move freely in a homogeneous environment. Here, we report on evidence suggesting that excess carriers in electrochemically p-doped semiconducting single-wall carbon nanotubes (s-SWNTs) become localized, most likely due to poorly screened Coulomb interactions with counterions in the Helmholtz layer. A quantitative analysis of blue-shift, broadening, and asymmetry of the first exciton absorption band also reveals that doping leads to hard segmentation of s-SWNTs with intrinsic undoped segments being separated by randomly distributed charge puddles approximately 4 nm in width. Light absorption in these doped segments is associated with the formation of trions, spatially separated from neutral excitons. Acceleration of exciton decay in doped samples is governed by diffusive exciton transport to, and nonradiative decay at charge puddles within 3.2 ps in moderately doped s-SWNTs. The results suggest that conventional band-filling in s-SWNTs breaks down due to inhomogeneous electrochemical doping.
Script-independent text line segmentation in freestyle handwritten documents.
Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan; Li, Yi
2008-08-01
Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods ( [1], [2] for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.
Linguraru, Marius George; Pura, John A; Chowdhury, Ananda S; Summers, Ronald M
2010-01-01
The interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis (CAD) applications. Diagnosis also relies on the comprehensive analysis of multiple organs and quantitative measures of soft tissue. An automated method optimized for medical image data is presented for the simultaneous segmentation of four abdominal organs from 4D CT data using graph cuts. Contrast-enhanced CT scans were obtained at two phases: non-contrast and portal venous. Intra-patient data were spatially normalized by non-linear registration. Then 4D erosion using population historic information of contrast-enhanced liver, spleen, and kidneys was applied to multi-phase data to initialize the 4D graph and adapt to patient specific data. CT enhancement information and constraints on shape, from Parzen windows, and location, from a probabilistic atlas, were input into a new formulation of a 4D graph. Comparative results demonstrate the effects of appearance and enhancement, and shape and location on organ segmentation.
NASA Astrophysics Data System (ADS)
Irvine, John M.; Ghadar, Nastaran; Duncan, Steve; Floyd, David; O'Dowd, David; Lin, Kristie; Chang, Tom
2017-03-01
Quantitative biomarkers for assessing the presence, severity, and progression of age-related macular degeneration (AMD) would benefit research, diagnosis, and treatment. This paper explores the development of quantitative biomarkers derived from OCT imagery of the retina. OCT images for approximately 75 patients with Wet AMD, Dry AMD, and no AMD (healthy eyes) were analyzed to identify image features indicative of the patients' conditions. OCT image features provide a statistical characterization of the retina. Healthy eyes exhibit a layered structure, whereas chaotic patterns indicate the deterioration associated with AMD. Our approach uses wavelet and Frangi filtering, combined with statistical features that do not rely on image segmentation, to assess patient conditions. Classification analysis indicates clear separability of Wet AMD from other conditions, including Dry AMD and healthy retinas. The probability of correct classification was 95.7%, as determined from cross validation. Similar classification analysis predicts the response of Wet AMD patients to treatment, as measured by the Best Corrected Visual Acuity (BCVA). A statistical model predicts BCVA from the imagery features with R² = 0.846. Initial analysis of OCT imagery indicates that imagery-derived features can provide useful biomarkers for characterization and quantification of AMD: accurate assessment of Wet AMD compared to other conditions; image-based prediction of outcome for Wet AMD treatment; and accurate prediction of BCVA from OCT-derived features. Unlike many methods in the literature, our techniques do not rely on segmentation of the OCT image. Next steps include larger scale testing and validation.
A variational approach to liver segmentation using statistics from multiple sources
NASA Astrophysics Data System (ADS)
Zheng, Shenhai; Fang, Bin; Li, Laquan; Gao, Mingqi; Wang, Yi
2018-01-01
Medical image segmentation plays an important role in digital medical research, and therapy planning and delivery. However, the presence of noise and low contrast renders automatic liver segmentation an extremely challenging task. In this study, we focus on a variational approach to liver segmentation in computed tomography scan volumes in a semiautomatic and slice-by-slice manner. In this method, one slice is selected and its connected component liver region is determined manually to initialize the subsequent automatic segmentation process. From this guiding slice, we execute the proposed method downward to the last slice and upward to the first, respectively. A segmentation energy function is proposed by combining the statistical shape prior, global Gaussian intensity analysis, and an enforced local statistical feature under the level set framework. During segmentation, the liver shape is estimated by minimization of this function. The improved Chan-Vese model is used to refine the shape to capture the long and narrow regions of the liver. The proposed method was verified on two independent public databases, 3D-IRCADb and SLIVER07. Among all the tested methods, our method yielded the best volumetric overlap error (VOE) of 6.5 ± 2.8%, the best root mean square symmetric surface distance (RMSD) of 2.1 ± 0.8 mm, and the best maximum symmetric surface distance (MSD) of 18.9 ± 8.3 mm on the 3D-IRCADb dataset, and the best average symmetric surface distance (ASD) of 0.8 ± 0.5 mm and the best RMSD of 1.5 ± 1.1 mm on the SLIVER07 dataset. The results of the quantitative comparison show that the proposed liver segmentation method achieves competitive segmentation performance with state-of-the-art techniques.
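The headline VOE figure is a simple set-overlap measure on binary volumes; a minimal sketch:

```python
import numpy as np

def volumetric_overlap_error(seg, ref):
    """VOE (%) = 100 * (1 - |A intersect B| / |A union B|) on binary volumes."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    return 100.0 * (1.0 - inter / union)

# 1-D toy "volumes": reference covers voxels 0-9, the segmentation 2-11.
idx = np.arange(20)
ref = idx < 10
seg = (idx >= 2) & (idx < 12)
voe = volumetric_overlap_error(seg, ref)
print(round(voe, 2))   # 33.33 (intersection 8, union 12)
```

A VOE of 0% means perfect overlap with the reference, so the 6.5 ± 2.8% reported above corresponds to livers whose automatic and reference masks agree on the large majority of voxels.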
Rajab, Maher I
2011-11-01
Since the introduction of epiluminescence microscopy (ELM), image analysis tools have been extended to the field of dermatology, in an attempt to algorithmically reproduce clinical evaluation. Accurate image segmentation of skin lesions is one of the key steps for useful, early and non-invasive diagnosis of cutaneous melanomas. This paper proposes two image segmentation algorithms based on frequency domain processing and k-means clustering/fuzzy k-means clustering. The two methods are capable of segmenting and extracting the true border that reveals the global structure irregularity (indentations and protrusions), which may suggest excessive cell growth or regression of a melanoma. As a pre-processing step, Fourier low-pass filtering is applied to reduce the surrounding noise in a skin lesion image. A quantitative comparison of the techniques is enabled by the use of synthetic skin lesion images that model lesions covered with hair to which Gaussian noise is added. The proposed techniques are also compared with an established optimal-based thresholding skin-segmentation method. It is demonstrated that for lesions with a range of different border irregularity properties, the k-means clustering and fuzzy k-means clustering segmentation methods provide the best performance over a range of signal to noise ratios. The proposed segmentation techniques are also demonstrated to have similar performance when tested on real skin lesions representing high-resolution ELM images. This study suggests that the segmentation results obtained using a combination of low-pass frequency filtering and k-means or fuzzy k-means clustering are superior to the result that would be obtained by using k-means or fuzzy k-means clustering segmentation methods alone. © 2011 John Wiley & Sons A/S.
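The two main steps of this pipeline, Fourier low-pass filtering followed by k-means clustering, can be sketched on intensities alone; the cutoff value and the 1-D Lloyd iteration below are illustrative simplifications of the paper's 2-D procedure:

```python
import numpy as np

def fourier_lowpass(img, cutoff=0.15):
    """Zero all spatial frequencies above `cutoff` (cycles/pixel): the
    pre-processing step that suppresses hair-like noise before clustering."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    F[np.sqrt(fx ** 2 + fy ** 2) > cutoff] = 0.0
    return np.fft.ifft2(F).real

def kmeans_1d(values, k=2, iters=20):
    """Plain Lloyd iterations on pixel intensities (lesion vs. skin)."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Two-intensity "lesion" image; with a generous cutoff the filter is a
# near-identity, and k-means recovers the two intensity clusters.
img = np.full((8, 8), 0.1)
img[2:6, 2:6] = 0.9
smoothed = fourier_lowpass(img, cutoff=1.0)
labels, centers = kmeans_1d(smoothed.ravel())
```

On noisy images a tighter cutoff removes high-frequency hair and speckle before clustering, which is the combination the study finds superior to clustering alone.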
Survey of contemporary trends in color image segmentation
NASA Astrophysics Data System (ADS)
Vantaram, Sreenath Rao; Saber, Eli
2012-10-01
In recent years, the acquisition of image and video information for processing, analysis, understanding, and exploitation of the underlying content in various applications, ranging from remote sensing to biomedical imaging, has grown at an unprecedented rate. Analysis by human observers is quite laborious, tiresome, and time consuming, if not infeasible, given the large and continuously rising volume of data. Hence the need for systems capable of automatically and effectively analyzing the aforementioned imagery for a variety of uses that span the spectrum from homeland security to elderly care. In order to achieve the above, tools such as image segmentation provide the appropriate foundation for expediting and improving the effectiveness of subsequent high-level tasks by providing a condensed and pertinent representation of image information. We provide a comprehensive survey of color image segmentation strategies adopted over the last decade, though notable contributions in the gray scale domain will also be discussed. Our taxonomy of segmentation techniques is sampled from a wide spectrum of spatially blind (or feature-based) approaches such as clustering and histogram thresholding as well as spatially guided (or spatial domain-based) methods such as region growing/splitting/merging, energy-driven parametric/geometric active contours, supervised/unsupervised graph cuts, and watersheds, to name a few. In addition, qualitative and quantitative results of prominent algorithms on several images from the Berkeley segmentation dataset are shown in order to furnish a fair indication of the current quality of the state of the art. Finally, we provide a brief discussion on our current perspective of the field as well as its associated future trends.
White matter lesion extension to automatic brain tissue segmentation on MRI.
de Boer, Renske; Vrooman, Henri A; van der Lijn, Fedde; Vernooij, Meike W; Ikram, M Arfan; van der Lugt, Aad; Breteler, Monique M B; Niessen, Wiro J
2009-05-01
A fully automated brain tissue segmentation method is optimized and extended with white matter lesion segmentation. Cerebrospinal fluid (CSF), gray matter (GM) and white matter (WM) are segmented by an atlas-based k-nearest neighbor classifier on multi-modal magnetic resonance imaging data. This classifier is trained by registering brain atlases to the subject. The resulting GM segmentation is used to automatically find a white matter lesion (WML) threshold in a fluid-attenuated inversion recovery scan. False positive lesions are removed by ensuring that the lesions are within the white matter. The method was visually validated on a set of 209 subjects. No segmentation errors were found in 98% of the brain tissue segmentations and 97% of the WML segmentations. A quantitative evaluation using manual segmentations was performed on a subset of 6 subjects for CSF, GM and WM segmentation and an additional 14 for the WML segmentations. The results indicated that the automatic segmentation accuracy is close to the interobserver variability of manual segmentations.
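The atlas-based k-nearest neighbor classification at the core of this method can be sketched as a brute-force k-NN over per-voxel multi-modal intensity features; the feature values and class layout below are illustrative, not taken from the paper:

```python
import numpy as np

def knn_classify(train_x, train_y, test_x, k=3):
    """Brute-force k-NN on multi-modal intensity feature vectors,
    majority vote among the k nearest training samples."""
    d = np.linalg.norm(test_x[:, None, :] - train_x[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(train_y[idx]).argmax() for idx in nearest])

# Toy 2-feature training set (e.g. normalized T1 and FLAIR intensity),
# labeled 0=CSF, 1=GM, 2=WM; in the paper the labels come from registered atlases.
train_x = np.array([[0.10, 0.10], [0.12, 0.15], [0.50, 0.50],
                    [0.55, 0.45], [0.90, 0.80], [0.85, 0.85]])
train_y = np.array([0, 0, 1, 1, 2, 2])
preds = knn_classify(train_x, train_y, np.array([[0.11, 0.12], [0.88, 0.82]]))
print(preds)   # [0 2]
```

The resulting GM class then anchors the WML step: the method derives an intensity threshold from the GM FLAIR distribution and keeps only lesion candidates inside the white matter.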
[Bacterial biofilms on PVC tubing's inner surface of hemodialysis water treatment system].
Yang, Sha; Jia, Ke; Peng, Youming; Liu, Hong; Liu, Yinghong; Chen, Xing; Liu, Fuyou
2009-10-01
To determine the morphology, bacterial content, and endotoxin content of biofilms on the inner surface of PVC tubes in a hemodialysis water treatment system. We dissolved biofilms from tube segments before and after the reverse osmosis machine for bacterial counting and identification, and examined the biofilm structure of these segments by visual inspection and scanning electron microscopy. Biofilms from all 7 segments were dissolved for qualitative and quantitative endotoxin assays. The inner surface of the segment before the reverse osmosis machine was homogeneously covered with deposited activated carbon powder; the segment after the reverse osmosis machine appeared normal. Under scanning electron microscopy, a biofilm with a continuous surface and sandwich structure, formed by clustered bacilli, activated carbon powder, and some cocci, was found on the inner surface of the segment before the reverse osmosis machine. Bacteria of the same shape and length were found on the segment after the reverse osmosis machine, but fewer and more loosely distributed. Bacterial culture and identification showed that the former were mostly gram-negative bacilli, while the latter contained only a few micrococci. Biofilm endotoxin was between 2.0 EU/mL and 4.0 EU/mL. Quantitative assay showed: segment after softener (2.821 ± 0.807) EU/mL; segment after activated charcoal canister (3.635 ± 0.427) EU/mL; segment before reverse osmosis machine (3.687 ± 0.271) EU/mL; segment after reverse osmosis machine (2.041 ± 0.295) EU/mL; exit of power pump (1.983 ± 0.390) EU/mL; the 1st dead space (2.373 ± 0.535) EU/mL; and the 2nd dead space (2.858 ± 0.690) EU/mL. Biofilms are found on the inner surfaces of the segments before and after the reverse osmosis machine. Endotoxin levels from high to low are as follows: segment before reverse osmosis machine, segment after activated charcoal canister, the 2nd dead space, segment after softener, the 1st dead space, segment after reverse osmosis machine, exit of power pump.
The character of the bacteria and endotoxin of the biofilm can help us find better ways to control them.
Wang, Rui; Meinel, Felix G; Schoepf, U Joseph; Canstein, Christian; Spearman, James V; De Cecco, Carlo N
2015-12-01
To evaluate the accuracy, reliability and time-saving potential of a novel cardiac CT (CCT)-based, automated software for the assessment of segmental left ventricular function compared to visual and manual quantitative assessment of CCT and cardiac magnetic resonance (CMR). Forty-seven patients with suspected or known coronary artery disease (CAD) were enrolled in the study. Wall thickening was calculated. Segmental LV wall motion was automatically calculated and shown as a colour-coded polar map. Processing time for each method was recorded. Mean wall thickness in both systolic and diastolic phases on polar map, CCT, and CMR was 9.2 ± 0.1 mm and 14.9 ± 0.2 mm, 8.9 ± 0.1 mm and 14.5 ± 0.1 mm, 8.3 ± 0.1 mm and 13.6 ± 0.1 mm, respectively. Mean wall thickening was 68.4 ± 1.5 %, 64.8 ± 1.4 % and 67.1 ± 1.4 %, respectively. Agreement for the assessment of LV wall motion between CCT, CMR and polar maps was good. Bland-Altman plots and ICC indicated good agreement between CCT, CMR and automated polar maps of the diastolic and systolic segmental wall thickness and thickening. The processing time using the polar map was significantly decreased compared with CCT and CMR. Automated evaluation of segmental LV function with polar maps provides similar measurements to manual CCT and CMR evaluation, albeit with substantially reduced analysis time. • Cardiac computed tomography (CCT) can accurately assess segmental left ventricular wall function. • A novel automated software permits accurate and fast evaluation of wall function. • The software may improve the clinical implementation of segmental functional analysis.
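The wall-thickening figures quoted above derive from a simple ratio. A minimal sketch of the conventional definition in Python (an assumption for illustration; the abstract does not state the software's exact formula):

```python
def wall_thickening_percent(thickness_diastole_mm, thickness_systole_mm):
    # Conventional definition: percent increase of systolic over diastolic
    # wall thickness. Illustrative assumption; the abstract does not give
    # the exact formula used by the automated software.
    return 100.0 * (thickness_systole_mm - thickness_diastole_mm) / thickness_diastole_mm
```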
Ureter tracking and segmentation in CT urography (CTU) using COMPASS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadjiiski, Lubomir, E-mail: lhadjisk@umich.edu; Zick, David; Chan, Heang-Ping
2014-12-15
Purpose: The authors are developing a computerized system for automated segmentation of ureters in CTU, referred to as the combined model-guided path-finding analysis and segmentation system (COMPASS). Ureter segmentation is a critical component for computer-aided diagnosis of ureter cancer. Methods: COMPASS consists of three stages: (1) rule-based adaptive thresholding and region growing, (2) path-finding and propagation, and (3) edge profile extraction and feature analysis. With institutional review board approval, 79 CTU scans performed with intravenous (IV) contrast material enhancement were collected retrospectively from 79 patient files. One hundred twenty-four ureters were selected from the 79 CTU volumes. On average, the ureters spanned 283 computed tomography slices (range: 116–399, median: 301). More than half of the ureters contained malignant or benign lesions and some had ureter wall thickening due to malignancy. A starting point for each of the 124 ureters was identified manually to initialize the tracking by COMPASS. In addition, the centerline of each ureter was manually marked and used as the reference standard for evaluation of tracking performance. The performance of COMPASS was quantitatively assessed by estimating the percentage of the length that was successfully tracked and segmented for each ureter and by estimating the average distance and the average maximum distance between the computer-generated and the manually tracked centerlines. Results: Of the 124 ureters, 120 (97%) were segmented completely (100%), 121 (98%) were segmented through at least 70%, and 123 (99%) were segmented through at least 50% of their length. In comparison, using our previous method, 85 (69%) ureters were segmented completely (100%), 100 (81%) were segmented through at least 70%, and 107 (86%) were segmented through at least 50% of their length.
With COMPASS, the average distance between the computer and the manually generated centerlines is 0.54 mm, and the average maximum distance is 2.02 mm. With our previous method, the average distance between the centerlines was 0.80 mm, and the average maximum distance was 3.38 mm. The improvements in the ureteral tracking length and both distance measures were statistically significant (p < 0.0001). Conclusions: COMPASS significantly improved ureter tracking, including regions across ureter lesions, wall thickening, and narrowing of the lumen.
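The two centerline distance measures reported above (average distance and maximum distance between computer-generated and manually tracked centerlines) can be sketched as a nearest-point metric. This is an illustrative reconstruction, not necessarily the authors' exact implementation:

```python
import math

def centerline_distances(computed, reference):
    # Distance from each computed centerline point to the nearest reference
    # centerline point; returns (average distance, maximum distance).
    # Points are (x, y, z) tuples in mm. Generic nearest-point metric
    # sketched from the abstract.
    def nearest(p):
        return min(math.dist(p, q) for q in reference)
    d = [nearest(p) for p in computed]
    return sum(d) / len(d), max(d)
```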
Interactive Volumetry Of Liver Ablation Zones.
Egger, Jan; Busse, Harald; Brandmaier, Philipp; Seider, Daniel; Gawlitza, Matthias; Strocka, Steffen; Voglreiter, Philip; Dokter, Mark; Hofmann, Michael; Kainz, Bernhard; Hann, Alexander; Chen, Xiaojun; Alhonnoro, Tuomas; Pollari, Mika; Schmalstieg, Dieter; Moche, Michael
2015-10-20
Percutaneous radiofrequency ablation (RFA) is a minimally invasive technique that destroys cancer cells by heat. The heat results from focusing energy in the radiofrequency spectrum through a needle. Among other benefits, this enables the treatment of patients who are not eligible for open surgery. However, the possibility of recurrent liver cancer due to incomplete ablation of the tumor makes post-interventional monitoring via regular follow-up scans mandatory. These scans have to be carefully inspected for any conspicuous findings. Within this study, the RF ablation zones from twelve post-interventional CT acquisitions were segmented semi-automatically to support the visual inspection. An interactive, graph-based contouring approach, which prefers spherically shaped regions, was applied. For the quantitative and qualitative analysis of the algorithm's results, manual slice-by-slice segmentations produced by clinical experts were used as the gold standard (and were also compared among each other). As the evaluation metric for the statistical validation, the Dice Similarity Coefficient (DSC) was calculated. The results show that the proposed tool provides lesion segmentation with sufficient accuracy much faster than manual segmentation. The visual feedback and interactivity make the proposed tool well suited to the clinical workflow.
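The Dice Similarity Coefficient used for validation has a standard definition, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch over flat binary masks:

```python
def dice_coefficient(seg_a, seg_b):
    # Dice Similarity Coefficient between two flat binary masks:
    # DSC = 2|A ∩ B| / (|A| + |B|), where |.| counts foreground voxels.
    inter = sum(1 for a, b in zip(seg_a, seg_b) if a and b)
    return 2.0 * inter / (sum(map(bool, seg_a)) + sum(map(bool, seg_b)))
```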
Quantitative Immunofluorescence Analysis of Nucleolus-Associated Chromatin.
Dillinger, Stefan; Németh, Attila
2016-01-01
The nuclear distribution of eu- and heterochromatin is nonrandom, heterogeneous, and dynamic, which is mirrored by specific spatiotemporal arrangements of histone posttranslational modifications (PTMs). Here we describe a semiautomated method for the analysis of histone PTM localization patterns within the mammalian nucleus, using confocal laser scanning microscope images of fixed, immunofluorescence-stained cells as the data source. The ImageJ-based process includes segmentation of the nucleus, as well as measurements of total fluorescence intensities, the heterogeneity of the staining, and the frequency of the brightest pixels in the region of interest (ROI). In the presented image analysis pipeline, the perinucleolar chromatin is selected as the primary ROI, and the nuclear periphery as the secondary ROI.
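The three per-ROI measurements named above can be sketched as follows; the heterogeneity measure (coefficient of variation) and the brightness threshold are assumed stand-ins, as the abstract does not give exact formulas:

```python
import statistics

def roi_measurements(pixels, roi, bright_threshold):
    # Per-ROI measurements mirroring the described pipeline: total
    # fluorescence intensity, staining heterogeneity (coefficient of
    # variation, an assumed stand-in), and the fraction of the image's
    # brightest pixels (>= bright_threshold) that fall inside the ROI.
    # `pixels` is a flat intensity list, `roi` a same-length boolean mask.
    inside = [v for v, m in zip(pixels, roi) if m]
    total = sum(inside)
    heterogeneity = statistics.pstdev(inside) / statistics.fmean(inside)
    bright = [v >= bright_threshold for v in pixels]
    bright_in_roi = sum(b and m for b, m in zip(bright, roi)) / sum(bright)
    return total, heterogeneity, bright_in_roi
```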
Quantitative learning strategies based on word networks
NASA Astrophysics Data System (ADS)
Zhao, Yue-Tian-Yi; Jia, Zi-Yang; Tang, Yong; Xiong, Jason Jie; Zhang, Yi-Cheng
2018-02-01
Learning English requires considerable effort, but the way that vocabulary is introduced in textbooks is not optimized for learning efficiency. With the increasing population of English learners, optimizing the learning process will have a significant impact on English learning and teaching. Recent developments in big data analysis and complex network science provide additional opportunities to design and further investigate strategies for English learning. In this paper, quantitative English learning strategies based on a word network and word usage information are proposed. The strategies integrate word frequency with topological structural information. By analyzing the influence of connected learned words, the learning weights for the unlearned words and the dynamic updating of the network are studied and analyzed. The results suggest that quantitative strategies significantly improve learning efficiency while maintaining effectiveness. In particular, the optimized-weight-first strategy and segmented strategies outperform other strategies. The results provide opportunities for researchers and practitioners to reconsider the way English is taught and to design vocabularies quantitatively by balancing efficiency and learning costs based on the word network.
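The idea of weighting an unlearned word by its frequency plus the influence of already-learned neighbours can be sketched as a toy scoring function; the combination rule and the `alpha` parameter are illustrative assumptions, not the paper's actual weighting scheme:

```python
def learning_weight(word, frequency, network, learned, alpha=0.5):
    # Toy score for an unlearned word: its usage frequency plus a bonus
    # proportional to how many of its neighbours in the word co-occurrence
    # network have already been learned. `alpha` is a hypothetical
    # trade-off parameter introduced here for illustration.
    known_neighbours = sum(1 for w in network.get(word, ()) if w in learned)
    return frequency[word] + alpha * known_neighbours
```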
Quantitative Imaging In Pathology (QUIP) | Informatics Technology for Cancer Research (ITCR)
This site hosts web-accessible applications, tools and data designed to support analysis, management, and exploration of whole slide tissue images for cancer research. The following tools are included: caMicroscope: A digital pathology data management and visualization platform that enables interactive viewing of whole slide tissue images and segmentation results. caMicroscope can also be used independently of QUIP. FeatureExplorer: An interactive tool to allow patient-level feature exploration across multiple dimensions.
van Dijk, R; van Assen, M; Vliegenthart, R; de Bock, G H; van der Harst, P; Oudkerk, M
2017-11-27
Stress cardiovascular magnetic resonance (CMR) perfusion imaging is a promising modality for the evaluation of coronary artery disease (CAD) due to high spatial resolution and absence of radiation. Semi-quantitative and quantitative analysis of CMR perfusion are based on signal-intensity curves produced during the first pass of gadolinium contrast. Multiple semi-quantitative and quantitative parameters have been introduced. Diagnostic performance of these parameters varies extensively among studies and standardized protocols are lacking. This study aims to determine the diagnostic accuracy of semi-quantitative and quantitative CMR perfusion parameters, compared to multiple reference standards. Pubmed, WebOfScience, and Embase were systematically searched using predefined criteria (3272 articles). A check for duplicates was performed (1967 articles). Eligibility and relevance of the articles was determined by two reviewers using pre-defined criteria. The primary data extraction was performed independently by two researchers with the use of a predefined template. Differences in extracted data were resolved by discussion between the two researchers. The quality of the included studies was assessed using the 'Quality Assessment of Diagnostic Accuracy Studies Tool' (QUADAS-2). True positives, false positives, true negatives, and false negatives were extracted or calculated from the articles. The principal summary measures used to assess diagnostic accuracy were sensitivity, specificity, and area under the receiver operating curve (AUC). Data were pooled according to analysis territory, reference standard and perfusion parameter. Twenty-two articles were eligible based on the predefined study eligibility criteria. The pooled diagnostic accuracy for segment-, territory- and patient-based analyses showed good diagnostic performance with sensitivity of 0.88, 0.82, and 0.83, specificity of 0.72, 0.83, and 0.76 and AUC of 0.90, 0.84, and 0.87, respectively.
In the per-territory analysis, our results show similar diagnostic accuracy for anatomical (AUC 0.86 (0.83-0.89)) and functional reference standards (AUC 0.88 (0.84-0.90)). Only the sensitivity in the per-territory analysis did not show significant heterogeneity. None of the groups showed signs of publication bias. The clinical value of semi-quantitative and quantitative CMR perfusion analysis remains uncertain due to extensive inter-study heterogeneity and large differences in CMR perfusion acquisition protocols, reference standards, and methods of assessment of myocardial perfusion parameters. For widespread implementation, standardization of CMR perfusion techniques is essential. CRD42016040176.
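The per-study quantities pooled in the meta-analysis follow from the 2x2 confusion counts. A minimal sketch (the example counts are chosen to mirror the segment-based pooled values, not taken from any single study):

```python
def diagnostic_accuracy(tp, fp, tn, fn):
    # Sensitivity and specificity from the 2x2 confusion counts
    # (true/false positives and negatives) extracted from each study.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```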
Anguera, M Teresa; Portell, Mariona; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana
2018-01-01
Indirect observation is a recent concept in systematic observation. It largely involves analyzing textual material generated either indirectly from transcriptions of audio recordings of verbal behavior in natural settings (e.g., conversation, group discussions) or directly from narratives (e.g., letters of complaint, tweets, forum posts). It may also feature seemingly unobtrusive objects that can provide relevant insights into daily routines. All these materials constitute an extremely rich source of information for studying everyday life, and they are continuously growing with the burgeoning of new technologies for data recording, dissemination, and storage. Narratives are an excellent vehicle for studying everyday life, and quantitization is proposed as a means of integrating qualitative and quantitative elements. However, this analysis requires a structured system that enables researchers to analyze varying forms and sources of information objectively. In this paper, we present a methodological framework detailing the steps and decisions required to quantitatively analyze a set of data that was originally qualitative. We provide guidelines on study dimensions, text segmentation criteria, ad hoc observation instruments, data quality controls, and coding and preparation of text for quantitative analysis. The quality control stage is essential to ensure that the code matrices generated from the qualitative data are reliable. We provide examples of how an indirect observation study can produce data for quantitative analysis and also describe the different software tools available for the various stages of the process. The proposed method is framed within a specific mixed methods approach that involves collecting qualitative data and subsequently transforming these into matrices of codes (not frequencies) for quantitative analysis to detect underlying structures and behavioral patterns. 
The data collection and quality control procedures fully meet the requirement of flexibility and provide new perspectives on data integration in the study of biopsychosocial aspects in everyday contexts.
Faster embryonic segmentation through elevated Delta-Notch signalling
Liao, Bo-Kai; Jörg, David J.; Oates, Andrew C.
2016-01-01
An important step in understanding biological rhythms is the control of period. A multicellular, rhythmic patterning system termed the segmentation clock is thought to govern the sequential production of the vertebrate embryo's body segments, the somites. Several genetic loss-of-function conditions, including the Delta-Notch intercellular signalling mutants, result in slower segmentation. Here, we generate DeltaD transgenic zebrafish lines with a range of copy numbers and correspondingly increased signalling levels, and observe faster segmentation. The highest-expressing line shows an altered oscillating gene expression wave pattern and shortened segmentation period, producing embryos with more, shorter body segments. Our results reveal surprising differences in how Notch signalling strength is quantitatively interpreted in different organ systems, and suggest a role for intercellular communication in regulating the output period of the segmentation clock by altering its spatial pattern. PMID:27302627
Andriantahina, Farafidy; Liu, Xiaolin; Huang, Hao
2013-01-01
Growth is a priority trait from the point of view of genetic improvement. Molecular markers linked to quantitative trait loci (QTL) have been regarded as useful for marker-assisted selection (MAS) of complex traits such as growth. Using an intermediate F2 cross of slow- and fast-growth parents, a genetic linkage map of the Pacific whiteleg shrimp, Litopenaeus vannamei, based on amplified fragment length polymorphism (AFLP) and simple sequence repeat (SSR) markers was constructed. Meanwhile, QTL analysis was performed for growth-related traits. The linkage map consisted of 451 marker loci (429 AFLPs and 22 SSRs) which formed 49 linkage groups with an average marker spacing of 7.6 cM; they spanned a total length of 3627.6 cM, covering 79.50% of the estimated genome size. Fourteen QTLs were identified for growth-related traits, including three QTLs each for body weight (BW), total length (TL) and partial carapace length (PCL), two QTLs for body length (BL), and one QTL each for first abdominal segment depth (FASD), third abdominal segment depth (TASD) and first abdominal segment width (FASW), which explained 2.62 to 61.42% of phenotypic variation. Moreover, a comparison of linkage maps between L. vannamei and Penaeus japonicus was performed, providing new insight into the genetic basis of QTL affecting growth-related traits. The new results will be useful for conducting MAS breeding schemes in L. vannamei. PMID:24086466
Wolak, Arik; Slomka, Piotr J; Fish, Mathews B; Lorenzo, Santiago; Berman, Daniel S; Germano, Guido
2008-06-01
Attenuation correction (AC) for myocardial perfusion SPECT (MPS) had not been evaluated separately in women despite specific considerations in this group because of breast photon attenuation. We aimed to evaluate the performance of AC in women by using automated quantitative analysis of MPS to avoid any bias. Consecutive female patients (134 with a low likelihood (LLk) of coronary artery disease (CAD) and 114 with coronary angiography performed within less than 3 mo of MPS) who were referred for rest-stress electrocardiography-gated 99mTc-sestamibi MPS with AC were considered. Imaging data were evaluated for contour quality control. An additional 50 LLk studies in women were used to create equivalent normal limits for studies with AC and with no correction (NC). An experienced technologist unaware of the angiography and other results performed the contour quality control. All other processing was performed in a fully automated manner. Quantitative analysis was performed with the Cedars-Sinai myocardial perfusion analysis package. All automated segmental analyses were performed with the 17-segment, 5-point American Heart Association model. Summed stress scores (SSS) of ≥3 were considered abnormal. CAD (≥70% stenosis) was present in 69 of 114 patients (60%). The normalcy rates were 93% for both NC and AC studies. The SSS for patients with CAD and without CAD for NC versus AC were 10.0 +/- 9.0 (mean +/- SD) versus 10.2 +/- 8.5 and 1.6 +/- 2.3 versus 1.8 +/- 2.5, respectively; P was not significant (NS) for all comparisons of NC versus AC. The SSS for LLk patients for NC versus AC were 0.51 +/- 1.0 versus 0.6 +/- 1.1, respectively; P was NS. The specificity for both NC and AC was 73%. The sensitivities for NC and AC were 80% and 81%, respectively, and the accuracies for NC and AC were 77% and 78%, respectively; P was NS for both comparisons.
There are no significant diagnostic differences between automated quantitative MPS analyses performed in studies processed with and without AC in women.
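The scoring convention described above (17-segment, 5-point model with SSS ≥ 3 considered abnormal) can be sketched as:

```python
def summed_stress_score(segment_scores):
    # Summed stress score over the 17-segment, 5-point (0-4) American Heart
    # Association model; the study classified SSS >= 3 as abnormal.
    assert len(segment_scores) == 17
    sss = sum(segment_scores)
    return sss, sss >= 3
```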
Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun
2018-05-01
Thermographic inspection has been widely applied to non-destructive testing and evaluation with the capabilities of rapid, contactless, and large surface area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high-level performance, specific physics-based models that describe defects generation and enable the precise extraction of target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns from unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold to render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography will be implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index F-score has been adopted to objectively evaluate the performance of different segmentation algorithms.
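The F-score adopted above as the global quantitative assessment index has the standard precision/recall form. A minimal sketch:

```python
def f_score(tp, fp, fn):
    # F-score: harmonic mean of precision and recall, computed from
    # true-positive, false-positive and false-negative pixel counts of a
    # segmentation result against ground truth.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```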
Subject-Specific Sparse Dictionary Learning for Atlas-Based Brain MRI Segmentation.
Roy, Snehashis; He, Qing; Sweeney, Elizabeth; Carass, Aaron; Reich, Daniel S; Prince, Jerry L; Pham, Dzung L
2015-09-01
Quantitative measurements from segmentations of human brain magnetic resonance (MR) images provide important biomarkers for normal aging and disease progression. In this paper, we propose a patch-based tissue classification method from MR images that uses a sparse dictionary learning approach and atlas priors. Training data for the method consists of an atlas MR image, prior information maps depicting where different tissues are expected to be located, and a hard segmentation. Unlike most atlas-based classification methods that require deformable registration of the atlas priors to the subject, only affine registration is required between the subject and training atlas. A subject-specific patch dictionary is created by learning relevant patches from the atlas. Then the subject patches are modeled as sparse combinations of learned atlas patches leading to tissue memberships at each voxel. The combination of prior information in an example-based framework enables us to distinguish tissues having similar intensities but different spatial locations. We demonstrate the efficacy of the approach on the application of whole-brain tissue segmentation in subjects with healthy anatomy and normal pressure hydrocephalus, as well as lesion segmentation in multiple sclerosis patients. For each application, quantitative comparisons are made against publicly available state-of-the art approaches.
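The core step, modeling a subject patch as a sparse combination of learned atlas patches, can be sketched with a greedy orthogonal matching pursuit; this is a minimal illustration of the idea, not the authors' implementation:

```python
import numpy as np

def sparse_code(patch, dictionary, n_nonzero):
    # Greedy orthogonal matching pursuit: approximate `patch` as a sparse
    # combination of the dictionary's columns (atlas patches). At each step
    # the atom most correlated with the residual is added to the support,
    # then coefficients are refit by least squares.
    patch = np.asarray(patch, dtype=float)
    residual = patch.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        correlations = dictionary.T @ residual
        support.append(int(np.argmax(np.abs(correlations))))
        atoms = dictionary[:, support]
        coef, *_ = np.linalg.lstsq(atoms, patch, rcond=None)
        residual = patch - atoms @ coef
    code = np.zeros(dictionary.shape[1])
    code[support] = coef
    return code
```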
Kalpathy-Cramer, Jayashree; Awan, Musaddiq; Bedrick, Steven; Rasch, Coen R N; Rosenthal, David I; Fuller, Clifton D
2014-02-01
Modern radiotherapy requires accurate region of interest (ROI) inputs for plan optimization and delivery. Target delineation, however, remains operator-dependent and potentially serves as a major source of treatment delivery error. In order to optimize this critical, yet observer-driven process, a flexible web-based platform for individual and cooperative target delineation analysis and instruction was developed in order to meet the following unmet needs: (1) an open-source/open-access platform for automated/semiautomated quantitative interobserver and intraobserver ROI analysis and comparison, (2) a real-time interface for radiation oncology trainee online self-education in ROI definition, and (3) a source for pilot data to develop and validate quality metrics for institutional and cooperative group quality assurance efforts. The resultant software, Target Contour Testing/Instructional Computer Software (TaCTICS), developed using Ruby on Rails, has since been implemented and proven flexible, feasible, and useful in several distinct analytical and research applications.
Quantitative analysis of the chromatin of lymphocytes: an assay on comparative structuralism.
Meyer, F
1980-01-01
With 26 letters we can form all the words we use, and with a few words it is possible to form an infinite number of different meaningful sentences. In our case, the letters will be a few simple neighborhood image transformations and area measurements. The paper shows how, by iterating these transformations, it is possible to obtain a good quantitative description of the nuclear structure of Feulgen-stained lymphocytes (CLL and normal). The fact that we restricted ourselves to a small number of image transformations made it possible to construct an image analysis system (TAS) able to do these transformations very quickly. We will see, successively, how to segment the nucleus itself, the chromatin, and the interchromatinic channels, how openings and closings lead to size and spatial distribution curves, and how skeletons may be used for measuring the lengths of interchromatinic channels.
An open tool for input function estimation and quantification of dynamic PET FDG brain scans.
Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro
2016-08-01
Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. 
The main contribution of this article is the development of an open-source, free to use tool that encapsulates several well-known methods for the estimation of the input function and the quantification of dynamic PET FDG studies. Some alternative strategies are also proposed and implemented in the tool for the segmentation of blood pools and parameter estimation. The tool was tested on phantoms with encouraging results that suggest that even bloodless estimators could provide a viable alternative to blood sampling for quantification using graphical analysis. The open tool is a promising opportunity for collaboration among investigators and further validation on real studies.
Quantification of esophageal wall thickness in CT using atlas-based segmentation technique
NASA Astrophysics Data System (ADS)
Wang, Jiahui; Kang, Min Kyu; Kligerman, Seth; Lu, Wei
2015-03-01
Esophageal wall thickness is an important predictor of esophageal cancer response to therapy. In this study, we developed a computerized pipeline for quantification of esophageal wall thickness using computed tomography (CT). We first segmented the esophagus using a multi-atlas-based segmentation scheme. The esophagus in each atlas CT was manually segmented to create a label map. Using image registration, all of the atlases were aligned to the imaging space of the target CT. The deformation field from the registration was applied to the label maps to warp them to the target space. A weighted majority-voting label fusion was employed to create the segmentation of the esophagus. Finally, we excluded the lumen from the esophagus using a threshold of -600 HU and measured the esophageal wall thickness. The developed method was tested on a dataset of 30 CT scans, including 15 esophageal cancer patients and 15 normal controls. The mean Dice similarity coefficient (DSC) and mean absolute distance (MAD) between the segmented esophagus and the reference standard were employed to evaluate the segmentation results. Our method achieved a mean Dice coefficient of 65.55 ± 10.48% and mean MAD of 1.40 ± 1.31 mm for all the cases. The mean esophageal wall thickness of cancer patients and normal controls was 6.35 ± 1.19 mm and 6.03 ± 0.51 mm, respectively. We conclude that the proposed method can perform quantitative analysis of esophageal wall thickness and would be useful for tumor detection and tumor response evaluation of esophageal cancer.
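The weighted majority-voting label fusion step can be sketched for binary label maps as follows (an illustrative simplification; in the paper the per-atlas weights come from the registration step):

```python
def weighted_majority_vote(label_maps, weights):
    # Fuse aligned atlas label maps (flat binary lists) into one
    # segmentation: a voxel becomes foreground when the summed weight of
    # atlases voting foreground exceeds half of the total weight.
    total = sum(weights)
    fused = []
    for voxel_labels in zip(*label_maps):
        score = sum(w for lab, w in zip(voxel_labels, weights) if lab)
        fused.append(1 if score > 0.5 * total else 0)
    return fused
```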
The Marriage of Two Opposing Cultures
ERIC Educational Resources Information Center
Loubriel, Luis
2007-01-01
Dominated by its technical/empirical aspects, the segmented performance, pedagogy, and assessment of Western classical music is undermining its goal of creating art with precision, style, and expressive beauty. This segmentation has its roots in the quantitative assessment processes found in music education and in the note-perfect…
Fu, Xin; Yuan, Jun
2017-07-24
Coherent x-ray diffraction investigations of Ag five-fold twinned nanowires (FTNWs) have drawn controversial conclusions concerning whether the intrinsic 7.35° angular gap is compensated homogeneously through phase transformation or inhomogeneously by forming a disclination strain field. In those studies, the x-ray techniques provided only an ensemble average of the structural information from all the Ag nanowires. Here, using a three-dimensional (3D) electron diffraction mapping approach, we non-destructively explore the cross-sectional strain and the related strain-relief defect structures of an individual Ag FTNW with a diameter of about 30 nm. Quantitative analysis of the fine structure of the intensity distribution, combined with kinematic electron diffraction simulation, confirms that for such a Ag FTNW the intrinsic 7.35° angular deficiency results in an inhomogeneous strain field within each single-crystalline segment, consistent with the disclination model of stress relief. Moreover, the five crystalline segments are found to be strained differently. Modeling analysis in combination with system energy calculation further indicates that the elastic strain energy within some crystalline segments could be partially relieved by the creation of stacking fault layers near the twin boundaries. Our study demonstrates that 3D electron diffraction mapping is a powerful tool for the cross-sectional strain analysis of complex 1D nanostructures.
NASA Astrophysics Data System (ADS)
Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi
2013-02-01
The combination of fluorescent contrast agents with microscopy is a powerful technique to obtain real time images of tissue histology without the need for fixing, sectioning, and staining. The potential of this technology lies in the identification of robust methods for image segmentation and quantitation, particularly in heterogeneous tissues. Our solution is to apply sparse decomposition (SD) to monochrome images of fluorescently-stained microanatomy to segment and quantify distinct tissue types. The clinical utility of our approach is demonstrated by imaging excised margins in a cohort of mice after surgical resection of a sarcoma. Representative images of excised margins were used to optimize the formulation of SD and tune parameters associated with the algorithm. Our results demonstrate that SD is a robust solution that can advance vital fluorescence microscopy as a clinically significant technology.
Semi-automated brain tumor and edema segmentation using MRI.
Xie, Kai; Yang, Jie; Zhang, Z G; Zhu, Y M
2005-10-01
Manual segmentation of brain tumors from magnetic resonance images is a challenging and time-consuming task. A semi-automated method has been developed for brain tumor and edema segmentation that provides objective, reproducible segmentations close to the manual results. Additionally, the method segments non-enhancing brain tumor and edema from healthy tissues in magnetic resonance images. In this study, a semi-automated method was developed for brain tumor and edema segmentation and volume measurement using magnetic resonance imaging (MRI). Several novel algorithms for tumor segmentation from MRI were integrated in this medical diagnosis system. We exploit a hybrid level set (HLS) segmentation method driven simultaneously by region and boundary information: region information serves as a propagation force, which is robust, while boundary information serves as a stopping functional, which is accurate. Ten patients with brain tumors of different size, shape and location were selected; a total of 246 axial tumor-containing slices obtained from the 10 patients were used to evaluate the effectiveness of the segmentation methods. The method was applied to 10 non-enhancing brain tumors and satisfactory results were achieved. Two quantitative measures of tumor segmentation quality, namely the correspondence ratio (CR) and percent matching (PM), were computed. For the segmentation of brain tumor, the volume total PM varied from 79.12 to 93.25% with a mean of 85.67+/-4.38%, while the volume total CR varied from 0.74 to 0.91 with a mean of 0.84+/-0.07. For the segmentation of edema, the volume total PM varied from 72.86 to 87.29% with a mean of 79.54+/-4.18%, while the volume total CR varied from 0.69 to 0.85 with a mean of 0.79+/-0.08. The HLS segmentation method performed better than the classical level set (LS) segmentation method in both PM and CR.
The results of this research may have potential applications both as a staging procedure and as a method of evaluating tumor response during treatment; the method can be used as a clinical image analysis tool by doctors or radiologists.
Merlos, Pilar; López-Lereu, Maria P; Monmeneu, Jose V; Sanchis, Juan; Núñez, Julio; Bonanad, Clara; Valero, Ernesto; Miñana, Gema; Chaustre, Fabián; Gómez, Cristina; Oltra, Ricardo; Palacios, Lorena; Bosch, Maria J; Navarro, Vicente; Llácer, Angel; Chorro, Francisco J; Bodí, Vicente
2013-08-01
A variety of cardiac magnetic resonance indexes predict mid-term prognosis in ST-segment elevation myocardial infarction patients. The extent of transmural necrosis permits simple and accurate prediction of systolic recovery. However, its long-term prognostic value beyond a comprehensive clinical and cardiac magnetic resonance evaluation is unknown. We hypothesized that a simple semiquantitative assessment of the extent of transmural necrosis is the best resonance index to predict long-term outcome soon after a first ST-segment elevation myocardial infarction. One week after a first ST-segment elevation myocardial infarction we carried out a comprehensive quantification of several resonance parameters in 206 consecutive patients. A semiquantitative assessment (number of altered segments in the 17-segment model) of edema, baseline and post-dobutamine wall motion abnormalities, first-pass perfusion, microvascular obstruction, and the extent of transmural necrosis was also performed. During follow-up (median 51 months), 29 patients suffered a major adverse cardiac event (8 cardiac deaths, 11 nonfatal myocardial infarctions, and 10 readmissions for heart failure). Major cardiac events were associated with more severely altered quantitative and semiquantitative resonance indexes. After comprehensive multivariate adjustment, the extent of transmural necrosis was the only resonance index independently related to the major cardiac event rate (hazard ratio=1.34 [1.19-1.51] per additional segment displaying >50% transmural necrosis, P<.001). A simple and non-time-consuming semiquantitative analysis of the extent of transmural necrosis is the most powerful cardiac magnetic resonance index for predicting long-term outcome soon after a first ST-segment elevation myocardial infarction. Copyright © 2013 Sociedad Española de Cardiología. Published by Elsevier Espana. All rights reserved.
An improved approach for the segmentation of starch granules in microscopic images
2010-01-01
Background Starches are the main storage polysaccharides in plants and are distributed widely throughout plants, including seeds, roots, tubers, leaves and stems. Currently, microscopic observation is one of the most important ways to investigate and analyze the structure of starches. The position, shape, and size of the starch granules are the main measurements for quantitative analysis. In order to obtain these measurements, segmentation of starch granules from the background is very important. However, automatic segmentation of starch granules is still a challenging task because of the limitations of imaging conditions and the complex scenarios of overlapping granules. Results We propose a novel method to segment starch granules in microscopic images. In the proposed method, we first separate starch granules from the background using automatic thresholding and then roughly segment the image using the watershed algorithm. In order to reduce the oversegmentation of the watershed algorithm, we use the roundness of each segment and analyze the gradient vector field to find critical points and thereby identify oversegments. After oversegments are found, we extract features such as the position and intensity of the oversegments, and use fuzzy c-means clustering to merge the oversegments into objects with similar features. Experimental results demonstrate that the proposed method successfully alleviates the oversegmentation of the watershed algorithm. Conclusions We present a new scheme for starch granule segmentation that aims to alleviate oversegmentation in the watershed algorithm. We use the shape information and critical points of the gradient vector flow (GVF) of starch granules to identify oversegments, and use fuzzy c-means clustering based on prior knowledge to merge these oversegments into objects. Experimental results on twenty microscopic starch images demonstrate the effectiveness of the proposed scheme. PMID:21047380
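The first two steps of this pipeline, automatic thresholding followed by marker-based watershed, can be sketched with scikit-image. This is a simplified stand-in that omits the paper's GVF critical-point analysis and fuzzy c-means merging; the 0.6 marker factor is an arbitrary illustrative choice, not from the paper.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def segment_granules(gray):
    """Rough granule segmentation: Otsu thresholding to separate
    foreground from background, then a marker-based watershed on the
    distance transform of the foreground mask."""
    mask = gray > threshold_otsu(gray)
    dist = ndi.distance_transform_edt(mask)
    # Near-maximal interior regions of the distance map act as markers;
    # the 0.6 factor is an illustrative choice
    markers, _ = ndi.label(dist > 0.6 * dist.max())
    return watershed(-dist, markers, mask=mask)
```

Oversegments produced by such a step would then, as the abstract describes, be detected via shape roundness and GVF critical points and merged by fuzzy c-means clustering.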
Automated segmentation of serous pigment epithelium detachment in SD-OCT images
NASA Astrophysics Data System (ADS)
Sun, Zhuli; Shi, Fei; Xiang, Dehui; Chen, Haoyu; Chen, Xinjian
2015-03-01
Pigment epithelium detachment (PED) is an important clinical manifestation of multiple chorio-retinal disease processes and can cause the loss of central vision. A 3-D method is proposed to automatically segment serous PED in SD-OCT images. The proposed method consists of five steps: first, a curvature anisotropic diffusion filter is applied to remove speckle noise. Second, the graph search method is applied for abnormal retinal layer segmentation associated with retinal pigment epithelium (RPE) deformation. During this process, Bruch's membrane, which is not visible in the SD-OCT images, is estimated with a convex hull algorithm. Third, foreground and background seeds are automatically obtained from the retinal layer segmentation result. Fourth, the serous PED is segmented based on the graph cut method. Finally, a post-processing step based on mathematical morphology is applied to remove false positive regions. The proposed method was tested on 20 SD-OCT volumes from 20 patients diagnosed with serous PED. The average true positive volume fraction (TPVF), false positive volume fraction (FPVF), dice similarity coefficient (DSC) and positive predictive value (PPV) were 97.19%, 0.03%, 96.34% and 95.59%, respectively. Linear regression analysis shows a strong correlation (r = 0.975) between the segmented PED volumes and the ground truth labeled by an ophthalmology expert. The proposed method can provide clinicians with accurate quantitative information, including the shape, size and position of the PED regions, which can assist diagnosis and treatment.
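The convex-hull estimation of Bruch's membrane can be illustrated in 2-D for a single B-scan: since a serous PED lifts the RPE toward smaller depths, the membrane is approximated by the convex envelope that bridges the bulge on the deep side. The function name and the toy depth profile below are ours, a hedged sketch of the idea rather than the authors' implementation.

```python
import numpy as np

def bruchs_membrane_estimate(rpe_depth):
    """Estimate Bruch's membrane from a segmented RPE curve.
    rpe_depth[i] is the RPE depth (larger = deeper) at A-scan i; a PED
    bulges the RPE to smaller depths, and the membrane is approximated
    by the convex envelope on the deep side (Andrew's monotone chain,
    keeping the hull side with maximal depth)."""
    x = np.arange(len(rpe_depth), dtype=float)
    pts = list(zip(x, np.asarray(rpe_depth, float)))
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            cross = (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1)
            # Pop while hull[-1] lies on the shallow side of the chord
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    # Interpolate hull vertices back onto every A-scan position
    return np.interp(x, hx, hy)
```

For a profile that dips upward over a detachment, the returned curve bridges straight across the dip, which is the behaviour wanted from the membrane estimate.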
Improvement and Extension of Shape Evaluation Criteria in Multi-Scale Image Segmentation
NASA Astrophysics Data System (ADS)
Sakamoto, M.; Honda, Y.; Kondo, A.
2016-06-01
Over the last decade, multi-scale image segmentation has attracted particular interest and is practically used for object-based image analysis. In this study, we address issues in multi-scale image segmentation, especially improving the validity of merging and the variety of derived region shapes. Firstly, we introduce constraints on the application of the spectral criterion that suppress excessive merging between dissimilar regions. Secondly, we extend the evaluation of the smoothness criterion by modifying the definition of the extent of the object, which controls shape diversity. Thirdly, we develop a new shape criterion, called aspect ratio, which helps improve the reproducibility of object shapes so that they match the actual objects of interest; it constrains the aspect ratio of the object's bounding box while keeping the properties controlled by conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results according to the shape characteristics of the target of interest. Furthermore, we also investigated a technique for quantitative and automatic parameterization in multi-scale image segmentation. This is achieved by comparing the segmentation result with a training area specified in advance, either maximizing the average area of the derived objects or optimizing an evaluation index, the F-measure. It has thus been possible to automate parameterization suited to the objectives, especially from the viewpoint of shape reproducibility.
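The F-measure used for automatic parameterization can be computed per pixel against the training area. A minimal sketch, assuming the standard precision/recall formulation:

```python
import numpy as np

def f_measure(seg, ref, beta=1.0):
    """Per-pixel F-measure of a candidate segmentation against a
    reference (training) area; beta weights recall versus precision."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    tp = np.logical_and(seg, ref).sum()
    precision = tp / seg.sum()
    recall = tp / ref.sum()
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

A parameter search would evaluate this score for candidate segmentation parameters and keep the setting that maximizes it.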
Incorporation of physical constraints in optimal surface search for renal cortex segmentation
NASA Astrophysics Data System (ADS)
Li, Xiuli; Chen, Xinjian; Yao, Jianhua; Zhang, Xing; Tian, Jie
2012-02-01
In this paper, we propose a novel approach to multiple-surface segmentation based on the incorporation of physical constraints in optimal surface searching. We apply the new approach to the renal cortex segmentation problem, an important but insufficiently researched issue. In order to better handle the intensity proximity of the renal cortex and renal column, we extend the optimal surface search approach to allow varying sampling distances and physical separation constraints, instead of the traditional fixed sampling distance and numerical separation constraints. The sampling distance of each vertex column is computed according to the sparsity of the local triangular mesh. A physical constraint learned from a priori renal cortex thickness is then applied to the inter-surface arcs as the separation constraint. Appropriate varying sampling distances and separation constraints were learned from 6 clinical CT images. After training, the proposed approach was tested on a set of 10 images, with manual segmentation of the renal cortex as the reference standard. Quantitative analysis of the segmented renal cortex indicates that overall segmentation accuracy increased after introducing the varying sampling distance and physical separation constraints: the average true positive volume fraction (TPVF) and false positive volume fraction (FPVF) were 83.96% and 2.80%, respectively, with varying sampling distance and physical separation constraints, compared to 74.10% and 0.18%, respectively, with fixed sampling distance and numerical separation constraints. The experimental results demonstrate the effectiveness of the proposed approach.
Variational-based segmentation of bio-pores in tomographic images
NASA Astrophysics Data System (ADS)
Bauer, Benjamin; Cai, Xiaohao; Peth, Stephan; Schladitz, Katja; Steidl, Gabriele
2017-01-01
X-ray computed tomography (CT) combined with quantitative analysis of the resulting volume images is a fruitful technique in soil science. However, the variations in X-ray attenuation due to different soil components make the segmentation of single components within these highly heterogeneous samples a challenging problem. Particularly demanding are bio-pores, due to their elongated shape and the low gray-value difference from the surrounding soil structure. Recently, variational models in connection with algorithms from convex optimization have been successfully applied to image segmentation. In this paper we apply these methods for the first time to the segmentation of bio-pores in CT images of soil samples. We introduce a novel convex model which enforces smooth boundaries of bio-pores and takes the varying attenuation values in depth into account. Segmentation results are reported for different real-world 3D data sets as well as for simulated data. These results are compared with two gray-value thresholding methods, namely indicator kriging and a global thresholding procedure, and with a morphological approach. The pros and cons of the methods are assessed by considering geometric features of the segmented bio-pore systems. The variational approach yields well-connected smooth pores but does not detect smaller or shallower pores. This is an advantage in cases where the main bio-pore network is of interest and where infillings, e.g., excrements of earthworms, would result in losing pore connections, as observed for the other thresholding methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suresh, Niraj; Stephens, Sean A.; Adams, Lexor
Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere, as well as in processes with important implications for climate change and forest management. Quantitative size information on roots in their native environment is invaluable for studying root growth and environmental processes involving the plant. X-ray computed tomography (XCT) has been demonstrated to be an effective tool for in situ root scanning and analysis. Our group at the Environmental Molecular Sciences Laboratory (EMSL) has developed an XCT-based tool to image and quantitatively analyze plant root structures in their native soil environment. XCT data collected on a Prairie dropseed (Sporobolus heterolepis) specimen was used to visualize its root structure. A combination of the open-source software RooTrak and DDV was employed to segment the root from the soil and to calculate its isosurface, respectively. Our own computer script, named 3DRoot-SV, was developed and used to calculate root volume and surface area from a triangular mesh. The process, utilizing a unique combination of tools from imaging to quantitative root analysis, including the 3DRoot-SV computer script, is described.
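The volume and surface-area computation from a closed triangular mesh can be sketched via the divergence theorem. This is an illustrative stand-in for what a script like 3DRoot-SV computes, not the authors' code.

```python
import numpy as np

def mesh_volume_and_area(vertices, faces):
    """Volume and surface area of a closed triangular mesh.
    Volume sums signed tetrahedron volumes against the origin
    (divergence theorem); faces must be consistently oriented."""
    v = np.asarray(vertices, float)
    f = np.asarray(faces, int)
    a, b, c = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    # Signed volume of each tetrahedron (origin, a, b, c)
    signed_vol = np.einsum('ij,ij->i', a, np.cross(b, c)) / 6.0
    # Triangle areas from the cross product of two edge vectors
    area = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    return abs(signed_vol.sum()), area.sum()
```

For example, the unit tetrahedron with outward-oriented faces yields a volume of 1/6 and a surface area of 3/2 + √3/2.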
Whittaker, Heather T; Zhu, Shenghua; Di Curzio, Domenico L; Buist, Richard; Li, Xin-Min; Noy, Suzanna; Wiseman, Frances K; Thiessen, Jonathan D; Martin, Melanie
2018-07-01
Alzheimer's disease (AD) pathology causes microstructural changes in the brain. These changes, if quantified with magnetic resonance imaging (MRI), could be studied for use as an early biomarker for AD. The aim of our study was to determine if T1 relaxation, diffusion tensor imaging (DTI), and quantitative magnetization transfer imaging (qMTI) metrics could reveal changes within the hippocampus and surrounding white matter structures in ex vivo transgenic mouse brains overexpressing human amyloid precursor protein with the Swedish mutation. Delineation of hippocampal cell layers using DTI color maps allows more detailed analysis of T1-weighted imaging, DTI, and qMTI metrics, compared with segmentation of gross anatomy based on relaxation images, and with analysis of DTI or qMTI metrics alone. These alterations are observed in the absence of robust intracellular Aβ accumulation or plaque deposition as revealed by histology. This work demonstrates that multiparametric quantitative MRI methods are useful for characterizing changes within the hippocampal substructures and surrounding white matter tracts of mouse models of AD. Copyright © 2018. Published by Elsevier Inc.
Sawa, Mitsuru
2011-03-01
1. Slit-lamp microscopy is a principal ophthalmic clinical method, because it provides microscopic findings of the anterior segment of the eye noninvasively. Its findings, however, are qualitative, and there are large inter-observer variations in their evaluation. Furthermore, slit-lamp microscopy provides morphological findings, but a functional evaluation is difficult. We developed two novel methods that establish a quantitative methodology for the slit-lamp microscope and for the pathophysiology of the anterior segment of the eye. One is the flare-cell photometer, which evaluates flare and cells in the aqueous humor of the eye; the other is an immunohistochemical examination method using tear fluid to evaluate ocular surface disorders. A comprehensive evaluation of these studies is overviewed herein. 2. INNOVATION OF THE FLARE-CELL PHOTOMETER AND ITS CLINICAL SIGNIFICANCE: The breakdown of the blood-aqueous barrier (BAB) causes an increase in protein (flare) and leakage of blood cells (cells) into the aqueous humor of the eye, and the severity of BAB breakdown has a positive correlation with the intensity of flare and cells. The flare and cells in the aqueous can be observed qualitatively by slit-lamp microscopy. These findings are distinguished optically primarily by light scattering. Therefore, detecting the intensity of light scattering due to flare and cells can evaluate BAB function. The flare-cell photometer comprises three novel components: a laser beam system as the incident light, a photomultiplier to detect scattered light intensity, and a computer-assisted system to operate the whole instrument and analyze the detected scattered-light signals due to flare and cells. The instrument enables us to analyze the flare and cells quantitatively, non-invasively and accurately over a wide dynamic measurement range, permitting repeated examination of each individual case.
It also enables the evaluation of inflammation in the aqueous not only postoperatively but also in endogenous uveitis, evaluation of the effects of anti-inflammatory drugs on the BAB, and evaluation of aqueous humor dynamics. Furthermore, repeating the examination can minimize inter-individual variations and reduce the number of animals in animal experiments. 3. Sampling of tears can be performed noninvasively, but the obtainable volume is limited. Therefore, the determination of target biomarkers and the development of micro-volume analysis methods for them play a crucial role in pathophysiological studies of the ocular surface. Target biomarkers should be determined according to the various specified bioactive substances, such as eosinophil cationic protein (ECP) and cytokines. A number of micro-volume analysis methods, such as chemiluminescent enzyme immunoassay, immunochromatography, micro-array systems and the polymerase chain reaction, are used. The disorders studied include allergic conjunctivitis and infectious diseases such as herpetic keratitis. Quantitative evaluation methods were investigated for ECP concentration, antigen-specific secretory IgA in allergic diseases and herpetic keratitis, herpes simplex virus DNA, and cytokine and chemokine profiles in tear fluid sampled by the filter-paper method. We developed a clinically applicable quantitative immunochemical method for ECP concentration in tear fluid. The results revealed that tear fluid analysis using the above-mentioned methods is clinically useful for investigating the pathophysiology of the ocular surface. 4. The laser flare-cell photometer and tear fluid analysis are potent clinical quantitative methods for investigating the pathophysiology of the anterior segment of the eye.
Kasiri, Keyvan; Kazemi, Kamran; Dehghani, Mohammad Javad; Helfroush, Mohammad Sadegh
2013-01-01
In this paper, we present a new semi-automatic brain tissue segmentation method based on a hybrid hierarchical approach that combines a brain atlas as a priori information with a least-squares support vector machine (LS-SVM). The method consists of three steps. In the first two steps, the skull is removed and the cerebrospinal fluid (CSF) is extracted. These two steps are performed using FMRIB's automated segmentation tool integrated in the FSL software (FSL-FAST), developed at the Oxford Centre for Functional MRI of the Brain (FMRIB). In the third step, the LS-SVM is used to segment grey matter (GM) and white matter (WM). The training samples for the LS-SVM are selected from the registered brain atlas. Voxel intensities and spatial positions are selected as the two feature groups for training and testing. The SVM is a powerful discriminator able to handle nonlinear classification problems; however, it cannot provide posterior probabilities. Thus, we use a sigmoid function to map the SVM output into probabilities. The proposed method is used to segment CSF, GM and WM from simulated magnetic resonance imaging (MRI) data generated with the BrainWeb MRI simulator and from real data provided by the Internet Brain Segmentation Repository. The semi-automatically segmented brain tissues were evaluated by comparison with the corresponding ground truth. The Dice and Jaccard similarity coefficients, sensitivity and specificity were calculated for quantitative validation of the results. The quantitative results show that the proposed method segments brain tissues accurately with respect to the corresponding ground truth. PMID:24696800
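The sigmoid mapping of SVM decision values to posterior probabilities can be sketched in the spirit of Platt scaling. The parameter values below are placeholders of our own choosing; in practice they are fitted on held-out data by minimizing the cross-entropy of the resulting probabilities.

```python
import numpy as np

def svm_output_to_probability(scores, a=-1.0, b=0.0):
    """Map raw SVM decision values to posterior probabilities with a
    sigmoid (Platt-style). Parameters a and b are illustrative defaults;
    they would normally be fitted to calibration data."""
    scores = np.asarray(scores, float)
    return 1.0 / (1.0 + np.exp(a * scores + b))
```

With a negative slope `a`, larger decision values map monotonically to probabilities closer to 1, which is the behaviour the segmentation step relies on.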
Chiu, Stephanie J; Toth, Cynthia A; Bowes Rickman, Catherine; Izatt, Joseph A; Farsiu, Sina
2012-05-01
This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attest to the accuracy of the presented technique.
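The Cartesian-to-polar unwrapping at the heart of this idea can be illustrated with a plain polar resampling around an assumed center, so that a closed contour enclosing the center becomes a roughly horizontal line that layer-segmentation machinery can track. This is a simplified sketch; the paper's quasi-polar transform is more general.

```python
import numpy as np
from scipy import ndimage as ndi

def to_polar(image, center, n_r, n_theta):
    """Resample `image` around `center` onto an (r, theta) grid.
    A closed contour around the center appears as a near-horizontal
    'layer' in the output, amenable to layer-style segmentation."""
    r = np.linspace(0, min(image.shape) / 2.0 - 1, n_r)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing='ij')
    rows = center[0] + rr * np.sin(tt)
    cols = center[1] + rr * np.cos(tt)
    # Bilinear interpolation at the sampled Cartesian positions
    return ndi.map_coordinates(image, [rows, cols], order=1)
```

For a bright disk centered at the chosen origin, every angular column shows the same bright-to-dark transition at the disk radius, i.e. the contour has become a line.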
An analysis of a discrete complex skill using Bernstein's stages of learning.
Smith, D R; McCabe, D R; Wilkerson, J D
2001-08-01
The purpose of this study was to provide quantitative data about changes in coordination after practicing a racquetball forehand drive serve. Novice women (N = 10) were videotaped before and after 10 min. of practicing a racquetball forehand drive serve on Day 1, and after 10-min. practice sessions on consecutive Days 2 through 5. The PEAK5 Motion Measurement System was used to evaluate the following dependent variables: (a) range of motion of the wrist, elbow, upper torso, and pelvis from backswing to ball contact; (b) racket head velocity at ball contact; and (c) coordination. Coordination was evaluated based on analysis of the angular velocity graphs of each performance to assess sequencing and timing of the segmental contributions. Shared positive contribution was assessed between adjacent two-segment combinations: pelvis-torso and elbow-wrist. A repeated-measures analysis of variance indicated that racket velocity, pelvic rotation, and upper torso rotation significantly increased over the 5 days of practice. Although participants increased their pelvic and torso ranges of motion and racket velocity, improvement in coordination was not documented.
Analysis of Alternatives for Dismantling of the Equipment in Building 117/1 at Ignalina NPP - 13278
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poskas, Povilas; Simonis, Audrius; Poskas, Gintautas
2013-07-01
Ignalina NPP operated two RBMK-1500 reactors, which are now under decommissioning. In this paper, alternatives for dismantling the equipment in Building 117/1 are analyzed. After analysis of the situation and collection of the primary information related to the components' physical and radiological characteristics, location and other data, two different alternatives for dismantling the equipment are formulated: in the first (A1), major components (vessels and pipes of the Emergency Core Cooling System - ECCS) are segmented/halved in situ using flame cutting (oxy-acetylene); in the second (A2), these components are segmented/halved at the workshop using the CAMC (Contact Arc Metal Cutting) technique. To select the preferable alternative, the MCDA method AHP (Analytic Hierarchy Process) is applied. A hierarchical list of decision criteria, necessary for assessing the performance of the alternatives, is formulated. Quantitative decision criteria values for these alternatives are calculated using the software DECRAD, developed by the Lithuanian Energy Institute Nuclear Engineering Laboratory, while qualitative decision criteria are evaluated using expert judgment. The analysis results show that alternative A1 is better than alternative A2. (authors)
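The AHP step can be sketched generically: priority weights are derived from a pairwise-comparison matrix and the judgments are checked for consistency. This is textbook AHP (geometric-mean approximation of the principal eigenvector and Saaty's consistency ratio), not the DECRAD implementation.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix via the
    geometric-mean (logarithmic least-squares) approximation of the
    principal eigenvector."""
    m = np.asarray(pairwise, float)
    g = np.prod(m, axis=1) ** (1.0 / m.shape[0])
    return g / g.sum()

def consistency_ratio(pairwise):
    """Saaty consistency ratio CR = CI / RI; CR < 0.1 is the usual
    acceptance threshold for judgment consistency."""
    m = np.asarray(pairwise, float)
    n = m.shape[0]
    lam = max(np.linalg.eigvals(m).real)  # principal eigenvalue
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # random index
    return ci / ri
```

A perfectly consistent matrix (every entry equal to a ratio of underlying weights) returns the weights themselves and a consistency ratio of zero.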
Carranco, Núria; Farrés-Cebrián, Mireia; Saurina, Javier
2018-01-01
A high-performance liquid chromatography method with ultraviolet detection (HPLC-UV) fingerprinting was applied for the analysis and characterization of olive oils, performed using a Zorbax Eclipse XDB-C8 reversed-phase column under gradient elution, employing 0.1% formic acid aqueous solution and methanol as the mobile phase. More than 130 edible oils, including monovarietal extra-virgin olive oils (EVOOs) and other vegetable oils, were analyzed. Principal component analysis results showed a noticeable discrimination between olive oils and other vegetable oils using raw HPLC-UV chromatographic profiles as data descriptors. However, selected HPLC-UV chromatographic time-window segments were necessary to achieve discrimination among monovarietal EVOOs. Partial least squares (PLS) regression was employed to tackle olive oil authentication of Arbequina EVOO adulterated with Picual EVOO, a refined olive oil, and sunflower oil. Highly satisfactory results were obtained after PLS analysis, with overall errors in the quantitation of adulteration of the Arbequina EVOO (minimum 2.5% adulterant) below 2.9%. PMID:29561820
Integrated Quantitative Transcriptome Maps of Human Trisomy 21 Tissues and Cells
Pelleri, Maria Chiara; Cattani, Chiara; Vitale, Lorenza; Antonaros, Francesca; Strippoli, Pierluigi; Locatelli, Chiara; Cocchi, Guido; Piovesan, Allison; Caracausi, Maria
2018-01-01
Down syndrome (DS) is due to the presence of an extra full or partial chromosome 21 (Hsa21). The identification of genes contributing to DS pathogenesis could be the key to any rational therapy of the associated intellectual disability. We aim to generate quantitative transcriptome maps in DS integrating all gene expression profile datasets available for any cell type or tissue, to obtain a complete model of the transcriptome in terms of both expression values for each gene and segmental trends of gene expression along each chromosome. We used the TRAM (Transcriptome Mapper) software for this meta-analysis, comparing transcript expression levels and profiles between DS and normal brain, lymphoblastoid cell lines, blood cells, fibroblasts, thymus and induced pluripotent stem cells, respectively. TRAM combined, normalized, and integrated datasets from different sources and across diverse experimental platforms. The main output was a linear expression value that may be used as a reference for each of up to 37,181 mapped transcripts analyzed, related to both known genes and expressed sequence tag (EST) clusters. An independent in vitro validation of the fibroblast transcriptome map data was performed through “Real-Time” reverse transcription polymerase chain reaction, showing an excellent correlation coefficient (r = 0.93, p < 0.0001) with the data obtained in silico. The availability of linear expression values for each gene allowed testing of the gene dosage hypothesis of the expected 3:2 DS/normal ratio for Hsa21 as well as other human genes in DS, in addition to listing genes differentially expressed with statistical significance. Although a fraction of Hsa21 genes escapes dosage effects, Hsa21 genes are selectively over-expressed in DS samples compared to genes from other chromosomes, reflecting a decisive role in the pathogenesis of the syndrome.
Finally, the analysis of chromosomal segments reveals a high prevalence of Hsa21 over-expressed segments over the other genomic regions, suggesting, in particular, a specific region on Hsa21 that appears to be frequently over-expressed (21q22). Our complete datasets are released as a new framework to investigate transcription in DS for individual genes as well as chromosomal segments in different cell types and tissues. PMID:29740474
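The gene-dosage test described above reduces to comparing per-gene DS/normal expression ratios against the expected 3:2 value. A minimal sketch with toy numbers (not TRAM output; the expression values are hypothetical):

```python
import numpy as np

def dosage_ratio(ds_expr, normal_expr):
    """Per-gene DS/normal expression ratio; the gene-dosage
    hypothesis predicts ~1.5 (3:2) for Hsa21 genes."""
    return np.asarray(ds_expr, float) / np.asarray(normal_expr, float)

# Hypothetical linear expression values: three Hsa21 genes following
# the dosage effect, one escaping it.
ds = np.array([30.0, 15.0, 9.0, 10.0])
nl = np.array([20.0, 10.0, 6.0, 10.0])
ratios = dosage_ratio(ds, nl)
print(ratios)                              # [1.5 1.5 1.5 1. ]
print(int(np.isclose(ratios, 1.5).sum()))  # 3 genes near the 3:2 ratio
```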
High content image analysis for human H4 neuroglioma cells exposed to CuO nanoparticles.
Li, Fuhai; Zhou, Xiaobo; Zhu, Jinmin; Ma, Jinwen; Huang, Xudong; Wong, Stephen T C
2007-10-09
High content screening (HCS)-based image analysis is becoming an important and widely used research tool. Capitalizing on this technology, ample cellular information can be extracted from high content cellular images. In this study, an automated, reliable and quantitative cellular image analysis system developed in house was employed to quantify the toxic responses of human H4 neuroglioma cells exposed to metal oxide nanoparticles; it proved to be an essential tool in our study. Cellular images of H4 neuroglioma cells exposed to different concentrations of CuO nanoparticles were acquired using the IN Cell Analyzer 1000. A fully automated cellular image analysis system was developed to perform the image analysis for cell viability. A multiple adaptive thresholding method was used to classify the pixels of the nuclei image into three classes: bright nuclei, dark nuclei, and background. During the development of our image analysis methodology, we achieved the following: (1) Gaussian filtering at a proper scale was applied to the cellular images to generate a single local intensity maximum inside each nucleus; (2) a novel local intensity maxima detection method based on the gradient vector field was established; and (3) a statistical-model-based splitting method was proposed to overcome the under-segmentation problem. Computational results indicate that 95.9% of nuclei can be detected and segmented correctly by the proposed image analysis system. The proposed automated image analysis system can effectively segment images of human H4 neuroglioma cells exposed to CuO nanoparticles. The computational results confirmed our biological finding that human H4 neuroglioma cells had a dose-dependent toxic response to the insult of CuO nanoparticles.
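The first two stages of the pipeline (Gaussian smoothing so that each nucleus carries a single local intensity maximum, then local-maxima detection) can be sketched with SciPy. The gradient-vector-field detector and the statistical splitting model of the paper are not reproduced here, and the window size and threshold below are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label, maximum_filter

def detect_nuclei(img, sigma=1.0, min_intensity=0.5):
    """Smooth, then keep pixels that are both a local maximum in a
    7x7 window and brighter than a minimum intensity; each surviving
    peak is a candidate nucleus seed."""
    smooth = gaussian_filter(img.astype(float), sigma)
    peaks = (smooth == maximum_filter(smooth, size=7)) & (smooth > min_intensity)
    _, n = label(peaks)
    return n

# Synthetic image: two Gaussian blobs standing in for nuclei.
img = np.zeros((64, 64))
img[20, 20] = img[40, 45] = 1.0
img = gaussian_filter(img, 3)
n = detect_nuclei(img, sigma=1.0, min_intensity=img.max() * 0.5)
print(n)  # 2 nuclei detected
```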
Gómez-Ríos, Germán Augusto; Gionfriddo, Emanuela; Poole, Justen; Pawliszyn, Janusz
2017-07-05
The direct interface of microextraction technologies to mass spectrometry (MS) has unquestionably revolutionized the speed and efficacy at which complex matrices are analyzed. Solid Phase Micro Extraction-Transmission Mode (SPME-TM) is a technology conceived as an effective synergy between sample preparation and ambient ionization. Succinctly, the device consists of a mesh coated with polymeric particles that extracts analytes of interest present in a given sample matrix. This coated mesh acts as a transmission-mode substrate for Direct Analysis in Real Time (DART), allowing for rapid and efficient thermal desorption/ionization of analytes previously concentrated on the coating, and dramatically lowering the limits of detection compared with standalone DART analysis. In this study, we present SPME-TM as a novel tool for the ultrafast enrichment of pesticides present in food and environmental matrices and their quantitative determination by MS via DART ionization. Limits of quantitation in the subnanogram-per-milliliter range can be attained, while total analysis time does not exceed 2 min per sample. In addition to targeted information obtained via tandem MS, retrospective analysis of the same sample via high-resolution mass spectrometry (HRMS) was accomplished by thermally desorbing a different segment of the microextraction device.
Sutton, Robert M; Niles, Dana; Nysaether, Jon; Abella, Benjamin S; Arbogast, Kristy B; Nishisaki, Akira; Maltese, Matthew R; Donoghue, Aaron; Bishnoi, Ram; Helfaer, Mark A; Myklebust, Helge; Nadkarni, Vinay
2009-08-01
Few data exist on pediatric cardiopulmonary resuscitation (CPR) quality. This study is the first to evaluate actual in-hospital pediatric CPR. We hypothesized that with bedside CPR training and corrective feedback, CPR quality can approach American Heart Association (AHA) targets. Using CPR recording/feedback defibrillators, quality of CPR was assessed for patients ≥8 years of age who suffered a cardiac arrest in the PICU or emergency department (ED). Before and during the study, a bedside CPR training program was initiated. Between October 2006 and February 2008, twenty events in 18 patients met inclusion criteria and resulted in 36749 evaluable chest compressions (CCs) during 392.3 minutes of arrest. CCs were shallow (<38 mm or <1.5 in) in 27.2% (9998 of 36749), with excessive residual leaning force (≥2500 g) in 23.4% (8611 of 36749). Segmental analysis demonstrated that shallow CCs and excessive residual leaning force were less prevalent during the first 5 minutes of the events. AHA targets were not achieved for CC rate in 62 (43.1%) of 144 segments, CC depth in 52 (36.1%) of 144 segments, and residual leaning force in 53 (36.8%) of 144 segments. This prospective, observational study demonstrates the feasibility of monitoring in-hospital pediatric CPR. Even with bedside CPR retraining and corrective audiovisual feedback, CPR quality frequently did not meet AHA targets. Importantly, the no-flow fraction target of 10% was achieved. Future studies should investigate novel educational methods and targeted feedback technologies.
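The reported percentages follow directly from the raw counts given in the abstract; a quick arithmetic check:

```python
# Compression-level metrics from the abstract's raw counts.
total_cc = 36749
shallow = 9998    # compressions < 38 mm deep
leaning = 8611    # residual leaning force >= 2500 g
print(round(100 * shallow / total_cc, 1))  # 27.2 (% shallow)
print(round(100 * leaning / total_cc, 1))  # 23.4 (% with excess leaning)

# Per-segment AHA-target failures out of 144 analyzed segments
# (rate, depth, leaning).
for failed in (62, 52, 53):
    print(round(100 * failed / 144, 1))    # 43.1, 36.1, 36.8
```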
Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin
2013-01-01
Automated image analysis of histopathology specimens could potentially provide support for early detection and improved characterization of breast cancer. Automated segmentation of the cells comprising imaged tissue microarrays (TMA) is a prerequisite for any subsequent quantitative analysis. Unfortunately, crowding and overlapping of cells present significant challenges for most traditional segmentation algorithms. In this paper, we propose a novel algorithm which can reliably separate touching cells in hematoxylin stained breast TMA specimens which have been acquired using a standard RGB camera. The algorithm is composed of two steps. It begins with a fast, reliable object center localization approach which utilizes single-path voting followed by mean-shift clustering. Next, the contour of each cell is obtained using a level set algorithm based on an interactive model. We compared the experimental results with those reported in the most current literature. Finally, performance was evaluated by comparing the pixel-wise accuracy provided by human experts with that produced by the new automated segmentation algorithm. The method was systematically tested on 234 image patches exhibiting dense overlap and containing more than 2200 cells. It was also tested on whole slide images including blood smears and tissue microarrays containing thousands of cells. Since the voting step of the seed detection algorithm is well suited for parallelization, a parallel version of the algorithm was implemented using graphic processing units (GPU) which resulted in significant speed-up over the C/C++ implementation. PMID:22167559
Fully automatic segmentation of white matter hyperintensities in MR images of the elderly.
Admiraal-Behloul, F; van den Heuvel, D M J; Olofsen, H; van Osch, M J P; van der Grond, J; van Buchem, M A; Reiber, J H C
2005-11-15
The role of quantitative image analysis in large clinical trials is continuously increasing. Several methods are available for performing white matter hyperintensity (WMH) volume quantification. They vary in the amount of human interaction involved. In this paper, we describe a fully automatic segmentation that was used to quantify WMHs in a large clinical trial on elderly subjects. Our segmentation method combines information from 3 different MR images: proton density (PD), T2-weighted and fluid-attenuated inversion recovery (FLAIR) images; our method uses an established artificial intelligence technique (a fuzzy inference system) and does not require extensive computations. The reproducibility of the segmentation was evaluated in 9 patients who underwent scan-rescan with repositioning; an intraclass correlation coefficient (ICC) of 0.91 was obtained. The effect of differences in image resolution was tested in 44 patients, scanned with 6- and 3-mm slice thickness FLAIR images; we obtained an ICC value of 0.99. The accuracy of the segmentation was evaluated on 100 patients for whom manual delineation of WMHs was available; the obtained ICC was 0.98 and the similarity index was 0.75. Besides the fact that the approach demonstrated very high volumetric and spatial agreement with expert delineation, the software did not require more than 2 min per patient (from loading the images to saving the results) on a Pentium-4 processor (512 MB RAM).
Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry
NASA Astrophysics Data System (ADS)
Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio
2016-03-01
Information about the size of a tumor and its temporal evolution is needed for the diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) imaging data of 14 patients with newly diagnosed glioblastoma, encompassing 64 MR acquisitions ranging from preoperative up to 12-month follow-up images, were analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83-0.96, p < 0.001) were observed between the volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to the estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute for manual volumetric follow-up of these tumor compartments.
What limits the achievable areal densities of large aperture space telescopes?
NASA Astrophysics Data System (ADS)
Peterson, Lee D.; Hinkle, Jason D.
2005-08-01
This paper examines requirements trades involving areal density for large space telescope mirrors. A segmented mirror architecture is used to define a quantitative example that leads to relevant insight about the trades. In this architecture, the mirror consists of segments of non-structural optical elements held in place by a structural truss that rests behind the segments. An analysis is presented of the driving design requirements for typical on-orbit loads and ground-test loads. It is shown that the driving on-orbit load would be the resonance of the lowest mode of the mirror by a reaction wheel static unbalance. The driving ground-test load would be dynamics due to ground-induced random vibration. Two general conclusions are derived from these results. First, the areal density that can be allocated to the segments depends on the depth allocated to the structure. More depth in the structure allows the allocation of more mass to the segments; this, however, leads to a large structural depth that might be a significant development challenge. Second, the requirement for ground testability results in an order of magnitude or more depth in the structure than is required by the on-orbit loads. This leads to the proposition that avoiding ground test as a driving requirement should be a fundamental technology on par with the provision of deployable depth. Both are important structural challenges for these future systems.
Norman, Berk; Pedoia, Valentina; Majumdar, Sharmila
2018-03-27
Purpose: To analyze how automatic segmentation translates in accuracy and precision to morphology and relaxometry compared with manual segmentation, and to increase the speed and accuracy of the work flow that uses quantitative magnetic resonance (MR) imaging to study knee degenerative diseases such as osteoarthritis (OA). Materials and Methods: This retrospective study involved the analysis of 638 MR imaging volumes from two data cohorts acquired at 3.0 T: (a) spoiled gradient-recalled acquisition in the steady state T1ρ-weighted images and (b) three-dimensional (3D) double-echo steady-state (DESS) images. A deep learning model based on the U-Net convolutional network architecture was developed to perform automatic segmentation. Cartilage and meniscus compartments were manually segmented by skilled technicians and radiologists for comparison. Performance of the automatic segmentation was evaluated on Dice coefficient overlap with the manual segmentation, as well as on the automatic segmentations' ability to quantify, in a longitudinally repeatable way, relaxometry and morphology. Results: The models produced strong Dice coefficients, particularly for 3D-DESS images, ranging from 0.770 to 0.878 across the cartilage compartments, and reaching 0.809 and 0.753 for the lateral and medial menisci, respectively. The models averaged 5 seconds to generate the automatic segmentations. Average correlations between manual and automatic quantification of T1ρ and T2 values were 0.8233 and 0.8603, respectively, and 0.9349 and 0.9384 for volume and thickness, respectively. Longitudinal precision of the automatic method was comparable with that of the manual one. Conclusion: U-Net demonstrates efficacy and precision in quickly generating accurate segmentations that can be used to extract relaxation times and morphologic characterization for the monitoring and diagnosis of OA. © RSNA, 2018. Online supplemental material is available for this article.
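The Dice coefficient used to evaluate the segmentations is the standard overlap measure 2|A∩B|/(|A|+|B|). A minimal NumPy sketch on toy masks (not the U-Net pipeline itself):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A & B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((8, 8), bool);   auto[2:6, 2:6] = True    # 16 px "automatic" mask
manual = np.zeros((8, 8), bool); manual[3:7, 3:7] = True  # 16 px, shifted by 1
print(dice(auto, manual))  # 0.5625  (9 px overlap -> 18/32)
```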
Pouch, Alison M; Aly, Ahmed H; Lai, Eric K; Yushkevich, Natalie; Stoffers, Rutger H; Gorman, Joseph H; Cheung, Albert T; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A
2017-09-01
Transesophageal echocardiography is the primary imaging modality for preoperative assessment of mitral valves with ischemic mitral regurgitation (IMR). While there are well known echocardiographic insights into the 3D morphology of mitral valves with IMR, such as annular dilation and leaflet tethering, less is understood about how quantification of valve dynamics can inform surgical treatment of IMR or predict short-term recurrence of the disease. As a step towards filling this knowledge gap, we present a novel framework for 4D segmentation and geometric modeling of the mitral valve in real-time 3D echocardiography (rt-3DE). The framework integrates multi-atlas label fusion and template-based medial modeling to generate quantitatively descriptive models of valve dynamics. The novelty of this work is that temporal consistency in the rt-3DE segmentations is enforced during both the segmentation and modeling stages with the use of groupwise label fusion and Kalman filtering. The algorithm is evaluated on rt-3DE data series from 10 patients: five with normal mitral valve morphology and five with severe IMR. In these 10 data series that total 207 individual 3DE images, each 3DE segmentation is validated against manual tracing and temporal consistency between segmentations is demonstrated. The ultimate goal is to generate accurate and consistent representations of valve dynamics that can both visually and quantitatively provide insight into normal and pathological valve function.
Automated unsupervised multi-parametric classification of adipose tissue depots in skeletal muscle
Valentinitsch, Alexander; Karampinos, Dimitrios C.; Alizai, Hamza; Subburaj, Karupppasamy; Kumar, Deepak; Link, Thomas M.; Majumdar, Sharmila
2012-01-01
Purpose To introduce and validate an automated unsupervised multi-parametric method for segmentation of the subcutaneous fat and muscle regions in order to determine subcutaneous adipose tissue (SAT) and intermuscular adipose tissue (IMAT) areas based on data from a quantitative chemical shift-based water-fat separation approach. Materials and Methods Unsupervised standard k-means clustering was employed to define sets of similar features (k = 2) within the whole multi-modal image after the water-fat separation. The automated image processing chain was composed of three primary stages including tissue, muscle and bone region segmentation. The algorithm was applied on calf and thigh datasets to compute SAT and IMAT areas and was compared to a manual segmentation. Results The IMAT area using the automatic segmentation had excellent agreement with the IMAT area using the manual segmentation for all the cases in the thigh (R2: 0.96) and for cases with up to moderate IMAT area in the calf (R2: 0.92). The group with the highest grade of muscle fat infiltration in the calf had the highest error in the inner SAT contour calculation. Conclusion The proposed multi-parametric segmentation approach combined with quantitative water-fat imaging provides an accurate and reliable method for an automated calculation of the SAT and IMAT areas reducing considerably the total post-processing time. PMID:23097409
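The clustering step (standard k-means with k = 2) can be illustrated on a single hypothetical per-pixel feature such as a fat fraction; this plain NumPy sketch stands in for the full multi-modal clustering applied after water-fat separation:

```python
import numpy as np

def kmeans2_1d(x, iters=20):
    """Plain k = 2 k-means on a 1-D feature: alternate nearest-centroid
    assignment and centroid update."""
    x = np.asarray(x, float)
    c = np.array([x.min(), x.max()])  # initialize centroids at the extremes
    for _ in range(iters):
        lab = (np.abs(x - c[0]) > np.abs(x - c[1])).astype(int)
        for k in (0, 1):
            if np.any(lab == k):
                c[k] = x[lab == k].mean()
    return lab, c

# Hypothetical fat fractions: muscle-like pixels ~0.1, adipose ~0.9.
x = np.array([0.05, 0.10, 0.12, 0.85, 0.90, 0.95])
lab, centers = kmeans2_1d(x)
print(lab.tolist())  # [0, 0, 0, 1, 1, 1]
```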
NASA Astrophysics Data System (ADS)
Titschack, J.; Baum, D.; Matsuyama, K.; Boos, K.; Färber, C.; Kahl, W.-A.; Ehrig, K.; Meinel, D.; Soriano, C.; Stock, S. R.
2018-06-01
During the last decades, X-ray (micro-)computed tomography has gained increasing attention for the description of porous skeletal and shell structures of various organism groups. However, their quantitative analysis is often hampered by the difficulty to discriminate cavities and pores within the object from the surrounding region. Herein, we test the ambient occlusion (AO) algorithm and newly implemented optimisations for the segmentation of cavities (implemented in the software Amira). The segmentation accuracy is evaluated as a function of (i) changes in the ray length input variable, and (ii) the usage of AO (scalar) field and other AO-derived (scalar) fields. The results clearly indicate that the AO field itself outperforms all other AO-derived fields in terms of segmentation accuracy and robustness against variations in the ray length input variable. The newly implemented optimisations improved the AO field-based segmentation only slightly, while the segmentations based on the AO-derived fields improved considerably. Additionally, we evaluated the potential of the AO field and AO-derived fields for the separation and classification of cavities as well as skeletal structures by comparing them with commonly used distance-map-based segmentations. For this, we tested the zooid separation within a bryozoan colony, the stereom classification of an ophiuroid tooth, the separation of bioerosion traces within a marble block and the calice (central cavity)-pore separation within a dendrophyllid coral. The obtained results clearly indicate that the ideal input field depends on the three-dimensional morphology of the object of interest. The segmentations based on the AO-derived fields often provided cavity separations and skeleton classifications that were superior to or impossible to obtain with commonly used distance-map-based segmentations. 
The combined usage of various AO-derived fields by supervised or unsupervised segmentation algorithms might provide a promising target for future research to further improve the results for this kind of high-end data segmentation and classification. Furthermore, the application of the developed segmentation algorithm is not restricted to X-ray (micro-)computed tomographic data but may potentially be useful for the segmentation of 3D volume data from other sources.
Ahlers, C; Simader, C; Geitzenauer, W; Stock, G; Stetson, P; Dastmalchi, S; Schmidt-Erfurth, U
2008-02-01
Conventional optical coherence tomography (OCT) is compromised by a limited number of scans when tracking chorioretinal disease in its full extent. Failures in edge-detection algorithms falsify the results of retinal mapping even further. High-definition OCT (HD-OCT) is based on raster scanning and was used to visualise the localisation and volume of intra- and sub-pigment-epithelial (RPE) changes in fibrovascular pigment epithelial detachments (fPED). Two different scanning patterns were evaluated. 22 eyes with fPED were imaged using a frequency-domain, high-speed prototype of the Cirrus HD-OCT. The axial resolution was 6 μm, and the scanning speed was 25,000 A-scans per second. Two different scanning patterns covering an area of 6 × 6 mm in the macular retina were compared. Three-dimensional topographic reconstructions and volume calculations were performed using MATLAB-based automatic segmentation software. Detailed information about layer-specific distribution of fluid accumulation and volumetric measurements can be obtained for retinal and sub-RPE volumes. Both raster scans show a high correlation (p<0.01; R2>0.89) of measured values, that is, PED volume/area, retinal volume and mean retinal thickness. Quality control of the automatic segmentation revealed reasonable results in over 90% of the examinations. Automatic segmentation allows for detailed quantitative and topographic analysis of the RPE and the overlying retina. In fPED, the 128 × 512 scanning pattern shows mild advantages when compared with the 256 × 256 scan. Together with the ability for automatic segmentation, HD-OCT clearly improves the clinical monitoring of chorioretinal disease by adding relevant new parameters. HD-OCT is likely capable of enhancing the understanding of pathophysiology and benefits of treatment for current anti-CNV strategies in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zellars, Richard, E-mail: zellari@jhmi.edu; Bravo, Paco E.; Tryggestad, Erik
2014-03-15
Purpose: Cardiac muscle perfusion, as determined by single-photon emission computed tomography (SPECT), decreases after breast and/or chest wall (BCW) irradiation. The active breathing coordinator (ABC) enables radiation delivery when the BCW is farther from the heart, thereby decreasing cardiac exposure. We hypothesized that ABC would prevent radiation-induced cardiac toxicity and conducted a randomized controlled trial evaluating myocardial perfusion changes after radiation for left-sided breast cancer with or without ABC. Methods and Materials: Stages I to III left breast cancer patients requiring adjuvant radiation therapy (XRT) were randomized to ABC or No-ABC. Myocardial perfusion was evaluated by SPECT scans (before and 6 months after BCW radiation) using 2 methods: (1) fully automated quantitative polar mapping; and (2) semiquantitative visual assessment. The left ventricle was divided into 20 segments for the polar map and 17 segments for the visual method. Segments were grouped by anatomical rings (apical, mid, basal) or by coronary artery distribution. For the visual method, 2 nuclear medicine physicians, blinded to treatment groups, scored each segment's perfusion. Scores were analyzed with nonparametric tests and linear regression. Results: Between 2006 and 2010, 57 patients were enrolled and 43 were available for analysis. The cohorts were well matched. The apical and left anterior descending coronary artery segments had significant decreases in perfusion on SPECT scans in both ABC and No-ABC cohorts. In unadjusted and adjusted analyses, controlling for pretreatment perfusion score, age, and chemotherapy, ABC was not significantly associated with prevention of perfusion deficits. Conclusions: In this randomized controlled trial, ABC does not appear to prevent radiation-induced cardiac perfusion deficits.
Manavella, Valeria; Romano, Federica; Garrone, Federica; Terzini, Mara; Bignardi, Cristina; Aimetti, Mario
2017-06-01
The aim of this study was to present and validate a novel procedure for the quantitative volumetric assessment of extraction sockets that combines cone-beam computed tomography (CBCT) and image processing techniques. The CBCT dataset of 9 severely resorbed extraction sockets was analyzed by means of two image processing software packages, ImageJ and Mimics, using manual and automated segmentation techniques. These were also applied to 5-mm spherical aluminum markers of known volume and to a polyvinyl chloride model of one alveolar socket scanned with micro-CT to test accuracy. Statistical differences in alveolar socket volume were found between the different methods of volumetric analysis (P<0.0001). The automated segmentation using Mimics was the most reliable and accurate method, with a relative error of 1.5%, considerably smaller than the errors of 7% and 10% introduced by the manual method using Mimics and by the automated method using ImageJ. The currently proposed automated segmentation protocol for the three-dimensional rendering of alveolar sockets showed more accurate results, excellent inter-observer similarity and increased user friendliness. The clinical application of this method enables a three-dimensional evaluation of extraction socket healing after reconstructive procedures and during follow-up visits.
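The accuracy figure above is a relative volume error against a reference of known volume; for the 5-mm spherical markers the reference follows from V = 4/3·πr³ with r = 2.5 mm. A small worked check (the segmented volume below is hypothetical):

```python
import math

def relative_error_pct(measured, reference):
    """Relative error of a measured volume, as a percentage."""
    return 100.0 * abs(measured - reference) / reference

v_true = 4.0 / 3.0 * math.pi * 2.5 ** 3  # ~65.45 mm^3 for a 5-mm sphere
v_seg = 66.4                             # hypothetical segmented volume
print(round(relative_error_pct(v_seg, v_true), 1))  # 1.5 (%)
```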
Mazzaferri, Javier; Larrivée, Bruno; Cakir, Bertan; Sapieha, Przemyslaw; Costantino, Santiago
2018-03-02
Preclinical studies of vascular retinal diseases rely on the assessment of developmental dystrophies in the oxygen induced retinopathy rodent model. The quantification of vessel tufts and avascular regions is typically computed manually from flat mounted retinas imaged using fluorescent probes that highlight the vascular network. Such manual measurements are time-consuming and hampered by user variability and bias, thus a rapid and objective method is needed. Here, we introduce a machine learning approach to segment and characterize vascular tufts, delineate the whole vasculature network, and identify and analyze avascular regions. Our quantitative retinal vascular assessment (QuRVA) technique uses a simple machine learning method and morphological analysis to provide reliable computations of vascular density and pathological vascular tuft regions, devoid of user intervention within seconds. We demonstrate the high degree of error and variability of manual segmentations, and designed, coded, and implemented a set of algorithms to perform this task in a fully automated manner. We benchmark and validate the results of our analysis pipeline using the consensus of several manually curated segmentations using commonly used computer tools. The source code of our implementation is released under version 3 of the GNU General Public License ( https://www.mathworks.com/matlabcentral/fileexchange/65699-javimazzaf-qurva ).
Fast globally optimal segmentation of cells in fluorescence microscopy images.
Bergeest, Jan-Philip; Rohr, Karl
2011-01-01
Accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression in high-throughput screening applications. We propose a new approach for segmenting cell nuclei which is based on active contours and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images of different cell types. We have also performed a quantitative comparison with previous segmentation approaches.
Herrera, Victoria L M; Pasion, Khristine A; Moran, Ann Marie; Ruiz-Opazo, Nelson
2012-01-01
The detection of multiple sex-specific blood pressure (BP) quantitative trait loci (QTLs) in independent total genome analyses of F2 (Dahl S x R)-intercross male and female rat cohorts confirms clinical observations of sex-specific disease cause and response to treatment among hypertensive patients, and mandates the identification of sex-specific hypertension genes/mechanisms. We developed and studied two congenic strains, S.R5A and S.R5B, introgressing Dahl R chromosome 5 segments into the Dahl S chromosome 5 region spanning the putative BP-f1 and BP-f2 QTLs. Radiotelemetric non-stressed 24-hour BP analysis at four weeks after a high-salt diet (8% NaCl) challenge identified only S.R5B congenic rats with lower SBP (-26.5 mmHg, P = 0.002), DBP (-23.7 mmHg, P = 0.004) and MAP (-25.1 mmHg, P = 0.002) compared with Dahl S female controls at four months of age, confirming the BP-f1 but not the BP-f2 QTL on rat chromosome 5. The S.R5B congenic segment did not affect pulse pressure or relative heart weight, indicating that the gene underlying BP-f1 does not influence arterial stiffness or cardiac hypertrophy. The results of our congenic analysis narrowed BP-f1 to chromosome 5 coordinates 134.9-141.5 Mbp, setting up the basis for further fine mapping of BP-f1 and eventual identification of the specific gene variant accounting for the BP-f1 effect on blood pressure.
NASA Astrophysics Data System (ADS)
Bhattacharya, Debanjali; Sinha, Neelam; Saini, Jitender
2017-03-01
Multiple system atrophy (MSA) is a rare, incurable, progressive neurodegenerative disorder affecting the nervous system and movement, and it poses a considerable diagnostic challenge to medical researchers. Because the corpus callosum (CC) is the largest white matter structure in the brain, enabling inter-hemispheric communication, quantification of callosal atrophy may provide vital information at the earliest possible stages. The main objective is to identify differences in CC structure in this disease, based on quantitative analysis of the pattern of callosal atrophy. We report results of quantifying structural changes in regional anatomical thickness, area and length of the CC in patient groups with MSA with respect to healthy controls. The method isolates the mid-sagittal CC, parcellates it into 100 segments along its length, and measures the width of each segment. It also measures areas within five geometrically defined callosal compartments of the well-known Witelson and Hofer-Frahm schemes. For quantification, statistical tests are performed on these different callosal measurements. The statistical analysis shows that, compared to healthy controls, width is reduced drastically throughout the CC in the MSA group, and the changes in area and length are also significant. The study is further extended to check whether any significant difference in thickness exists between the two variants of MSA, Parkinsonian MSA and cerebellar MSA, using the same methodology; for area and length, however, no substantial difference is obtained between these two sub-MSA groups. The study is performed on twenty subjects each for the control and MSA groups, all of whom had T1-weighted MRI.
Camomilla, Valentina; Cereatti, Andrea; Cutti, Andrea Giovanni; Fantozzi, Silvia; Stagni, Rita; Vannozzi, Giuseppe
2017-08-18
Quantitative gait analysis can provide a description of joint kinematics and dynamics, and it is recognized as a clinically useful tool for functional assessment, diagnosis and intervention planning. Clinically interpretable parameters are estimated from quantitative measures (e.g. ground reaction forces, skin marker trajectories) through biomechanical modelling. In particular, the estimation of joint moments during motion is grounded on several modelling assumptions: (1) body segmental and joint kinematics are derived from the trajectories of markers and by modelling the human body as a kinematic chain; (2) joint resultant (net) loads are, usually, derived from force plate measurements through a model of segmental dynamics. Therefore, both measurement errors and modelling assumptions can affect the results, to an extent that also depends on the characteristics of the motor task analysed (e.g. gait speed). Errors affecting the trajectories of joint centres, the orientation of joint functional axes, the joint angular velocities, the accuracy of inertial parameters and force measurements (concurring to the definition of the dynamic model), can weigh differently in the estimation of clinically interpretable joint moments. Numerous studies addressed all these methodological aspects separately, but a critical analysis of how these aspects may affect the clinical interpretation of joint dynamics is still missing. This article aims at filling this gap through a systematic review of the literature, conducted on Web of Science, Scopus and PubMed. The final objective is hence to provide clear take-home messages to guide laboratories in the estimation of joint moments for the clinical practice.
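Assumption (2) above, deriving a net joint load from force-plate measurements through a segmental dynamics model, can be made concrete with a worked single-segment example. The sketch below is a simplified 2D Newton-Euler computation of a net ankle moment from a ground reaction force; it is illustrative only (the segment parameters, function name, and sign convention are hypothetical, not taken from the review).

```python
import numpy as np

def ankle_moment_2d(ankle_xy, cop_xy, grf_xy, foot_mass=1.0, foot_com_xy=None,
                    foot_inertia=0.01, foot_alpha=0.0, foot_acc_xy=(0.0, 0.0)):
    """2D Newton-Euler inverse dynamics for a single (foot) segment.
    Returns the net ankle moment in N*m (positive = counter-clockwise).
    Illustrative sketch with hypothetical default segment parameters."""
    g = np.array([0.0, -9.81])                       # gravity, m/s^2
    grf = np.asarray(grf_xy, float)                  # ground reaction force
    acc = np.asarray(foot_acc_xy, float)             # COM linear acceleration
    ankle = np.asarray(ankle_xy, float)
    cop = np.asarray(cop_xy, float)                  # centre of pressure
    com = np.asarray(foot_com_xy if foot_com_xy is not None else ankle_xy, float)
    # force balance on the segment: F_ankle + GRF + m*g = m*a
    f_ankle = foot_mass * acc - grf - foot_mass * g
    # moment balance about the COM: M_ankle + r_a x F_ankle + r_c x GRF = I*alpha
    cross = lambda r, f: r[0] * f[1] - r[1] * f[0]   # 2D scalar cross product
    return (foot_inertia * foot_alpha
            - cross(ankle - com, f_ankle)
            - cross(cop - com, grf))
```

The example makes the review's point tangible: errors in the centre-of-pressure location, the joint-centre position, or the inertial parameters all enter this equation directly and therefore propagate into the estimated moment.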
VIPAR, a quantitative approach to 3D histopathology applied to lymphatic malformations
Hägerling, René; Drees, Dominik; Scherzinger, Aaron; Dierkes, Cathrin; Martin-Almedina, Silvia; Butz, Stefan; Gordon, Kristiana; Schäfers, Michael; Hinrichs, Klaus; Vestweber, Dietmar; Goerge, Tobias; Mansour, Sahar; Mortimer, Peter S.
2017-01-01
BACKGROUND. Lack of investigatory and diagnostic tools has been a major contributing factor to the failure to mechanistically understand lymphedema and other lymphatic disorders in order to develop effective drug and surgical therapies. One difficulty has been understanding the true changes in lymph vessel pathology from standard 2D tissue sections. METHODS. VIPAR (volume information-based histopathological analysis by 3D reconstruction and data extraction), a light-sheet microscopy–based approach for the analysis of tissue biopsies, is based on digital reconstruction and visualization of microscopic image stacks. VIPAR allows semiautomated segmentation of the vasculature and subsequent nonbiased extraction of characteristic vessel shape and connectivity parameters. We applied VIPAR to analyze biopsies from healthy, lymphedematous, and lymphangiomatous skin. RESULTS. Digital 3D reconstruction provided a directly visually interpretable, comprehensive representation of the lymphatic and blood vessels in the analyzed tissue volumes. The most conspicuous features were disrupted lymphatic vessels in lymphedematous skin and a hyperplasia (4.36-fold lymphatic vessel volume increase) in the lymphangiomatous skin. Both abnormalities were detected by the connectivity analysis based on extracted vessel shape and structure data. The quantitative evaluation of extracted data revealed a significant reduction of lymphatic segment length (51.3% and 54.2%) and straightness (89.2% and 83.7%) for lymphedematous and lymphangiomatous skin, respectively. Blood vessel length was significantly increased in the lymphangiomatous sample (239.3%). CONCLUSION. VIPAR is a volume-based tissue reconstruction, data extraction, and analysis approach that successfully distinguished healthy from lymphedematous and lymphangiomatous skin. Its application is not limited to the vascular system or skin. FUNDING. Max Planck Society, DFG (SFB 656), and Cells-in-Motion Cluster of Excellence EXC 1003.
PMID:28814672
VIPAR, a quantitative approach to 3D histopathology applied to lymphatic malformations.
Hägerling, René; Drees, Dominik; Scherzinger, Aaron; Dierkes, Cathrin; Martin-Almedina, Silvia; Butz, Stefan; Gordon, Kristiana; Schäfers, Michael; Hinrichs, Klaus; Ostergaard, Pia; Vestweber, Dietmar; Goerge, Tobias; Mansour, Sahar; Jiang, Xiaoyi; Mortimer, Peter S; Kiefer, Friedemann
2017-08-17
Lack of investigatory and diagnostic tools has been a major contributing factor to the failure to mechanistically understand lymphedema and other lymphatic disorders in order to develop effective drug and surgical therapies. One difficulty has been understanding the true changes in lymph vessel pathology from standard 2D tissue sections. VIPAR (volume information-based histopathological analysis by 3D reconstruction and data extraction), a light-sheet microscopy-based approach for the analysis of tissue biopsies, is based on digital reconstruction and visualization of microscopic image stacks. VIPAR allows semiautomated segmentation of the vasculature and subsequent nonbiased extraction of characteristic vessel shape and connectivity parameters. We applied VIPAR to analyze biopsies from healthy, lymphedematous, and lymphangiomatous skin. Digital 3D reconstruction provided a directly visually interpretable, comprehensive representation of the lymphatic and blood vessels in the analyzed tissue volumes. The most conspicuous features were disrupted lymphatic vessels in lymphedematous skin and a hyperplasia (4.36-fold lymphatic vessel volume increase) in the lymphangiomatous skin. Both abnormalities were detected by the connectivity analysis based on extracted vessel shape and structure data. The quantitative evaluation of extracted data revealed a significant reduction of lymphatic segment length (51.3% and 54.2%) and straightness (89.2% and 83.7%) for lymphedematous and lymphangiomatous skin, respectively. Blood vessel length was significantly increased in the lymphangiomatous sample (239.3%). VIPAR is a volume-based tissue reconstruction, data extraction, and analysis approach that successfully distinguished healthy from lymphedematous and lymphangiomatous skin. Its application is not limited to the vascular system or skin. Max Planck Society, DFG (SFB 656), and Cells-in-Motion Cluster of Excellence EXC 1003.
Modeling 4D Pathological Changes by Leveraging Normative Models
Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Saha, Avishek; Liu, Wei; Goh, S.Y. Matthew; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido
2016-01-01
With the increasing use of efficient multimodal 3D imaging, clinicians are able to access longitudinal imaging to stage pathological diseases, to monitor the efficacy of therapeutic interventions, or to assess and quantify rehabilitation efforts. Analysis of such four-dimensional (4D) image data presenting pathologies, including disappearing and newly appearing lesions, represents a significant challenge due to the presence of complex spatio-temporal changes. Image analysis methods for such 4D image data have to include not only a concept for joint segmentation of 3D datasets to account for inherent correlations of subject-specific repeated scans but also a mechanism to account for large deformations and the destruction and formation of lesions (e.g., edema, bleeding) due to underlying physiological processes associated with damage, intervention, and recovery. In this paper, we propose a novel joint segmentation-registration framework to tackle the inherent problem of image registration in the presence of objects not present in all images of the time series. Our methodology models 4D changes in pathological anatomy across time and also provides an explicit mapping of a healthy normative template to a subject’s image data with pathologies. Since atlas-moderated segmentation methods cannot explain the appearance and location of pathological structures that are not represented in the template atlas, the new framework provides different options for initialization via a supervised learning approach, iterative semisupervised active learning, and also transfer learning, which results in a fully automatic 4D segmentation method. We demonstrate the effectiveness of our novel approach with synthetic experiments and a 4D multimodal MRI dataset of severe traumatic brain injury (TBI), including validation via comparison to expert segmentations.
However, the proposed methodology is generic in regard to different clinical applications requiring quantitative analysis of 4D imaging representing spatio-temporal changes of pathologies. PMID:27818606
Canclini, S; Terzi, A; Rossini, P; Vignati, A; La Canna, G; Magri, G C; Pizzocaro, C; Giubbini, R
2001-01-01
Multigated radionuclide ventriculography (MUGA) is a simple and reliable tool for the assessment of global systolic and diastolic function, and in several studies it is still considered a standard for the assessment of left ventricular ejection fraction. However, the evaluation of regional wall motion by MUGA is limited by its two-dimensional imaging, and its clinical use is progressively declining in favor of echocardiography. Tomographic MUGA (T-MUGA) is not widely adopted in clinical practice. The aim of this study was to compare T-MUGA to planar MUGA (P-MUGA) for the assessment of global ejection fraction, and to transthoracic echocardiography for the evaluation of regional wall motion. A 16-segment model was adopted for the comparison with echo regional wall motion. For each of the 16 segments the normal range of T-MUGA ejection fraction was quantified and a normal data file was defined; the average value -2.5 SD was used as the lower threshold to identify abnormal segments. In addition, amplitude images from Fourier analysis were quantified and considered abnormal according to three different thresholds (25, 50 and 75% of the maximum). In a study group of 33 consecutive patients the ejection fraction values of T-MUGA correlated highly with those of P-MUGA (r = 0.93). The regional ejection fraction (according to the normal database) and the amplitude analysis (50% threshold) allowed for the correct identification of 203/226 and 167/226 segments found asynergic by echocardiography, and of 269/302 and 244/302 normal segments, respectively. Therefore, sensitivity, specificity and overall accuracy for detecting regional wall motion abnormalities were 90, 89 and 89% for regional ejection fraction, and 74, 81 and 79% for amplitude analysis, respectively. T-MUGA is a reliable tool for regional wall motion evaluation, well correlated with echocardiography, less subjective and able to provide quantitative data.
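The normal-database rule (mean - 2.5 SD per segment) and the sensitivity/specificity computation in this abstract are simple enough to sketch directly. The NumPy version below is an illustrative reconstruction, not the study's software; array shapes and function names are assumptions.

```python
import numpy as np

def abnormal_segments(seg_ef, normal_db, n_sd=2.5):
    """Flag segments whose ejection fraction falls below the normal-database
    mean minus n_sd standard deviations, computed per segment.
    seg_ef: (n_patients, 16) array; normal_db: (n_normals, 16) array."""
    seg_ef = np.asarray(seg_ef, float)
    lower = normal_db.mean(axis=0) - n_sd * normal_db.std(axis=0)
    return seg_ef < lower                     # boolean abnormality map

def sens_spec(predicted, reference):
    """Sensitivity and specificity of a predicted abnormality map against
    a reference (e.g. echocardiographic) abnormality map."""
    p = np.asarray(predicted, bool).ravel()
    r = np.asarray(reference, bool).ravel()
    sens = (p & r).sum() / r.sum()            # detected / truly abnormal
    spec = (~p & ~r).sum() / (~r).sum()       # cleared / truly normal
    return sens, spec
```

With the study's counts (203/226 asynergic, 269/302 normal), `sens_spec` reproduces the reported 90% sensitivity and 89% specificity for the regional ejection fraction criterion.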
Segmentation and Classification of Burn Color Images
2001-10-25
Begoña Acha, Carmen Serrano (Área de Teoría de la Señal y Comunicaciones), Laura Roa
Effects of theobroxide, a natural product, on the level of endogenous jasmonoids.
Yang, Qing; Gao, Xiquan; Fujino, Yumiko; Matsuura, Hideyuki; Yoshihara, Teruhiko
2004-01-01
The natural potato microtuber-inducing substance theobroxide strongly induces tuber formation in potato (Solanum tuberosum L.) and flower-bud formation in morning glory (Pharbitis nil) plants under non-inducing conditions (long days) (Yoshihara et al., 2000). In the present study, theobroxide was evaluated for its effect on the level of endogenous jasmonoids in different tissues of these two plants. An in vitro bioassay using cultures of single-node segments of potato stems was performed with theobroxide supplemented in the medium. Endogenous jasmonic acid (JA) and its analogue tuberonic acid (TA, 12-hydroxyjasmonic acid) in segments and microtubers were quantitatively analyzed. An increase in the endogenous JA level caused by theobroxide was observed in both segments and microtubers. Endogenous TA was detected only in segments, and its content increased with the concentration of theobroxide. For morning glory, whole plants were sprayed with theobroxide for 1 to 5 weeks under different photoperiods, and endogenous JA in the leaves was quantitatively analyzed. Theobroxide spraying increased the level of endogenous JA in the leaves of plants grown under both long and short days.
Li, Xianran; Tian, Feng; Huang, Haiyan; Tan, Lubin; Zhu, Zuofeng; Hu, Songnian; Sun, Chuanqing
2008-06-01
To facilitate cloning of the gene(s) underlying gpa7, a deep-coverage BAC library was constructed for an isolate of common wild rice (Oryza rufipogon Griff.) collected from Dongxiang, Jiangxi Province, China (DXCWR). gpa7, a quantitative trait locus for grain number per panicle, is positioned on the short arm of chromosome 7. The BAC library, containing 96,768 clones, represents approximately 18 haploid genome equivalents. The contig spanning the DXCWR gpa7 locus was constructed with a series of ordered markers. The putative physical map near the gpa7 locus of another accession of O. rufipogon (Accession: IRGC 105491) was also isolated in silico. Analysis of the physical maps of gpa7 indicated that a segment of about 150 kb was deleted during the domestication of common wild rice.
NASA Astrophysics Data System (ADS)
Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei
2017-03-01
The temporal focusing two-photon microscope (TFM) is developed to perform depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, due to strong, non-negligible noise and diffraction rings surrounding particles, further research is extremely difficult without a precise particle localization technique. In this paper, we developed a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale-kernel graph-cuts global and local segmentation algorithm, and a kinematic-estimation-based particle tracking method. Both isolated and partially overlapped particles can be accurately identified, with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.
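The abstract's hybrid Kalman filter is not specified in detail; as a hedged stand-in, the sketch below implements a minimal constant-velocity Kalman filter that denoises one particle coordinate over a frame sequence. The function name, noise parameters, and state model are illustrative assumptions, not the paper's filter.

```python
import numpy as np

def kalman_track_1d(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter smoothing noisy 1D particle
    positions. State x = [position, velocity]; only position is observed.
    q: process-noise scale, r: measurement-noise variance (illustrative)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
    H = np.array([[1.0, 0.0]])                # observation matrix
    Q = q * np.eye(2)                         # process noise covariance
    x = np.array([measurements[0], 0.0])      # initial state
    P = np.eye(2)                             # initial state covariance
    out = []
    for z in measurements:
        x = F @ x                             # predict state
        P = F @ P @ F.T + Q                   # predict covariance
        S = H @ P @ H.T + r                   # innovation covariance
        K = (P @ H.T) / S                     # Kalman gain, shape (2, 1)
        x = x + (K * (z - H @ x)).ravel()     # update with measurement z
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

For particles moving at roughly constant velocity between frames, the filtered positions track the true trajectory with noticeably lower error than the raw measurements, which is the role such a filter plays ahead of segmentation and tracking.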
TANGO: a generic tool for high-throughput 3D image analysis for studying nuclear organization.
Ollion, Jean; Cochennec, Julien; Loll, François; Escudé, Christophe; Boudier, Thomas
2013-07-15
The cell nucleus is a highly organized cellular organelle that contains the genetic material. The study of nuclear architecture has become an important field of cellular biology. Extracting quantitative data from 3D fluorescence imaging helps understand the functions of different nuclear compartments. However, such approaches are limited by the requirement for processing and analyzing large sets of images. Here, we describe Tools for Analysis of Nuclear Genome Organization (TANGO), an image analysis tool dedicated to the study of nuclear architecture. TANGO is a coherent framework allowing biologists to perform the complete analysis process of 3D fluorescence images by combining two environments: ImageJ (http://imagej.nih.gov/ij/) for image processing and quantitative analysis and R (http://cran.r-project.org) for statistical processing of measurement results. It includes an intuitive user interface providing the means to precisely build a segmentation procedure and set-up analyses, without possessing programming skills. TANGO is a versatile tool able to process large sets of images, allowing quantitative study of nuclear organization. TANGO is composed of two programs: (i) an ImageJ plug-in and (ii) a package (rtango) for R. They are both free and open source, available (http://biophysique.mnhn.fr/tango) for Linux, Microsoft Windows and Macintosh OSX. Distribution is under the GPL v.2 licence. thomas.boudier@snv.jussieu.fr Supplementary data are available at Bioinformatics online.
3D Slicer as an Image Computing Platform for the Quantitative Imaging Network
Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron
2012-01-01
Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. 
To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer. PMID:22770690
Goel, Utsav O; Maddox, Michael M; Elfer, Katherine N; Dorsey, Philip J; Wang, Mei; McCaslin, Ian Ross; Brown, J Quincy; Lee, Benjamin R
2014-01-01
Reduction of warm ischemia time during partial nephrectomy (PN) is critical to minimizing ischemic damage and improving postoperative kidney function, while maintaining tumor resection efficacy. Recently, methods for localizing the effects of warm ischemia to the region of the tumor via selective clamping of higher-order segmental artery branches have been shown to have superior outcomes compared with clamping the main renal artery. However, artery identification can prolong operative time and increase the blood loss and reduce the positive effects of selective ischemia. Quantitative diffuse reflectance spectroscopy (DRS) can provide a convenient, real-time means to aid in artery identification during laparoscopic PN. The feasibility of quantitative DRS for real-time longitudinal measurement of tissue perfusion and vascular oxygenation in laparoscopic nephrectomy was investigated in vivo in six Yorkshire swine kidneys (n = three animals). DRS allowed for rapid identification of ischemic areas after selective vessel occlusion. In addition, the rates of ischemia induction and recovery were compared for main renal artery versus tertiary segmental artery occlusion, and it was found that the tertiary segmental artery occlusion trends toward faster recovery after ischemia, which suggests a potential benefit of selective ischemia. Quantitative DRS could provide a convenient and fast tool for artery identification and evaluation of the depth, spatial extent, and duration of selective tissue ischemia in laparoscopic PN.
NASA Astrophysics Data System (ADS)
Goel, Utsav O.; Maddox, Michael M.; Elfer, Katherine N.; Dorsey, Philip J.; Wang, Mei; McCaslin, Ian Ross; Brown, J. Quincy; Lee, Benjamin R.
2014-10-01
Reduction of warm ischemia time during partial nephrectomy (PN) is critical to minimizing ischemic damage and improving postoperative kidney function, while maintaining tumor resection efficacy. Recently, methods for localizing the effects of warm ischemia to the region of the tumor via selective clamping of higher-order segmental artery branches have been shown to have superior outcomes compared with clamping the main renal artery. However, artery identification can prolong operative time and increase the blood loss and reduce the positive effects of selective ischemia. Quantitative diffuse reflectance spectroscopy (DRS) can provide a convenient, real-time means to aid in artery identification during laparoscopic PN. The feasibility of quantitative DRS for real-time longitudinal measurement of tissue perfusion and vascular oxygenation in laparoscopic nephrectomy was investigated in vivo in six Yorkshire swine kidneys (n=three animals). DRS allowed for rapid identification of ischemic areas after selective vessel occlusion. In addition, the rates of ischemia induction and recovery were compared for main renal artery versus tertiary segmental artery occlusion, and it was found that the tertiary segmental artery occlusion trends toward faster recovery after ischemia, which suggests a potential benefit of selective ischemia. Quantitative DRS could provide a convenient and fast tool for artery identification and evaluation of the depth, spatial extent, and duration of selective tissue ischemia in laparoscopic PN.
Ishibashi, Fumiyuki; Lisauskas, Jennifer B; Kawamura, Akio; Waxman, Sergio
2008-01-01
Yellow plaques seen during coronary angioscopy are thought to be surrogates for superficial intimal lipids in coronary plaque. Given the diffuse and heterogeneous nature of atherosclerosis, yellow plaques in coronaries may appear as several yellow spots on diffuse coronary plaque. We examined the topographic association of yellow plaques with coronary plaque. In 40 non-severely stenotic ex-vivo coronary segments (average length: 52.2 +/- 3.1 mm), yellow plaques were examined by angioscopy with quantitative colorimetry. The segments were cut perpendicular to the long axis of the vessel at 2 mm intervals, and 1045 slides with 5 microm thick tissue sections covering the whole segments were prepared. To construct the plaque surface, each tissue slice was considered representative of the adjacent 2 mm. The circumference of the lumen and the lumen border of plaque were measured in each slide, and the plaque surface region was constructed. Coronary plaque was present in 37 (93%) of 40 segments and consisted of a single mass [39.9 +/- 3.9 (0-100) mm, 311.3 +/- 47.4 (0.0-1336.2) mm2]. In 30 (75%) segments, multiple (2-9) yellow plaques were detected on a mass of coronary plaque. The number of yellow plaques correlated positively with coronary plaque surface area (r = 0.77, P < 0.0001). Yellow plaques detected by angioscopy with quantitative colorimetry, some of which are associated with lipid cores underneath thin fibrous caps, may be used to assess the extent of coronary plaque. Further research using angioscopy could be of value in studying the association of high-risk coronaries with acute coronary syndromes.
Inferior vena cava segmentation with parameter propagation and graph cut.
Yan, Zixu; Chen, Feng; Wu, Fa; Kong, Dexing
2017-09-01
The inferior vena cava (IVC) is one of the vital veins inside the human body. Accurate segmentation of the IVC from contrast-enhanced CT images is of great importance. This extraction not only helps the physician understand its quantitative features such as blood flow and volume, but is also helpful during hepatic preoperative planning. However, manual delineation of the IVC is time-consuming and poorly reproducible. In this paper, we propose a novel method to segment the IVC with minimal user interaction. The proposed method performs the segmentation block by block between user-specified beginning and end masks. At each stage, the proposed method builds the segmentation model based on information from image regional appearances, image boundaries, and a prior shape. The intensity range and the prior shape for this segmentation model are estimated based on the segmentation result from the last block, or from the user-specified beginning mask at the first stage. Then, the proposed method minimizes the energy function and generates the segmentation result for the current block using graph cut. Finally, a backward tracking step from the end of the IVC is performed if necessary. We have tested our method on 20 clinical datasets and compared our method to three other vessel extraction approaches. The evaluation was performed using three quantitative metrics: the Dice coefficient (Dice), the mean symmetric distance (MSD), and the Hausdorff distance (MaxD). The proposed method has achieved a Dice of [Formula: see text], an MSD of [Formula: see text] mm, and a MaxD of [Formula: see text] mm, respectively, in our experiments. The proposed approach can achieve a sound performance with a relatively low computational cost and a minimal user interaction. The proposed algorithm has high potential to be applied for the clinical applications in the future.
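Two of the three evaluation metrics named above are standard and easy to state in code. The NumPy sketch below implements the Dice coefficient and a brute-force symmetric Hausdorff distance on binary masks; it is an illustrative reconstruction (the mean symmetric distance is omitted, and function names are assumptions), not the paper's evaluation code.

```python
import numpy as np

def dice(a, b):
    """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the voxel sets of two binary
    masks, in voxel units (brute force; fine for small masks)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    # pairwise Euclidean distances between the two point sets
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    # max over each set of the nearest-neighbour distance to the other set
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

For clinical-size CT volumes the pairwise-distance matrix would be too large; a distance-transform-based implementation is the usual choice, but the quantities computed are the same.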
[Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].
Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie
At present, there are no accurate, quantitative methods for the determination of cardiac mechanical synchronism, and quantitative determination of the synchronization function of the four cardiac cavities from medical images has great clinical value. This paper uses whole-heart ultrasound image sequences and segments the left and right atria and left and right ventricles in each frame. After segmentation, the number of pixels in each cavity in each frame is recorded, and the areas of the four cavities across the image sequence are thereby obtained. The area-change curves of the four cavities are then extracted, yielding the synchronization information of the four cavities. Because of the low SNR of ultrasound images, the boundary lines of the cardiac cavities are vague, so the extraction of cardiac contours remains a challenging problem. Therefore, ASM model information is added to the traditional level set method to constrain the curve evolution process. According to the experimental results, the improved method increases the accuracy of the segmentation. Furthermore, based on the ventricular segmentation, the right and left ventricular systolic functions are evaluated, mainly according to the area changes. The synchronization of the four cavities of the heart is estimated based on the area changes and the volume changes.
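The area-curve step described above is straightforward once per-frame labels exist: count the pixels of each cavity in each frame, then compare the resulting curves. The sketch below is an illustrative reconstruction, not the authors' code; the label convention, function names, and the zero-lag correlation as a synchrony measure are assumptions.

```python
import numpy as np

def area_curves(label_seq, labels=(1, 2, 3, 4)):
    """Pixel-count area curve for each labelled cavity across frames.
    label_seq: iterable of 2D integer label maps, one per frame."""
    return {l: np.array([(frame == l).sum() for frame in label_seq])
            for l in labels}

def synchrony(curve_a, curve_b):
    """Zero-lag normalised cross-correlation of two area curves
    (1.0 = the two cavities contract and expand in phase)."""
    a = curve_a - curve_a.mean()
    b = curve_b - curve_b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

A lagged cross-correlation (or the phase from a Fourier analysis of each curve) would additionally quantify *how much* one cavity leads or lags another.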
Tracking and Motion Analysis of Crack Propagations in Crystals for Molecular Dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsap, L V; Duchaineau, M; Goldgof, D B
2001-05-14
This paper presents a quantitative analysis for a discovery in molecular dynamics. Recent simulations have shown that velocities of crack propagations in crystals under certain conditions can become supersonic, which is contrary to classical physics. We present a framework for tracking and motion analysis of crack propagations in crystals. It includes line segment extraction based on Canny edge maps, feature selection based on physical properties, and subsequent tracking of primary and secondary wavefronts. This tracking is completely automated; it runs in real time on three 834-image sequences using forty 250 MHz processors. Results supporting physical observations are presented in terms of both feature tracking and velocity analysis.
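Once wavefront positions have been tracked frame to frame, the velocity analysis reduces to fitting position against time. A minimal sketch (illustrative function names; the supersonic check simply compares the fitted slope against a caller-supplied reference sound speed, which is not taken from the paper):

```python
import numpy as np

def wavefront_velocity(times, positions):
    """Least-squares velocity of a tracked wavefront: the slope of a
    degree-1 polynomial fit of position vs. time."""
    slope, _intercept = np.polyfit(np.asarray(times, float),
                                   np.asarray(positions, float), 1)
    return slope

def is_supersonic(times, positions, sound_speed):
    """Crude check of the paper's central claim: does the fitted crack
    propagation speed exceed a given sound speed in the crystal?"""
    return abs(wavefront_velocity(times, positions)) > sound_speed
```

In practice one would fit over a sliding window, since the propagation speed changes as the crack accelerates.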
Rashno, Abdolreza; Nazari, Behzad; Koozekanani, Dara D.; Drayna, Paul M.; Sadri, Saeed; Rabbani, Hossein
2017-01-01
A fully-automated method based on graph shortest path, graph cut and neutrosophic (NS) sets is presented for fluid segmentation in OCT volumes for exudative age related macular degeneration (EAMD) subjects. The proposed method includes three main steps: 1) The inner limiting membrane (ILM) and the retinal pigment epithelium (RPE) layers are segmented using proposed methods based on graph shortest path in the NS domain. A flattened RPE boundary is calculated such that all three types of fluid regions, intra-retinal, sub-retinal and sub-RPE, are located above it. 2) Seed points for fluid (object) and tissue (background) are initialized for graph cut by the proposed automated method. 3) A new cost function is proposed in kernel space, and is minimized with max-flow/min-cut algorithms, leading to a binary segmentation. Important properties of the proposed steps are proven and the quantitative performance of each step is analyzed separately. The proposed method is evaluated using a publicly available dataset referred to as Optima and a local dataset from the UMN clinic. For fluid segmentation in 2D individual slices, the proposed method outperforms the previously proposed methods by 18% and 21% with respect to the Dice coefficient and sensitivity, respectively, on the Optima dataset, and by 16%, 11% and 12% with respect to the Dice coefficient, sensitivity and precision, respectively, on the local UMN dataset. Finally, for 3D fluid volume segmentation, the proposed method achieves a true positive rate (TPR) and false positive rate (FPR) of 90% and 0.74%, respectively, with a correlation of 95% between automated and expert manual segmentations using linear regression analysis. PMID:29059257
Automatic segmentation and reconstruction of the cortex from neonatal MRI.
Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Joseph V
2007-11-15
Segmentation and reconstruction of cortical surfaces from magnetic resonance (MR) images are more challenging for developing neonates than for adults. This is mainly due to the dynamic changes in the contrast between gray matter (GM) and white matter (WM) in both T1- and T2-weighted images (T1w and T2w) during brain maturation. In particular, in neonatal T2w images WM typically has higher signal intensity than GM. This causes mislabeled voxels during cortical segmentation, especially in the cortical regions of the brain and in particular at the interface between GM and cerebrospinal fluid (CSF). We propose an automatic segmentation algorithm that detects these mislabeled voxels and corrects errors caused by partial volume effects. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic expectation maximization (EM) scheme. Quantitative validation against manual segmentation demonstrates good performance (mean Dice value: 0.758 +/- 0.037 for GM and 0.794 +/- 0.078 for WM). The inner, central and outer cortical surfaces are then reconstructed using implicit surface evolution. A landmark study is performed to verify the accuracy of the reconstructed cortex (mean surface reconstruction error: 0.73 mm for the inner surface and 0.63 mm for the outer). Both segmentation and reconstruction have been tested on 25 neonates with gestational ages ranging from approximately 27 to 45 weeks. This preliminary analysis confirms previous findings that cortical surface area and curvature increase with age, and that surface area scales to cerebral volume according to a power law, while cortical thickness is not related to age or brain growth.
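The Dice value reported above is straightforward to compute from a pair of binary label masks. A minimal sketch, assuming the masks are flattened to equal-length sequences of 0/1:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks
    (flat sequences of 0/1): 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # convention: two empty masks agree perfectly
    return 2.0 * inter / total if total else 1.0
```

A value of 1.0 means perfect overlap with the manual segmentation; 0.0 means no overlap at all.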
Paquette, Philippe; El Khamlichi, Youssef; Lamontagne, Martin; Higgins, Johanne; Gagnon, Dany H
2017-08-01
Quantitative ultrasound imaging is gaining popularity in research and clinical settings to measure the neuromechanical properties of the peripheral nerves, such as their capability to glide in response to body segment movement. Increasing evidence suggests that impaired median nerve longitudinal excursion is associated with carpal tunnel syndrome. To date, psychometric properties of longitudinal nerve excursion measurements using quantitative ultrasound imaging have not been extensively investigated. This study investigates the convergent validity of the longitudinal nerve excursion by comparing measures obtained using quantitative ultrasound imaging with those determined with a motion analysis system. A 38-cm long rigid nerve-phantom model was used to assess the longitudinal excursion in a laboratory environment. The nerve-phantom model, immersed in a 20-cm deep container filled with a gelatin-based solution, was moved 20 times using a linear forward and backward motion. Three light-emitting diodes were used to record nerve-phantom excursion with a motion analysis system, while a 5-cm linear transducer allowed simultaneous recording via ultrasound imaging. Both measurement techniques yielded excellent association (r = 0.99) and agreement (mean absolute difference between methods = 0.85 mm; mean relative difference between methods = 7.48%). Small discrepancies were largely found when larger excursions (i.e., >10 mm) were performed, revealing slight underestimation of the excursion by the ultrasound imaging analysis software. Quantitative ultrasound imaging is an accurate method to assess the longitudinal excursion of an in vitro nerve-phantom model and appears relevant for future research protocols investigating the neuromechanical properties of the peripheral nerves.
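Association and agreement figures of this kind (Pearson r, mean absolute difference, mean relative difference between two measurement methods) can be computed as follows. This is a generic sketch over paired measurements, not the authors' analysis scripts:

```python
from math import sqrt

def agreement(x, y):
    """Pearson r, mean absolute difference, and mean relative
    difference (%) between paired measurements x (reference) and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    r = cov / (sx * sy)
    mad = sum(abs(a - b) for a, b in zip(x, y)) / n
    mrd = 100.0 * sum(abs(a - b) / a for a, b in zip(x, y)) / n
    return r, mad, mrd
```

Here the reference method (e.g., the motion analysis system) is taken as the denominator for the relative difference.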
Automated 3D closed surface segmentation: application to vertebral body segmentation in CT images.
Liu, Shuang; Xie, Yiting; Reeves, Anthony P
2016-05-01
A fully automated segmentation algorithm, progressive surface resolution (PSR), is presented in this paper to determine the closed surface of approximately convex blob-like structures that are common in biomedical imaging. The PSR algorithm was applied to the cortical surface segmentation of 460 vertebral bodies on 46 low-dose chest CT images, which can potentially be used for automated bone mineral density measurement and compression fracture detection. The target surface is realized by a closed triangular mesh, which thereby guarantees the enclosure. The surface vertices of the triangular mesh representation are constrained along radial trajectories that are uniformly distributed in 3D angle space. The segmentation is accomplished by determining for each radial trajectory the location of its intersection with the target surface. The surface is first initialized based on an input high-confidence boundary image and then resolved progressively based on a dynamic attraction map, in order of decreasing degree of evidence regarding the target surface location. By visual evaluation, the algorithm achieved acceptable segmentation for 99.35% of vertebral bodies. Quantitative evaluation was performed on 46 vertebral bodies and achieved an overall mean Dice coefficient of 0.939 (max = 0.957, min = 0.906, standard deviation = 0.011) using manual annotations as the ground truth. Both visual and quantitative evaluations demonstrate encouraging performance of the PSR algorithm. This novel surface resolution strategy provides uniform angular resolution for the segmented surface, with computational complexity and runtime that are linearly constrained by the total number of vertices of the triangular mesh representation.
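Radial trajectories that are approximately uniform in 3D angle space, as used to constrain the mesh vertices, can be generated with a Fibonacci-spiral construction. This is one common choice; the paper does not specify which sampling scheme the authors used:

```python
from math import pi, sqrt, cos, sin

def radial_directions(n):
    """Approximately uniform unit direction vectors on the sphere
    via the Fibonacci (golden-angle) spiral."""
    golden = pi * (3.0 - sqrt(5.0))  # golden angle in radians
    dirs = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n        # z uniform in (-1, 1)
        r = sqrt(max(0.0, 1.0 - z * z))      # radius of the z-slice
        theta = golden * i                   # azimuth advances by golden angle
        dirs.append((r * cos(theta), r * sin(theta), z))
    return dirs
```

Casting one ray per direction from the structure's centroid and locating its intersection with the boundary then yields a closed surface with uniform angular resolution.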
Semi-automated quantitative Drosophila wings measurements.
Loh, Sheng Yang Michael; Ogawa, Yoshitaka; Kawana, Sara; Tamura, Koichiro; Lee, Hwee Kuan
2017-06-28
Drosophila melanogaster is an important organism used in many fields of biological research such as genetics and developmental biology. Drosophila wings have been widely used to study the genetics of development, morphometrics and evolution. There is therefore much interest in quantifying wing structures of Drosophila. Advancement in technology has increased the ease with which images of Drosophila can be acquired. However, such studies have been limited by the slow and tedious process of acquiring phenotypic data. We have developed a system that automatically detects and measures key points and vein segments on a Drosophila wing. Key points are detected by performing image transformations and template matching on Drosophila wing images, while vein segments are detected using an Active Contour algorithm. The accuracy of our key point detection was compared against key point annotations of users. We also performed key point detection using different training data sets of Drosophila wing images. We compared our software with an existing automated image analysis system for Drosophila wings and showed that our system performs better than the state of the art. Vein segments were manually measured and compared against the measurements obtained from our system. Our system was able to detect specific key points and vein segments from Drosophila wing images with high accuracy.
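Template matching of the kind used for key point detection can be sketched as a brute-force normalized cross-correlation over small grayscale arrays. This is illustrative only; the authors' image transformations and wing templates are not specified here:

```python
from math import sqrt

def match_template(image, template):
    """Return the (row, col) offset with the highest normalized
    cross-correlation between template and an image window."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    tvals = [v for row in template for v in row]
    tmean = sum(tvals) / len(tvals)
    tnorm = sqrt(sum((v - tmean) ** 2 for v in tvals))
    best, best_pos = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            win = [image[r + i][c + j] for i in range(th) for j in range(tw)]
            wmean = sum(win) / len(win)
            wnorm = sqrt(sum((v - wmean) ** 2 for v in win))
            if wnorm == 0 or tnorm == 0:
                continue  # flat window: correlation undefined
            score = sum((w - wmean) * (t - tmean)
                        for w, t in zip(win, tvals)) / (wnorm * tnorm)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Real systems compute this over pyramids or in the Fourier domain for speed; the exhaustive loop above makes the definition explicit.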
Automatic cortical segmentation in the developing brain.
Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Jo V
2007-01-01
The segmentation of neonatal cortex from magnetic resonance (MR) images is much more challenging than the segmentation of cortex in adults. The main reason is the inverted contrast between grey matter (GM) and white matter (WM) that occurs when myelination is incomplete. This causes mislabeled partial volume voxels, especially at the interface between GM and cerebrospinal fluid (CSF). We propose a fully automatic cortical segmentation algorithm, detecting these mislabeled voxels using a knowledge-based approach and correcting errors by adjusting local priors to favor the correct classification. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic EM scheme. The segmentation algorithm has been tested on 25 neonates with the gestational ages ranging from approximately 27 to 45 weeks. Quantitative comparison to the manual segmentation demonstrates good performance of the method (mean Dice similarity: 0.758 +/- 0.037 for GM and 0.794 +/- 0.078 for WM).
Quantitative Assessment of Heterogeneity in Tumor Metabolism Using FDG-PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vriens, Dennis, E-mail: d.vriens@nucmed.umcn.nl; Disselhorst, Jonathan A.; Oyen, Wim J.G.
2012-04-01
Purpose: [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) images are usually quantitatively analyzed in 'whole-tumor' volumes of interest. Parameters determined with dynamic PET acquisitions, such as the Patlak glucose metabolic rate (MRglc) and the pharmacokinetic rate constants of two-tissue compartment modeling, are also most often derived per lesion. We propose segmentation of tumors to determine tumor heterogeneity, potentially useful for dose painting in radiotherapy and for elucidating mechanisms of FDG uptake. Methods and Materials: In 41 patients with 104 lesions, dynamic FDG-PET was performed. On MRglc images, tumors were segmented into quartiles of background-subtracted maximum MRglc (0%-25%, 25%-50%, 50%-75%, and 75%-100%). Pharmacokinetic analysis was performed using an irreversible two-tissue compartment model in the three segments with highest MRglc to determine the rate constants of FDG metabolism. Results: From the highest to the lowest quartile, significant decreases of uptake (K1), washout (k2), and phosphorylation (k3) rate constants were seen, with significant increases in tissue blood volume fraction (Vb). Conclusions: Tumor regions with the highest MRglc are characterized by high cellular uptake and phosphorylation rate constants with relatively low blood volume fractions. In regions with less metabolic activity, the blood volume fraction increases and the cellular uptake, washout, and phosphorylation rate constants decrease. These results support the hypothesis that the regional tumor glucose phosphorylation rate is not dependent on the transport of nutrients (i.e., FDG) to the tumor.
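The Patlak glucose metabolic rate mentioned above is the slope of the Patlak plot, y = Ct(t)/Cp(t) against x = (cumulative integral of Cp)/Cp(t). A minimal sketch of the graphical analysis on synthetic inputs; real analyses fit only the late, linear portion of the plot:

```python
def patlak_slope(t, cp, ct):
    """Patlak graphical analysis: slope of Ct/Cp versus
    (integral of Cp)/Cp is the net influx constant Ki."""
    # cumulative trapezoidal integral of the plasma input function Cp
    integral, xs, ys = 0.0, [], []
    for i in range(1, len(t)):
        integral += 0.5 * (cp[i] + cp[i - 1]) * (t[i] - t[i - 1])
        xs.append(integral / cp[i])
        ys.append(ct[i] / cp[i])
    # ordinary least-squares slope
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)
```

With a constant input function the Patlak x-axis reduces to time, so a tissue curve Ct = Ki*t + V recovers Ki exactly, which makes a convenient sanity check.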
Estimation of 3D reconstruction errors in a stereo-vision system
NASA Astrophysics Data System (ADS)
Belhaoua, A.; Kohler, S.; Hirsch, E.
2009-06-01
The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure of manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules. Thus, the goal is to evaluate the error independently for each step of the stereo-vision based 3D reconstruction (e.g., calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze in particular the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, making it possible to evaluate their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.
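The idea of using fitting parameters of edge points as a quality measure can be illustrated with a least-squares line fit whose residual standard deviation serves as a segmentation-error estimate. This is a simplified sketch; the paper also treats curves and propagates the errors through the full reconstruction:

```python
from math import sqrt

def fit_line_error(points):
    """Least-squares fit y = a*x + b to edge points assumed collinear;
    returns (a, b, residual standard deviation) as a quality measure."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    resid = [p[1] - (a * p[0] + b) for p in points]
    dof = max(n - 2, 1)  # two parameters estimated
    return a, b, sqrt(sum(r * r for r in resid) / dof)
```

A large residual standard deviation flags edge points whose localization errors make the fitted feature, and hence the downstream 3D position, unreliable.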
Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi
2013-01-01
Purpose: To develop a robust tool for quantitative in situ pathology that allows visualization of heterogeneous tissue morphology and segmentation and quantification of image features. Materials and Methods: Tissue excised from a genetically engineered mouse model of sarcoma was imaged using a subcellular resolution microendoscope after topical application of a fluorescent anatomical contrast agent: acriflavine. An algorithm based on sparse component analysis (SCA) and the circle transform (CT) was developed for image segmentation and quantification of distinct tissue types. The accuracy of our approach was quantified through simulations of tumor and muscle images. Specifically, tumor, muscle, and tumor+muscle tissue images were simulated because these tissue types were most commonly observed in sarcoma margins. Simulations were based on tissue characteristics observed in pathology slides. The potential clinical utility of our approach was evaluated by imaging excised margins and the tumor bed in a cohort of mice after surgical resection of sarcoma. Results: Simulation experiments revealed that SCA+CT achieved the lowest errors for larger nuclear sizes and for higher contrast ratios (nuclei intensity/background intensity). For imaging of tumor margins, SCA+CT effectively isolated nuclei from tumor, muscle, adipose, and tumor+muscle tissue types. Differences in density were correctly identified with SCA+CT in a cohort of ex vivo and in vivo images, thus illustrating the diagnostic potential of our approach. Conclusion: The combination of a subcellular-resolution microendoscope, acriflavine staining, and SCA+CT can be used to accurately isolate nuclei and quantify their density in anatomical images of heterogeneous tissue. PMID:23824589
Wörz, Stefan; Schenk, Jens-Peter; Alrajab, Abdulsattar; von Tengg-Kobligk, Hendrik; Rohr, Karl; Arnold, Raoul
2016-10-17
Coarctation of the aorta is one of the most common congenital heart diseases. Despite different treatment opportunities, long-term outcome after surgical or interventional therapy is diverse. Serial morphologic follow-up of vessel growth is necessary, because vessel growth cannot be predicted from the primary morphology or the therapeutic option. For the analysis of the long-term outcome after therapy of congenital diseases such as aortic coarctation, accurate 3D geometric analysis of the aorta from follow-up 3D medical image data such as magnetic resonance angiography (MRA) is important. However, for an objective, fast, and accurate 3D geometric analysis, an automatic approach for 3D segmentation and quantification of the aorta from pediatric images is required. We introduce a new model-based approach for the segmentation of the thoracic aorta and its main branches from follow-up pediatric 3D MRA image data. For robust segmentation of vessels even in difficult cases (e.g., neighboring structures), we propose a new extended parametric cylinder model that requires only relatively few model parameters. Moreover, we include a novel adaptive background-masking scheme used for least-squares model fitting, we use a spatial normalization scheme to align the segmentation results from follow-up examinations, and we determine relevant 3D geometric parameters of the aortic arch. We have evaluated our proposed approach using different 3D synthetic images. Moreover, we have successfully applied the approach to follow-up pediatric 3D MRA image data, we have normalized the 3D segmentation results of follow-up images of individual patients, and we have combined the results of all patients. We also present a quantitative evaluation of our approach for four follow-up 3D MRA images of a patient, which confirms that our approach yields accurate 3D segmentation results. An experimental comparison with two previous approaches demonstrates that our approach yields superior results.
From the results, we found that our approach is well suited for the quantification of the 3D geometry of the aortic arch from follow-up pediatric 3D MRA image data. In future work, this will enable investigation of the long-term outcome of different surgical and interventional therapies for aortic coarctation.
Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin
2008-11-01
We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.
Nogami, Yoshie; Ishizu, Tomoko; Atsumi, Akiko; Yamamoto, Masayoshi; Kawamura, Ryo; Seo, Yoshihiro; Aonuma, Kazutaka
2013-03-01
Recently developed vector flow mapping (VFM) enables evaluation of local flow dynamics without angle dependency. This study used VFM to evaluate quantitatively the index of intraventricular haemodynamic kinetic energy in patients with left ventricular (LV) diastolic dysfunction and to compare those with normal subjects. We studied 25 patients with estimated high left atrial (LA) pressure (pseudonormal: PN group) and 36 normal subjects (control group). Left ventricle was divided into basal, mid, and apical segments. Intraventricular haemodynamic energy was evaluated in the dimension of speed, and it was defined as the kinetic energy index. We calculated this index and created time-energy index curves. The time interval from electrocardiogram (ECG) R wave to peak index was measured, and time differences of the peak index between basal and other segments were defined as ΔT-mid and ΔT-apex. In both groups, early diastolic peak kinetic energy index in mid and apical segments was significantly lower than that in the basal segment. Time to peak index did not differ in apex, mid, and basal segments in the control group but was significantly longer in the apex than that in the basal segment in the PN group. ΔT-mid and ΔT-apex were significantly larger in the PN group than the control group. Multiple regression analysis showed sphericity index, E/E' to be significant independent variables determining ΔT apex. Retarded apical kinetic energy fluid dynamics were detected using VFM and were closely associated with LV spherical remodelling in patients with high LA pressure.
Vrzheshch, P V
2015-01-01
A quantitative evaluation of the accuracy of the rapid equilibrium assumption in steady-state enzyme kinetics was obtained for an arbitrary mechanism of an enzyme-catalyzed reaction. This evaluation depends only on the structure and properties of the equilibrium segment, not on the structure and properties of the rest (the stationary part) of the kinetic scheme. The smaller the rate constants of the edges leaving the equilibrium segment relative to those of the edges within it, the higher the accuracy with which intermediate concentrations and reaction velocity are determined under the rapid equilibrium assumption.
Bagci, Ulas; Foster, Brent; Miller-Jaster, Kirsten; Luna, Brian; Dey, Bappaditya; Bishai, William R; Jonsson, Colleen B; Jain, Sanjay; Mollura, Daniel J
2013-07-23
Infectious diseases are the second leading cause of death worldwide. In order to better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterizations is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), there have been many engineering improvements that have significantly enhanced the resolution and contrast of the images, but there are still insufficient computational algorithms available for researchers to use when accurately quantifying imaging data from anatomical structures and functional biological processes. Since the development of such tools may potentially translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline to our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers' understanding of infectious diseases. We tested the utility of our proposed methodology by using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB) for a total of 92, 44, and 24 scans for the respective studies with half of the scans from CT and the other half from PET. 
Institutional Administrative Panel on Laboratory Animal Care approvals were obtained prior to conducting this research. First, the proposed computational framework registered PET and CT images to provide spatial correspondences between images. Second, the lungs from the CT scans were segmented using an interactive region growing (IRG) segmentation algorithm with mathematical morphology operations to avoid false positive (FP) uptake in PET images. Finally, we segmented significant radiotracer uptake from the PET images in lung regions determined from CT and computed metabolic volumes of the significant uptake. All segmentation processes were compared with expert radiologists' delineations (ground truths). Metabolic and gross volumes of lesions were automatically computed with the segmentation processes using PET and CT images, and percentage changes in those volumes over time were calculated. Standardized uptake value (SUV) analysis from PET images was conducted as a complementary quantitative metric for disease severity assessment. Thus, the severity and extent of pulmonary lesions were examined through both PET and CT images using the aforementioned quantification metrics output by the proposed framework. Each animal study was evaluated within the same subject class, and all steps of the proposed methodology were evaluated separately. We quantified the accuracy of the proposed algorithm with respect to state-of-the-art segmentation algorithms. For evaluation of the segmentation results, the Dice similarity coefficient (DSC) was used as an overlap measure and the Hausdorff distance as a shape dissimilarity measure. Significant correlations regarding the estimated lesion volumes were obtained in both CT and PET images with respect to the ground truths (R2 = 0.8922, p < 0.01 and R2 = 0.8664, p < 0.01, respectively).
The segmentation accuracy (DSC (%)) was 93.4±4.5% for normal lung CT scans and 86.0±7.1% for pathological lung CT scans. Experiments showed excellent agreement (all above 85%) with expert evaluations for both structural and functional imaging modalities. Apart from quantitative analysis of each animal, we also qualitatively showed how metabolic volumes changed over time by examining serial PET/CT scans. Evaluation of the registration processes was based on anatomical landmark points precisely defined by expert clinicians. Average errors of 2.66, 3.93, and 2.52 mm were found in the rabbit, ferret, and mouse data, respectively (all within the resolution limits). Quantitative results obtained from the proposed methodology were visually related to the progress and severity of the pulmonary infections as verified by the participating radiologists. Moreover, we demonstrated that lesions due to the infections were metabolically active and appeared multi-focal in nature, and we observed similar patterns in the CT images as well. Consolidation and ground glass opacity were the main abnormal imaging patterns and consistently appeared in all CT images. We also found that the gross and metabolic lesion volume percentages follow the same trend as the SUV-based evaluation in the longitudinal analysis. We explored the feasibility of using PET and CT imaging modalities in three distinct small animal models for two diverse pulmonary infections. We concluded from the clinical findings, derived from the proposed computational pipeline, that PET-CT imaging is an invaluable hybrid modality for tracking pulmonary infections longitudinally in small animals and has great potential to become routinely used in clinics. Our proposed methodology showed that automated computer-aided lesion detection and quantification of pulmonary infections in small animal models are efficient and accurate compared to the clinical standard of manual and semi-automated approaches.
Automated analysis of images in pre-clinical applications can increase the efficiency and quality of pre-clinical findings that ultimately inform downstream experimental design in human clinical studies; this innovation will allow researchers and clinicians to more effectively allocate study resources with respect to research demands without compromising accuracy.
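A region growing step like the IRG lung segmentation described above can be sketched as a breadth-first flood fill from a seed pixel with an intensity tolerance. This is a generic sketch, not the authors' implementation, which additionally applies mathematical morphology operations:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from a seed pixel, accepting 4-connected neighbors
    whose intensity differs from the seed intensity by at most tol."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    mask = [[False] * w for _ in range(h)]
    mask[seed[0]][seed[1]] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr][nc]
                    and abs(image[nr][nc] - seed_val) <= tol):
                mask[nr][nc] = True
                q.append((nr, nc))
    return mask
```

In the interactive setting, the operator supplies the seed (e.g., inside the lung field) and adjusts the tolerance until the grown mask matches the organ boundary.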
Atrioventricular junction (AVJ) motion tracking: a software tool with ITK/VTK/Qt.
Pengdong Xiao; Shuang Leng; Xiaodan Zhao; Hua Zou; Ru San Tan; Wong, Philip; Liang Zhong
2016-08-01
The quantitative measurement of atrioventricular junction (AVJ) motion is an important index of ventricular function over one cardiac cycle, including systole and diastole. In this paper, a software tool that can conduct AVJ motion tracking from cardiovascular magnetic resonance (CMR) images is presented, built using the Insight Segmentation and Registration Toolkit (ITK), the Visualization Toolkit (VTK) and Qt. The software tool is written in C++ using the Visual Studio Community 2013 integrated development environment (IDE), which contains both an editor and a Microsoft compiler. The software package has been successfully implemented. From the software engineering practice, it is concluded that ITK, VTK, and Qt are very handy software systems for implementing automatic image analysis functions for CMR images, such as the quantitative measurement of motion by visual tracking.
A method for three-dimensional quantitative observation of the microstructure of biological samples
NASA Astrophysics Data System (ADS)
Wang, Pengfei; Chen, Dieyan; Ma, Wanyun; Wu, Hongxin; Ji, Liang; Sun, Jialin; Lv, Danyu; Zhang, Lu; Li, Ying; Tian, Ning; Zheng, Jinggao; Zhao, Fengying
2009-07-01
Contemporary biology has developed into the era of cell biology and molecular biology, and researchers now try to study the mechanisms of all kinds of biological phenomena at the microscopic level. Accurate description of the microstructure of biological samples is an urgent need in many biomedical experiments. This paper introduces a method for 3-dimensional quantitative observation of the microstructure of vital biological samples based on two-photon laser scanning microscopy (TPLSM). TPLSM is a novel kind of fluorescence microscopy that offers low optical damage, high resolution, deep penetration depth and suitability for 3-dimensional (3D) imaging. Fluorescently stained samples were observed by TPLSM, and their original shapes were then obtained through 3D image reconstruction. The spatial distribution of all objects in the samples, as well as their volumes, could be derived by image segmentation and mathematical calculation. Thus a quantitatively depicted 3D microstructure of the samples was finally derived. We applied this method to quantitative analysis of the spatial distribution of chromosomes in meiotic mouse oocytes at metaphase, with promising results.
Chen, Jing; Toghi Eshghi, Shadi; Bova, George Steven; Li, Qing Kay; Li, Xingde; Zhang, Hui
2013-12-01
The rapid advancement of high-throughput tools for quantitative measurement of proteins has demonstrated the potential for the identification of proteins associated with cancer. However, quantitative results on cancer tissue specimens are usually confounded by tissue heterogeneity, e.g., regions with cancer usually have significantly higher epithelium content yet lower stromal content. It is therefore necessary to develop a tool to facilitate the interpretation of the results of protein measurements in tissue specimens. Epithelial cell adhesion molecule (EpCAM) and cathepsin L (CTSL) are two epithelial proteins whose expression in normal and tumorous prostate tissues was confirmed by measuring staining intensity with immunohistochemical staining (IHC). The expression of these proteins was measured by ELISA in protein extracts from OCT-embedded frozen prostate tissues. To eliminate the influence of tissue heterogeneity on epithelial protein quantification measured by ELISA, a color-based segmentation method was developed in-house for estimation of epithelium content using H&E histology slides from the same prostate tissues, and the estimated epithelium percentage was used to normalize the ELISA results. The epithelium contents of the same slides were also estimated by a pathologist and used to normalize the ELISA results. The computer-based results were compared with the pathologist's reading. We found that both EpCAM and CTSL levels, as measured by the ELISA assay itself, were greatly affected by the epithelium content of the tissue specimens. Without adjusting for epithelium percentage, both EpCAM and CTSL levels appeared significantly higher in tumor tissues than normal tissues, with a p value less than 0.001. However, after normalization by the epithelium percentage, ELISA measurements of both EpCAM and CTSL were in agreement with IHC staining results, showing a significant increase only in EpCAM with no difference in CTSL expression in cancer tissues.
These results were obtained with normalization by both the computer-estimated and the pathologist-estimated epithelium percentages. Our results show that estimation of tissue epithelium percentage using our color-based segmentation method correlates well with pathologists' estimates. The epithelium content estimated by color-based segmentation may therefore be useful in immuno-based or clinical proteomic analyses of tumor proteins. The code used for epithelium estimation, as well as the micrographs with estimated epithelium content, is available online.
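The abstract does not detail the in-house color rule; as a minimal sketch of the general approach (classify H&E pixels by color, report the epithelial fraction, and use it to normalize an ELISA value), where the blue-dominance rule, the `blue_ratio` threshold, and the background cutoff are all hypothetical choices rather than the paper's:

```python
import numpy as np

def epithelium_fraction(rgb, blue_ratio=1.1, white_cutoff=240):
    """Estimate the epithelial fraction of an H&E image.

    Illustrative rule only: hematoxylin-rich (purple) pixels are counted
    as epithelium when blue clearly dominates red; near-white background
    is excluded. Both thresholds are hypothetical, not the paper's.
    """
    rgb = np.asarray(rgb, dtype=float)
    r, b = rgb[..., 0], rgb[..., 2]
    tissue = rgb.sum(axis=-1) < 3 * white_cutoff   # drop near-white background
    epithelial = (b > blue_ratio * r) & tissue     # purple, hematoxylin-rich pixels
    n_tissue = tissue.sum()
    return float(epithelial.sum() / n_tissue) if n_tissue else 0.0

def normalize_elisa(measured_level, epithelium_frac):
    """Normalize an ELISA reading by the estimated epithelium fraction."""
    return measured_level / epithelium_frac if epithelium_frac > 0 else float("nan")
```

In this scheme a tumor specimen with twice the epithelium content of a normal one would have its raw ELISA value halved relative to it after normalization, which is exactly the confound the paper corrects for.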
Quantitative analysis of cardiovascular MR images.
van der Geest, R J; de Roos, A; van der Wall, E E; Reiber, J H
1997-06-01
The diagnosis of cardiovascular disease requires the precise assessment of both morphology and function. Nearly all aspects of cardiovascular function and flow can be quantified nowadays with fast magnetic resonance (MR) imaging techniques. Conventional and breath-hold cine MR imaging allow the precise and highly reproducible assessment of global and regional left ventricular function. During the same examination, velocity encoded cine (VEC) MR imaging provides measurements of blood flow in the heart and great vessels. Quantitative image analysis often still relies on manual tracing of contours in the images. Reliable automated or semi-automated image analysis software would be very helpful to overcome the limitations associated with the manual and tedious processing of the images. Recent progress in MR imaging of the coronary arteries and myocardial perfusion imaging with contrast media, along with the further development of faster imaging sequences, suggests that MR imaging could evolve into a single technique ('one stop shop') for the evaluation of many aspects of heart disease. As a result, it is very likely that the need for automated image segmentation and analysis software algorithms will further increase. In this paper, developments toward automated image analysis and semi-automated contour detection for cardiovascular MR imaging are presented.
Single-molecule Protein Unfolding in Solid State Nanopores
Talaga, David S.; Li, Jiali
2009-01-01
We use single silicon nitride nanopores to study folded, partially folded and unfolded single proteins by measuring their excluded volumes. The DNA-calibrated translocation signals of β-lactoglobulin and histidine-containing phosphocarrier protein match quantitatively with those predicted by a simple sum of the partial volumes of the amino acids in the polypeptide segment inside the pore when translocation stalls due to the primary charge sequence. Our analysis suggests that the majority of the protein molecules were linear or looped during translocation and that the electrical forces present under physiologically relevant potentials can unfold proteins. Our results show that the nanopore translocation signals are sensitive enough to distinguish the folding state of a protein and to distinguish between proteins based on the excluded volume of a local segment of the polypeptide chain that transiently stalls in the nanopore due to the primary sequence of charges. PMID:19530678
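The excluded-volume prediction described above (a simple sum of the partial volumes of the residues inside the pore) can be sketched as follows; the per-residue volume table here is purely illustrative and should be replaced with tabulated amino acid partial volumes:

```python
# Hypothetical per-residue partial volumes in nm^3, for illustration only;
# real tabulated amino acid volumes should be substituted.
PARTIAL_VOLUME = {"G": 0.060, "A": 0.087, "L": 0.168, "K": 0.169}

def excluded_volume(segment):
    """Predicted excluded volume of the polypeptide segment inside the
    pore: a simple sum of per-residue partial volumes."""
    return sum(PARTIAL_VOLUME[residue] for residue in segment)
```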
Functional Advantages of Conserved Intrinsic Disorder in RNA-Binding Proteins.
Varadi, Mihaly; Zsolyomi, Fruzsina; Guharoy, Mainak; Tompa, Peter
2015-01-01
Proteins form large macromolecular assemblies with RNA that govern essential molecular processes. RNA-binding proteins have often been associated with conformational flexibility, yet the extent and functional implications of their intrinsic disorder have never been fully assessed. Here, through large-scale analysis of comprehensive protein sequence and structure datasets we demonstrate the prevalence of intrinsic structural disorder in RNA-binding proteins and domains. We addressed their functionality through a quantitative description of the evolutionary conservation of disordered segments involved in binding, and investigated the structural implications of flexibility in terms of conformational stability and interface formation. We conclude that the functional role of intrinsically disordered protein segments in RNA-binding is two-fold: first, these regions establish extended, conserved electrostatic interfaces with RNAs via induced fit. Second, conformational flexibility enables them to target different RNA partners, providing multi-functionality, while also ensuring specificity. These findings emphasize the functional importance of intrinsically disordered regions in RNA-binding proteins.
Major advances in testing of dairy products: milk component and dairy product attribute testing.
Barbano, D M; Lynch, J M
2006-04-01
Milk component analysis is relatively unusual in the field of quantitative analytical chemistry because an analytical test result determines the allocation of very large amounts of money between buyers and sellers of milk. Therefore, there is high incentive to develop and refine these methods to achieve a level of analytical performance rarely demanded of most methods or laboratory staff working in analytical chemistry. In the last 25 yr, well-defined statistical methods to characterize and validate analytical method performance, combined with significant improvements in both the chemical and instrumental methods, have allowed achievement of improved analytical performance for payment testing. A shift from marketing commodity dairy products to the development, manufacture, and marketing of value-added dairy foods for specific market segments has created a need for instrumental and sensory approaches and quantitative data to support product development and marketing. Bringing together sensory data from quantitative descriptive analysis and analytical data from gas chromatography-olfactometry for identification of odor-active compounds in complex natural dairy foods has enabled the sensory scientist and analytical chemist to work together to improve the consistency and quality of dairy food flavors.
The Edge Detectors Suitable for Retinal OCT Image Segmentation
Yang, Jing; Gao, Qian; Zhou, Sheng
2017-01-01
Retinal layer thickness measurement offers important information for reliable diagnosis of retinal diseases and for the evaluation of disease development and medical treatment responses. This task critically depends on the accurate edge detection of the retinal layers in OCT images. Here, we sought to identify the most suitable edge detectors for the retinal OCT image segmentation task. The three most promising edge detection algorithms were identified in the related literature: the Canny edge detector, the two-pass method, and the EdgeFlow technique. The quantitative evaluation results show that the two-pass method consistently outperforms the Canny detector and the EdgeFlow technique in delineating the retinal layer boundaries in OCT images. In addition, the mean localization deviation metrics show that the two-pass method caused the smallest edge-shifting problem. These findings suggest that the two-pass method is the best of the three algorithms for detecting retinal layer boundaries. The overall better performance of the Canny and two-pass methods over the EdgeFlow technique implies that OCT images contain more intensity-gradient information than texture changes along the retinal layer boundaries. The results will guide our future efforts in the quantitative analysis of retinal OCT images for the effective use of OCT technologies in the field of ophthalmology. PMID:29065594
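The abstract's observation that OCT layer boundaries carry intensity-gradient information can be illustrated with the gradient stage common to Canny-style detectors; a minimal sketch using 3x3 Sobel kernels (the threshold is an arbitrary parameter, and this is not the paper's implementation):

```python
import numpy as np

def sobel_gradient_magnitude(img):
    """Gradient magnitude from 3x3 Sobel kernels, the first stage of a
    Canny-style detector (a sketch, not the paper's implementation)."""
    img = np.asarray(img, dtype=float)
    kx = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
    ky = kx.T
    padded = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            patch = padded[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def edge_map(img, thresh):
    """Binary edge map: pixels whose gradient magnitude exceeds `thresh`."""
    return sobel_gradient_magnitude(img) > thresh
```

A full Canny detector would follow this with non-maximum suppression and hysteresis thresholding; the gradient magnitude alone already localizes a step boundary between two retinal layers of different reflectivity.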
Paracka, Lejla; Wegner, Florian; Blahak, Christian; Abdallat, Mahmoud; Saryyeva, Assel; Dressler, Dirk; Karst, Matthias; Krauss, Joachim K
2017-01-01
Abnormalities in the somatosensory system are increasingly being recognized in patients with dystonia. The aim of this study was to investigate whether sensory abnormalities are confined to the dystonic body segments or whether there is wider involvement in patients with idiopathic dystonia. For this purpose, we recruited 20 patients: 8 with generalized dystonia, 5 with segmental dystonia involving the upper extremity, and 7 with cervical dystonia. In total, 13 patients had upper extremity involvement. We used Quantitative Sensory Testing (QST) at the back of the hand in all patients and at the shoulder in patients with cervical dystonia. The main findings on the hand QST were an impaired cold detection threshold (CDT), dynamic mechanical allodynia (DMA), and an impaired thermal sensory limen (TSL). The alterations were present on both hands but were more pronounced on the side more affected by dystonia. Patients with cervical dystonia showed a reduced CDT and hot detection threshold (HDT) and enhanced TSL and DMA at the back of the hand, whereas the shoulder QST revealed only an increased cold pain threshold and DMA. In summary, QST clearly shows distinct sensory abnormalities in patients with idiopathic dystonia, which may also manifest in body regions without evident dystonia. Further studies with larger groups of dystonia patients are needed to confirm the consistency of these findings.
Automated podosome identification and characterization in fluorescence microscopy images.
Meddens, Marjolein B M; Rieger, Bernd; Figdor, Carl G; Cambi, Alessandra; van den Dries, Koen
2013-02-01
Podosomes are cellular adhesion structures involved in matrix degradation and invasion that comprise an actin core and a ring of cytoskeletal adaptor proteins. They are most often identified by staining with phalloidin, which binds F-actin and therefore visualizes the core. However, not only podosomes but also many other cytoskeletal structures contain actin, which makes podosome segmentation by automated image processing difficult. Here, we have developed a quantitative image analysis algorithm that is optimized to identify podosome cores within a typical sample stained with phalloidin. By sequential local and global thresholding, our analysis identifies up to 76% of podosome cores while excluding other F-actin-based structures. Based on the overlap in podosome identifications and quantification of podosome numbers, our algorithm performs as well as three experts. Using our algorithm, we show effects of actin polymerization and myosin II inhibition on the actin intensity in both the podosome core and the associated actin network. Furthermore, by expanding the core segmentations, we reveal a previously unappreciated differential distribution of cytoskeletal adaptor proteins within the podosome ring. These applications illustrate that our algorithm is a valuable tool for rapid and accurate large-scale analysis of podosomes to increase our understanding of these characteristic adhesion structures.
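The sequential local and global thresholding mentioned above can be sketched generically; the global fraction, window size, and offset below are hypothetical parameters, not the paper's:

```python
import numpy as np

def global_then_local(img, global_frac=0.5, window=3, local_offset=0.0):
    """Sequential thresholding sketch: a global cut keeps bright candidate
    pixels, and each survivor must also exceed the mean of its local
    window. All parameters are illustrative, not the paper's."""
    img = np.asarray(img, dtype=float)
    candidates = img > global_frac * img.max()     # global threshold
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=bool)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            # local threshold: the mean of the surrounding window
            local_mean = padded[y:y + window, x:x + window].mean()
            out[y, x] = candidates[y, x] and img[y, x] > local_mean + local_offset
    return out
```

The combination matters for the problem the abstract describes: a global cut alone would also keep bright stress fibers, while the local test additionally requires a pixel to stand out from its immediate neighborhood, as a compact podosome core does.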
Analysis of anabolic steroids in hair: time courses in guinea pigs.
Shen, Min; Xiang, Ping; Yan, Hui; Shen, Baohua; Wang, Mengye
2009-09-01
Sensitive, specific, and reproducible methods for the quantitative determination of eight anabolic steroids in guinea pig hair have been developed using LC/MS/MS and GC/MS/MS. Methyltestosterone, stanozolol, methandienone, nandrolone, trenbolone, boldenone, methenolone and DHEA were administered intraperitoneally in guinea pigs. After the first injection, black hair segments were collected on shaved areas of skin. The analysis of these segments revealed the distribution of anabolic steroids in the guinea pig hair. The major components in hair are the parent anabolic steroids. The time courses of the concentrations of the steroids in hair (except methenolone, which does not deposit in hair) demonstrated that the peak concentrations were reached on days 2-4, except stanozolol, which peaked on day 10 after administration. The concentrations in hair appeared to be related to the physicochemical properties of the drug compound and to the dosage. These studies on the distribution of drugs in the hair shaft and on the time course of their concentration changes provide information relevant to the optimal time and method of collecting hair samples. Such studies also provide basic data that will be useful in the application of hair analysis in the control of doping and in the interpretation of results.
Four-point bending as a method for quantitatively evaluating spinal arthrodesis in a rat model.
Robinson, Samuel T; Svet, Mark T; Kanim, Linda A; Metzger, Melodie F
2015-02-01
The most common method of evaluating the success (or failure) of rat spinal fusion procedures is manual palpation testing. Whereas manual palpation provides only a subjective binary answer (fused or not fused) regarding the success of a fusion surgery, mechanical testing can provide more quantitative data by assessing variations in strength among treatment groups. We here describe a mechanical testing method to quantitatively assess single-level spinal fusion in a rat model, to improve on the binary and subjective nature of manual palpation as an end point for fusion-related studies. We tested explanted lumbar segments from Sprague-Dawley rat spines after single-level posterolateral fusion procedures at L4-L5. Segments were classified as 'not fused,' 'restricted motion,' or 'fused' by using manual palpation testing. After thorough dissection and potting of the spine, 4-point bending in flexion then was applied to the L4-L5 motion segment, and stiffness was measured as the slope of the moment-displacement curve. Results demonstrated statistically significant differences in stiffness among all groups, which were consistent with preliminary grading according to manual palpation. In addition, the 4-point bending results provided quantitative information regarding the quality of the bony union formed and therefore enabled the comparison of fused specimens. Our results demonstrate that 4-point bending is a simple, reliable, and effective way to describe and compare results among rat spines after fusion surgery.
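Stiffness measured as the slope of the moment-displacement curve amounts to a least-squares line fit; a generic sketch (units follow the inputs, e.g. N*mm of moment per mm of displacement):

```python
import numpy as np

def bending_stiffness(displacement, moment):
    """Stiffness as the slope of the moment-displacement curve from
    4-point bending, estimated by an ordinary least-squares line fit."""
    slope, _intercept = np.polyfit(displacement, moment, 1)
    return slope
```

A fused segment would show a markedly steeper slope than a 'not fused' one, which is what turns the binary palpation outcome into a continuous, comparable quantity.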
Anguera, M. Teresa; Portell, Mariona; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana
2018-01-01
Indirect observation is a recent concept in systematic observation. It largely involves analyzing textual material generated either indirectly from transcriptions of audio recordings of verbal behavior in natural settings (e.g., conversation, group discussions) or directly from narratives (e.g., letters of complaint, tweets, forum posts). It may also feature seemingly unobtrusive objects that can provide relevant insights into daily routines. All these materials constitute an extremely rich source of information for studying everyday life, and they are continuously growing with the burgeoning of new technologies for data recording, dissemination, and storage. Narratives are an excellent vehicle for studying everyday life, and quantitization is proposed as a means of integrating qualitative and quantitative elements. However, this analysis requires a structured system that enables researchers to analyze varying forms and sources of information objectively. In this paper, we present a methodological framework detailing the steps and decisions required to quantitatively analyze a set of data that was originally qualitative. We provide guidelines on study dimensions, text segmentation criteria, ad hoc observation instruments, data quality controls, and coding and preparation of text for quantitative analysis. The quality control stage is essential to ensure that the code matrices generated from the qualitative data are reliable. We provide examples of how an indirect observation study can produce data for quantitative analysis and also describe the different software tools available for the various stages of the process. The proposed method is framed within a specific mixed methods approach that involves collecting qualitative data and subsequently transforming these into matrices of codes (not frequencies) for quantitative analysis to detect underlying structures and behavioral patterns. 
The data collection and quality control procedures fully meet the requirement of flexibility and provide new perspectives on data integration in the study of biopsychosocial aspects in everyday contexts. PMID:29441028
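The transformation of coded qualitative data into matrices of codes (not frequencies) can be sketched as an indicator matrix over an observation instrument's codebook; the segment codings and code names below are hypothetical:

```python
def code_matrix(coded_segments, codebook):
    """Build a segments-by-codes indicator matrix (codes, not
    frequencies). `codebook` fixes the column order; each entry of
    `coded_segments` is the collection of codes assigned to one
    text segment."""
    index = {code: j for j, code in enumerate(codebook)}
    matrix = []
    for codes in coded_segments:
        row = [0] * len(codebook)
        for code in codes:
            row[index[code]] = 1   # presence of the code, not its count
        matrix.append(row)
    return matrix
```

Such a matrix is the input the paper envisages for detecting underlying structures and behavioral patterns, e.g. via sequential or pattern analysis over the rows.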
Láng, G; Kufcsák, O; Szegletes, T; Nemcsók, J
1997-07-01
1. The cholinesterases play an important role in the innervation of organs. The ratio of solubilized to membrane-bound cholinesterase and the quantitative distributions of acetylcholinesterase and butyrylcholinesterase were measured in different segments of the gut of carp (Cyprinus carpio), which are connected with different types of nerve-muscle synapses along the alimentary tract. 2. The inhibition of acetylcholinesterase (EC 3.1.1.7) by the herbicide paraquat and the insecticide metidathion was measured in different parts of the gut of carp. 3. Metidathion and paraquat significantly decreased the activity of acetylcholinesterase in different segments of the alimentary tract of common carp, in a concentration-dependent manner.
High-resolution gene expression data from blastoderm embryos of the scuttle fly Megaselia abdita
Wotton, Karl R; Jiménez-Guri, Eva; Crombach, Anton; Cicin-Sain, Damjan; Jaeger, Johannes
2015-01-01
Gap genes are involved in segment determination during early development in dipteran insects (flies, midges, and mosquitoes). We carried out a systematic quantitative comparative analysis of the gap gene network across different dipteran species. Our work provides mechanistic insights into the evolution of this pattern-forming network. As a central component of our project, we created a high-resolution quantitative spatio-temporal data set of gap and maternal co-ordinate gene expression in the blastoderm embryo of the non-drosophilid scuttle fly, Megaselia abdita. Our data include expression patterns in both wild-type and RNAi-treated embryos. The data—covering 10 genes, 10 time points, and over 1,000 individual embryos—consist of original embryo images, quantified expression profiles, extracted positions of expression boundaries, and integrated expression patterns, plus metadata and intermediate processing steps. These data provide a valuable resource for researchers interested in the comparative study of gene regulatory networks and pattern formation, an essential step towards a more quantitative and mechanistic understanding of developmental evolution. PMID:25977812
NASA Technical Reports Server (NTRS)
Rubin, D. N.; Yazbek, N.; Garcia, M. J.; Stewart, W. J.; Thomas, J. D.
2000-01-01
Harmonic imaging is a new ultrasonographic technique that is designed to improve image quality by exploiting the spontaneous generation of higher frequencies as ultrasound propagates through tissue. We studied 51 difficult-to-image patients with blinded side-by-side cineloop evaluation of endocardial border definition by harmonic versus fundamental imaging. In addition, quantitative intensities from cavity versus wall were compared for harmonic versus fundamental imaging. Harmonic imaging improved left ventricular endocardial border delineation over fundamental imaging (superior: harmonic = 71.1%, fundamental = 18.7%; similar: 10.2%; P <.001). Quantitative analysis of 100 wall/cavity combinations demonstrated brighter wall segments and more strikingly darker cavities during harmonic imaging (cavity intensity on a 0 to 255 scale: fundamental = 15.6 +/- 8.6; harmonic = 6.0 +/- 5.3; P <.0001), which led to enhanced contrast between the wall and cavity (1.89 versus 1.19, P <.0001). Harmonic imaging reduces side-lobe artifacts, resulting in a darker cavity and brighter walls, thereby improving image contrast and endocardial delineation.
Bragman, Felix J.S.; McClelland, Jamie R.; Jacob, Joseph; Hurst, John R.; Hawkes, David J.
2017-01-01
A fully automated, unsupervised lobe segmentation algorithm is presented, based on a probabilistic segmentation of the fissures and the simultaneous construction of a population model of the fissures. A two-class probabilistic segmentation divides the lung into candidate fissure voxels and the surrounding parenchyma. This was combined with anatomical information and a groupwise fissure prior in order to drive non-parametric surface fitting and obtain the final segmentation. The performance of our fissure segmentation was validated on 30 patients from the COPDGene cohort, achieving a high median F1-score of 0.90 and showing general insensitivity to filter parameters. We evaluated our lobe segmentation algorithm on the LOLA11 dataset, which contains 55 cases at varying levels of pathology, and achieved the highest score, 0.884, among the automated algorithms. Our method was further tested quantitatively and qualitatively on 80 patients from the COPDGene study at varying levels of functional impairment. Accurate segmentation of the lobes is shown at various degrees of fissure incompleteness for 96% of all cases. We also show the utility of including a groupwise prior when segmenting the lobes in regions of grossly incomplete fissures. PMID:28436850
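The median F1-score used to validate the fissure segmentation is the standard overlap measure on binary voxel masks (equivalent, for binary masks, to the Dice coefficient); a minimal computation:

```python
import numpy as np

def f1_score(pred, ref):
    """F1 overlap (equivalently the Dice coefficient) between two
    boolean voxel masks: 2*TP / (|pred| + |ref|)."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    tp = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * tp / denom if denom else 1.0
```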
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ripley, S.; Wakefield, T.; Spaulding, S.
1985-05-01
In this investigation, platelet deposition in polytetrafluoroethylene (PTFE) thoracoabdominal grafts (TAGs) was evaluated using two different semi-quantitative techniques. Ten PTFE TAGs, 6 mm in diameter and 30 cm in length, were inserted into 10 mongrel dogs. One, 4, and 6 weeks after graft implantation, the animals were injected with autologous In-111 platelets labelled by a modified Thakur technique. Platelet imaging in grafts was performed 48 hrs after injection. Blood pool was determined with Tc-99m-labelled RBCs (in vivo/in vitro technique). Semi-quantitative analysis was performed by subdividing the imaged graft into three major regions and selecting a reference region from either the native aorta or the common iliac artery. Excess platelet deposition was determined by two methods: 1) the ratio of In-111 counts in the graft regions of interest (ROIs) to the reference region, and 2) the percent In-111 excess using the Tc-99m blood pool subtraction technique (TBPST). Animals were sacrificed 7 weeks after implantation, and radioactivity in the excised grafts was determined using a well counter. A positive correlation was found between the In-111 ratio percent analysis (IRPA) and direct gamma counting (DGC) for all three segments of the prosthetic graft. Correlation coefficients for the thoracic, mid, and abdominal segments were 0.80, 0.73, and 0.48, respectively. There was no correlation between TBPST and DGC. Using the IRPA technique, the thrombogenicity of TAGs can be routinely assessed, and the technique is clinically applicable for patient use. TBPST should probably be limited to the extremities to avoid error due to free Tc-99m counts from kidneys and ureters.
Herrera, Victoria L. M.; Pasion, Khristine A.; Moran, Ann Marie; Ruiz-Opazo, Nelson
2012-01-01
The detection of multiple sex-specific blood pressure (BP) quantitative trait loci (QTLs) in independent total genome analyses of F2 (Dahl S x R)-intercross male and female rat cohorts confirms clinical observations of sex-specific disease cause and response to treatment among hypertensive patients, and mandate the identification of sex-specific hypertension genes/mechanisms. We developed and studied two congenic strains, S.R5A and S.R5B introgressing Dahl R-chromosome 5 segments into Dahl S chromosome 5 region spanning putative BP-f1 and BP-f2 QTLs. Radiotelemetric non-stressed 24-hour BP analysis at four weeks post-high salt diet (8% NaCl) challenge, identified only S.R5B congenic rats with lower SBP (−26.5 mmHg, P = 0.002), DBP (−23.7 mmHg, P = 0.004) and MAP (−25.1 mmHg, P = 0.002) compared with Dahl S female controls at four months of age confirming BP-f1 but not BP-f2 QTL on rat chromosome 5. The S.R5B congenic segment did not affect pulse pressure and relative heart weight indicating that the gene underlying BP-f1 does not influence arterial stiffness and cardiac hypertrophy. The results of our congenic analysis narrowed BP-f1 to chromosome 5 coordinates 134.9–141.5 Mbp setting up the basis for further fine mapping of BP-f1 and eventual identification of the specific gene variant accounting for BP-f1 effect on blood pressure. PMID:22860086
Zerjatke, Thomas; Gak, Igor A; Kirova, Dilyana; Fuhrmann, Markus; Daniel, Katrin; Gonciarz, Magdalena; Müller, Doris; Glauche, Ingmar; Mansfeld, Jörg
2017-05-30
Cell cycle kinetics are crucial to cell fate decisions. Although live imaging has provided extensive insights into this relationship at the single-cell level, the limited number of fluorescent markers that can be used in a single experiment has hindered efforts to link the dynamics of individual proteins responsible for decision making directly to cell cycle progression. Here, we present fluorescently tagged endogenous proliferating cell nuclear antigen (PCNA) as an all-in-one cell cycle reporter that allows simultaneous analysis of cell cycle progression, including the transition into quiescence, and the dynamics of individual fate determinants. We also provide an image analysis pipeline for automated segmentation, tracking, and classification of all cell cycle phases. Combining the all-in-one reporter with labeled endogenous cyclin D1 and p21 as prime examples of cell-cycle-regulated fate determinants, we show how cell cycle and quantitative protein dynamics can be simultaneously extracted to gain insights into G1 phase regulation and responses to perturbations.
Blackboard architecture for medical image interpretation
NASA Astrophysics Data System (ADS)
Davis, Darryl N.; Taylor, Christopher J.
1991-06-01
There is a growing interest in using sophisticated knowledge-based systems for biomedical image interpretation. We present a principled attempt to use artificial intelligence methodologies in interpreting lateral skull x-ray images. Such radiographs are routinely used in cephalometric analysis to provide quantitative measurements useful to clinical orthodontists. Manual and interactive methods of analysis are known to be error prone, and previous attempts to automate this analysis typically fail to capture the expertise and adaptability required to cope with the variability in biological structure and image quality. An integrated model-based system has been developed which makes use of a blackboard architecture and multiple knowledge sources. A model definition interface allows quantitative models of feature appearance and location to be built from examples, as well as more qualitative modelling constructs. Visual task definition and blackboard control modules allow task-specific knowledge sources to act on information available to the blackboard in a hypothesise-and-test reasoning cycle. Further knowledge-based modules include object selection, location hypothesis, intelligent segmentation, and constraint propagation systems. Alternative solutions to given tasks are permitted.
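The hypothesise-and-test control cycle over a blackboard can be sketched in miniature; the class names and the example knowledge source below are illustrative, not the system's actual modules:

```python
class Blackboard:
    """Shared store of hypotheses that knowledge sources read and write."""
    def __init__(self):
        self.hypotheses = {}

class KnowledgeSource:
    """A rule that fires on the blackboard when its precondition holds."""
    def __init__(self, name, precondition, action):
        self.name = name
        self.precondition = precondition
        self.action = action

def run_control_cycle(blackboard, sources, max_cycles=10):
    """Hypothesise-and-test loop: each cycle, fire every applicable
    knowledge source, until none changes the blackboard (or a cap hits)."""
    for _ in range(max_cycles):
        changed = False
        for ks in sources:
            if ks.precondition(blackboard):
                changed = bool(ks.action(blackboard)) or changed
        if not changed:
            break
    return blackboard
```

In a cephalometric system, a location-hypothesis source would post a candidate landmark position in this fashion, which an intelligent-segmentation source could then test and refine on a later cycle.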
Shen, Simon; Syal, Karan; Tao, Nongjian; Wang, Shaopeng
2015-12-01
We present a Single-Cell Motion Characterization System (SiCMoCS) to automatically extract bacterial cell morphological features from microscope images and use those features to automatically classify cell motion for rod-shaped motile bacterial cells. In some imaging-based studies, bacterial cells need to be attached to the surface for time-lapse observation of cellular processes such as cell membrane-protein interactions and membrane elasticity. These studies often generate large volumes of images. Extracting accurate bacterial cell morphology features from these images is critical for quantitative assessment. Using SiCMoCS, we demonstrated simultaneous and automated motion tracking and classification of hundreds of individual cells in an image sequence of several hundred frames. This is a significant improvement over traditional manual and semi-automated approaches to segmenting bacterial cells based on empirical thresholds, and a first attempt to automatically classify bacterial motion types for motile rod-shaped bacterial cells, which enables rapid and quantitative analysis of various types of bacterial motion.
3D Filament Network Segmentation with Multiple Active Contours
NASA Astrophysics Data System (ADS)
Xu, Ting; Vavylonis, Dimitrios; Huang, Xiaolei
2014-03-01
Fluorescence microscopy is frequently used to study two and three dimensional network structures formed by cytoskeletal polymer fibers such as actin filaments and microtubules. While these cytoskeletal structures are often dilute enough to allow imaging of individual filaments or bundles of them, quantitative analysis of these images is challenging. To facilitate quantitative, reproducible and objective analysis of the image data, we developed a semi-automated method to extract actin networks and retrieve their topology in 3D. Our method uses multiple Stretching Open Active Contours (SOACs) that are automatically initialized at image intensity ridges and then evolve along the centerlines of filaments in the network. SOACs can merge, stop at junctions, and reconfigure with others to allow smooth crossing at junctions of filaments. The proposed approach is generally applicable to images of curvilinear networks with low SNR. We demonstrate its potential by extracting the centerlines of synthetic meshwork images, actin networks in 2D TIRF Microscopy images, and 3D actin cable meshworks of live fission yeast cells imaged by spinning disk confocal microscopy.
Measuring single-cell gene expression dynamics in bacteria using fluorescence time-lapse microscopy
Young, Jonathan W; Locke, James C W; Altinok, Alphan; Rosenfeld, Nitzan; Bacarian, Tigran; Swain, Peter S; Mjolsness, Eric; Elowitz, Michael B
2014-01-01
Quantitative single-cell time-lapse microscopy is a powerful method for analyzing gene circuit dynamics and heterogeneous cell behavior. We describe the application of this method to imaging bacteria by using an automated microscopy system. This protocol has been used to analyze sporulation and competence differentiation in Bacillus subtilis, and to quantify gene regulation and its fluctuations in individual Escherichia coli cells. The protocol involves seeding and growing bacteria on small agarose pads and imaging the resulting microcolonies. Images are then reviewed and analyzed using our laboratory's custom MATLAB analysis code, which segments and tracks cells in a frame-to-frame method. This process yields quantitative expression data on cell lineages, which can illustrate dynamic expression profiles and facilitate mathematical models of gene circuits. With fast-growing bacteria, such as E. coli or B. subtilis, image acquisition can be completed in 1 d, with an additional 1–2 d for progressing through the analysis procedure. PMID:22179594
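The frame-to-frame tracking step can be sketched as greedy nearest-neighbour linking of cell centroids between consecutive frames; this is a generic illustration in Python (not the authors' custom MATLAB code), with a hypothetical distance gate:

```python
import numpy as np

def link_frames(prev_centroids, curr_centroids, max_dist=5.0):
    """Greedy nearest-neighbour linking of cell centroids between two
    consecutive frames. Returns {prev_index: curr_index}; `max_dist`
    (in pixels) is an illustrative gate on plausible cell movement."""
    links, taken = {}, set()
    for i, p in enumerate(prev_centroids):
        best, best_d = None, max_dist
        for j, c in enumerate(curr_centroids):
            if j in taken:
                continue
            d = float(np.hypot(p[0] - c[0], p[1] - c[1]))
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            links[i] = best
            taken.add(best)
    return links
```

Chaining these per-frame links across a movie yields the cell lineages from which expression time courses are read out; real pipelines add handling for division events, where one previous cell links to two daughters.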
Yushkevich, Paul A.; Amaral, Robert S. C.; Augustinack, Jean C.; Bender, Andrew R.; Bernstein, Jeffrey D.; Boccardi, Marina; Bocchetta, Martina; Burggren, Alison C.; Carr, Valerie A.; Chakravarty, M. Mallar; Chetelat, Gael; Daugherty, Ana M.; Davachi, Lila; Ding, Song-Lin; Ekstrom, Arne; Geerlings, Mirjam I.; Hassan, Abdul; Huang, Yushan; Iglesias, Eugenio; La Joie, Renaud; Kerchner, Geoffrey A.; LaRocque, Karen F.; Libby, Laura A.; Malykhin, Nikolai; Mueller, Susanne G.; Olsen, Rosanna K.; Palombo, Daniela J.; Parekh, Mansi B; Pluta, John B.; Preston, Alison R.; Pruessner, Jens C.; Ranganath, Charan; Raz, Naftali; Schlichting, Margaret L.; Schoemaker, Dorothee; Singh, Sachi; Stark, Craig E. L.; Suthana, Nanthia; Tompary, Alexa; Turowski, Marta M.; Van Leemput, Koen; Wagner, Anthony D.; Wang, Lei; Winterburn, Julie L.; Wisse, Laura E.M.; Yassa, Michael A.; Zeineh, Michael M.
2015-01-01
OBJECTIVE An increasing number of human in vivo magnetic resonance imaging (MRI) studies have focused on examining the structure and function of the subfields of the hippocampal formation (the dentate gyrus, CA fields 1–3, and the subiculum) and subregions of the parahippocampal gyrus (entorhinal, perirhinal, and parahippocampal cortices). The ability to interpret the results of such studies and to relate them to each other would be improved if a common standard existed for labeling hippocampal subfields and parahippocampal subregions. Currently, research groups label different subsets of structures and use different rules, landmarks, and cues to define their anatomical extents. This paper characterizes, both qualitatively and quantitatively, the variability in the existing manual segmentation protocols for labeling hippocampal and parahippocampal substructures in MRI, with the goal of guiding subsequent work on developing a harmonized substructure segmentation protocol. METHOD MRI scans of a single healthy adult human subject were acquired both at 3 Tesla and 7 Tesla. Representatives from 21 research groups applied their respective manual segmentation protocols to the MRI modalities of their choice. The resulting set of 21 segmentations was analyzed in a common anatomical space to quantify similarity and identify areas of agreement. RESULTS The differences between the 21 protocols include the region within which segmentation is performed, the set of anatomical labels used, and the extents of specific anatomical labels. The greatest overall disagreement among the protocols is at the CA1/subiculum boundary, and disagreement across all structures is greatest in the anterior portion of the hippocampal formation relative to the body and tail. CONCLUSIONS The combined examination of the 21 protocols in the same dataset suggests possible strategies towards developing a harmonized subfield segmentation protocol and facilitates comparison between published studies. 
PMID:25596463
Shi, Peng; Zhong, Jing; Hong, Jinsheng; Huang, Rongfang; Wang, Kaijun; Chen, Yunbin
2016-08-26
Nasopharyngeal carcinoma (NPC) is one of the malignant neoplasms with high incidence in China and south-east Asia. Ki-67 protein is strictly associated with cell proliferation and malignant degree. Cells with higher Ki-67 expression are generally more sensitive to chemotherapy and radiotherapy, so assessment of Ki-67 expression is beneficial to NPC treatment. It is still challenging to automatically analyze immunohistochemical Ki-67-stained nasopharyngeal carcinoma images due to the uneven color distributions across different cell types. To solve this problem, an automated image processing pipeline based on clustering of local correlation features is proposed in this paper. Unlike traditional morphology-based methods, our algorithm segments cells by classifying image pixels on the basis of local pixel correlations from particularly selected color spaces, then characterizes cells with a set of grading criteria for the reference of pathological analysis. Experimental results showed high accuracy and robustness in nucleus segmentation despite image data variance. The quantitative indicators obtained in this study provide reliable evidence for the analysis of Ki-67-stained nasopharyngeal carcinoma microscopic images, which would be helpful in related histopathological research.
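The pipeline's core step, clustering pixels by color-derived feature vectors, can be illustrated with a plain k-means sketch. This is not the authors' method (their features are local pixel correlations in specially selected color spaces); the deterministic initialization and raw-RGB features below are illustrative assumptions only.

```python
import numpy as np

def kmeans_pixels(img, k=2, iters=20):
    """Cluster pixels of an H x W x C image by their color feature vectors
    with plain k-means; returns an H x W label map."""
    X = img.reshape(-1, img.shape[-1]).astype(float)
    # deterministic init: k evenly spaced pixels (good enough for a sketch)
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each pixel to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels.reshape(img.shape[:-1])
```

In a real Ki-67 pipeline the per-pixel feature vector would be replaced by the local-correlation features computed in the chosen color spaces before clustering.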
Automated volumetric segmentation of retinal fluid on optical coherence tomography
Wang, Jie; Zhang, Miao; Pechauer, Alex D.; Liu, Liang; Hwang, Thomas S.; Wilson, David J.; Li, Dengwang; Jia, Yali
2016-01-01
We propose a novel automated volumetric segmentation method to detect and quantify retinal fluid on optical coherence tomography (OCT). The fuzzy level set method was introduced for identifying the boundaries of fluid filled regions on B-scans (x and y-axes) and C-scans (z-axis). The boundaries identified from three types of scans were combined to generate a comprehensive volumetric segmentation of retinal fluid. Then, artefactual fluid regions were removed using morphological characteristics and by identifying vascular shadowing with OCT angiography obtained from the same scan. The accuracy of retinal fluid detection and quantification was evaluated on 10 eyes with diabetic macular edema. Automated segmentation had good agreement with manual segmentation qualitatively and quantitatively. The fluid map can be integrated with OCT angiogram for intuitive clinical evaluation. PMID:27446676
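The fusion of the fluid boundaries found along the three scan directions could, for instance, be a per-voxel vote; the abstract does not spell out its combination rule, so the `min_votes` majority rule below is an assumption, not the published method.

```python
import numpy as np

def combine_directional_masks(mask_x, mask_y, mask_z, min_votes=2):
    """Fuse binary fluid masks obtained along the three scan directions:
    a voxel is kept when at least `min_votes` of the three detections agree."""
    votes = mask_x.astype(int) + mask_y.astype(int) + mask_z.astype(int)
    return votes >= min_votes
```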
NASA Astrophysics Data System (ADS)
Fetita, Catalin; Tarando, Sebastian; Brillet, Pierre-Yves; Grenier, Philippe A.
2016-03-01
Correct segmentation and labeling of lungs in thorax MSCT is a requirement in pulmonary/respiratory disease analysis as a basis for further processing or direct quantitative measures: lung texture classification, respiratory functional simulations, intrapulmonary vascular remodeling evaluation, and detection of pleural effusion or subpleural opacities are only a few of the clinical applications related to this requirement. Whereas lung segmentation appears trivial for normal anatomo-pathological conditions, the presence of disease may complicate this task for fully-automated algorithms. The challenges come either from regional changes of lung texture opacity or from complex anatomic configurations (e.g., a thin septum between the lungs making proper lung separation difficult). These challenges make the use of classic algorithms based on adaptive thresholding, 3-D connected component analysis and shape regularization difficult or even impossible. The objective of this work is to provide a robust segmentation approach for the pulmonary field, with individualized labeling of the lungs, able to overcome the mentioned limitations. The proposed approach relies on 3-D mathematical morphology and exploits the concept of controlled relief flooding (to identify contrasted lung areas) together with patient-specific shape properties for peripheral dense tissue detection. Tested on a database of 40 MSCT scans of pathological lungs, the proposed approach showed correct identification of lung areas with high sensitivity and specificity in locating peripheral dense opacities.
Analysis of live cell images: Methods, tools and opportunities.
Nketia, Thomas A; Sailem, Heba; Rohde, Gustavo; Machiraju, Raghu; Rittscher, Jens
2017-02-15
Advances in optical microscopy, biosensors and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. The potential opportunities provided by recent advances in machine learning, especially deep learning, and computer vision are discussed. This review also includes an overview of the different available software packages and toolkits. Copyright © 2017. Published by Elsevier Inc.
Johnson, Heath E; Haugh, Jason M
2013-12-02
This unit focuses on the use of total internal reflection fluorescence (TIRF) microscopy and image analysis methods to study the dynamics of signal transduction mediated by class I phosphoinositide 3-kinases (PI3Ks) in mammalian cells. The first four protocols cover live-cell imaging experiments, image acquisition parameters, and basic image processing and segmentation. These methods are generally applicable to live-cell TIRF experiments. The remaining protocols outline more advanced image analysis methods, which were developed in our laboratory for the purpose of characterizing the spatiotemporal dynamics of PI3K signaling. These methods may be extended to analyze other cellular processes monitored using fluorescent biosensors. Copyright © 2013 John Wiley & Sons, Inc.
Phase transformation changes in thermocycled nickel-titanium orthodontic wires.
Berzins, David W; Roberts, Howard W
2010-07-01
In the oral environment, orthodontic wires will be subject to thermal fluctuations. The purpose of this study was to investigate the effect of thermocycling on nickel-titanium (NiTi) wire phase transformations. Straight segments from single 27 and 35 degrees C copper NiTi (Ormco), Sentalloy (GAC), and Nitinol Heat Activated (3M Unitek) archwires were sectioned into 5 mm segments (n=20). A control group consisted of five randomly selected non-thermocycled segments. The remaining segments were thermocycled between 5 and 55 degrees C, with five randomly selected segments analyzed with differential scanning calorimetry (DSC; -100 to 150 degrees C at 10 degrees C/min) after 1000, 5000, and 10,000 cycles. Thermal peaks were evaluated with results analyzed via ANOVA (alpha=0.05). Nitinol HA and Sentalloy did not demonstrate qualitative or quantitative phase transformation behavior differences. Significant differences were observed in some of the copper NiTi transformation temperatures, as well as the heating enthalpy of the 27 degrees C copper NiTi wires (p<0.05). Qualitatively, with increased thermocycling the extent of R-phase in the heating peaks decreased in the 35 degrees C copper NiTi, and an austenite to martensite peak shoulder developed during cooling in the 27 degrees C copper NiTi. Repeated temperature fluctuations may contribute to qualitative and quantitative phase transformation changes in some NiTi wires. Copyright 2010 Academy of Dental Materials. All rights reserved.
Patient-specific coronary blood supply territories for quantitative perfusion analysis
Zakkaroff, Constantine; Biglands, John D.; Greenwood, John P.; Plein, Sven; Boyle, Roger D.; Radjenovic, Aleksandra; Magee, Derek R.
2018-01-01
Abstract Myocardial perfusion imaging, coupled with quantitative perfusion analysis, provides an important diagnostic tool for the identification of ischaemic heart disease caused by coronary stenoses. The accurate mapping between coronary anatomy and under-perfused areas of the myocardium is important for diagnosis and treatment. However, in the absence of the actual coronary anatomy during the reporting of perfusion images, areas of ischaemia are allocated to a coronary territory based on a population-derived 17-segment American Heart Association (AHA) model of coronary blood supply. This work presents a solution for the fusion of 2D Magnetic Resonance (MR) myocardial perfusion images and 3D MR angiography data with the aim of improving the detection of ischaemic heart disease. The key contribution of this work is a novel method for the mediated spatiotemporal registration of perfusion and angiography data and a novel method for the calculation of patient-specific coronary supply territories. The registration method uses 4D cardiac MR cine series spanning the complete cardiac cycle in order to overcome the under-constrained nature of non-rigid slice-to-volume perfusion-to-angiography registration. This is achieved by separating out the deformable registration problem and solving it through phase-to-phase registration of the cine series. The use of patient-specific blood supply territories in quantitative perfusion analysis (instead of the population-based model of coronary blood supply) has the potential to increase the accuracy of perfusion analysis. Quantitative perfusion analysis diagnostic accuracy evaluation with patient-specific territories against the AHA model demonstrates the value of the mediated spatiotemporal registration in the context of ischaemic heart disease diagnosis. PMID:29392098
NASA Astrophysics Data System (ADS)
Qiu, Yuchen; Tan, Maxine; McMeekin, Scott; Thai, Theresa; Moore, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin
2015-03-01
The purpose of this study is to identify and apply quantitative image biomarkers for early prediction of the tumor response to chemotherapy among ovarian cancer patients who participated in clinical trials testing new drugs. In the experiment, we retrospectively selected 30 cases from the patients who participated in Phase I clinical trials of new drugs or drug agents for ovarian cancer treatment. Each case is composed of two sets of CT images acquired pre- and post-treatment (4-6 weeks after starting treatment). A computer-aided detection (CAD) scheme was developed to extract and analyze the quantitative image features of the metastatic tumors previously tracked by the radiologists using the standard Response Evaluation Criteria in Solid Tumors (RECIST) guideline. The CAD scheme first segmented 3-D tumor volumes from the background using a hybrid tumor segmentation scheme. Then, for each segmented tumor, CAD computed three quantitative image features: the change of tumor volume, tumor CT number (density), and density variance. The feature changes were calculated between the matched tumors tracked on the CT images acquired pre- and post-treatment. Finally, CAD predicted the patient's 6-month progression-free survival (PFS) using a decision-tree based classifier. The performance of the CAD scheme was compared with the RECIST category. The results show that the CAD scheme achieved a prediction accuracy of 76.7% (23/30 cases) with a Kappa coefficient of 0.493, significantly higher than the performance of RECIST prediction, which had a prediction accuracy of 60% (17/30) and a Kappa coefficient of 0.062. This study demonstrated the feasibility of analyzing quantitative image features to improve the accuracy of early prediction of the tumor response to new drugs or therapeutic methods for ovarian cancer patients.
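The Kappa coefficients reported above measure agreement between predicted and actual 6-month PFS beyond what chance alone would produce. A minimal implementation of Cohen's kappa, kappa = (p_o - p_e) / (1 - p_e):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    labels = np.union1d(y_true, y_pred)
    po = np.mean(y_true == y_pred)  # observed agreement
    # chance agreement: product of marginal frequencies, summed over labels
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in labels)
    return (po - pe) / (1 - pe)
```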
Hattingen, Elke; Jurcoane, Alina; Daneshvar, Keivan; Pilatus, Ulrich; Mittelbronn, Michel; Steinbach, Joachim P.; Bähr, Oliver
2013-01-01
Background Anti-angiogenic treatment in recurrent glioblastoma patients suppresses contrast enhancement and reduces vasogenic edema while non-enhancing tumor progression is common. Thus, the importance of T2-weighted imaging is increasing. We therefore quantified T2 relaxation times, which are the basis for the image contrast on T2-weighted images. Methods Conventional and quantitative MRI procedures were performed on 18 patients with recurrent glioblastoma before treatment with bevacizumab and every 8 weeks thereafter until further tumor progression. We segmented the tumor on conventional MRI into 3 subvolumes: enhancing tumor, non-enhancing tumor, and edema. Using coregistered quantitative maps, we followed changes in T2 relaxation time in each subvolume. Moreover, we generated differential T2 maps by a voxelwise subtraction using the first T2 map under bevacizumab as reference. Results Visually segmented areas of tumor and edema did not differ in T2 relaxation times. Non-enhancing tumor volume did not decrease after commencement of bevacizumab treatment but strikingly increased at progression. Differential T2 maps clearly showed non-enhancing tumor progression in previously normal brain. T2 relaxation times decreased under bevacizumab without re-increasing at tumor progression. A decrease of <26 ms in the enhancing tumor following exposure to bevacizumab was associated with longer overall survival. Conclusions Combining quantitative MRI and tumor segmentation improves monitoring of glioblastoma patients under bevacizumab. The degree of change in T2 relaxation time under bevacizumab may be an early response parameter predictive of overall survival. The sustained decrease in T2 relaxation times toward values of healthy tissue masks progressive tumor on conventional T2-weighted images. Therefore, quantitative T2 relaxation times may detect non-enhancing progression better than conventional T2-weighted imaging. PMID:23925453
Cha, Jungwon; Farhangi, Mohammad Mehdi; Dunlap, Neal; Amini, Amir A
2018-01-01
We have developed a robust tool for performing volumetric and temporal analysis of nodules from respiratory gated four-dimensional (4D) CT. The method could prove useful in IMRT of lung cancer. We modified the conventional graph-cuts method by adding an adaptive shape prior as well as motion information within a signed distance function representation to permit more accurate and automated segmentation and tracking of lung nodules in 4D CT data. Active shape models (ASM) with signed distance function were used to capture the shape prior information, preventing unwanted surrounding tissues from becoming part of the segmented object. The optical flow method was used to estimate the local motion and to extend three-dimensional (3D) segmentation to 4D by warping a prior shape model through time. The algorithm has been applied to segmentation of well-circumscribed, vascularized, and juxtapleural lung nodules from respiratory gated CT data. In all cases, 4D segmentation and tracking for five phases of high-resolution CT data took approximately 10 min on a PC workstation with AMD Phenom II and 32 GB of memory. The method was trained on 500 breath-held 3D CT datasets from the LIDC database and was tested on 17 4D lung nodule CT datasets consisting of 85 volumetric frames. The validation tests resulted in an average Dice Similarity Coefficient (DSC) = 0.68 for all test data. An important by-product of the method is quantitative volume measurement from 4D CT from end-inspiration to end-expiration, which will also have important diagnostic value. The algorithm performs robust segmentation of lung nodules from 4D CT data. Signed distance ASM provides the shape prior information, which is adaptively refined within the iterative graph-cuts framework to best fit the input data, preventing unwanted surrounding tissue from merging with the segmented object. © 2017 American Association of Physicists in Medicine.
In vivo validation of cardiac output assessment in non-standard 3D echocardiographic images
NASA Astrophysics Data System (ADS)
Nillesen, M. M.; Lopata, R. G. P.; de Boode, W. P.; Gerrits, I. H.; Huisman, H. J.; Thijssen, J. M.; Kapusta, L.; de Korte, C. L.
2009-04-01
Automatic segmentation of the endocardial surface in three-dimensional (3D) echocardiographic images is an important tool to assess left ventricular (LV) geometry and cardiac output (CO). The presence of speckle noise and the nonisotropic characteristics of the myocardium impose strong demands on the segmentation algorithm. In the analysis of normal heart geometries of standardized (apical) views, it is advantageous to incorporate a priori knowledge about the shape and appearance of the heart. In contrast, when analyzing abnormal heart geometries, for example in children with congenital malformations, this a priori knowledge about the shape and anatomy of the LV might induce erroneous segmentation results. This study describes a fully automated segmentation method for the analysis of non-standard echocardiographic images, without making strong assumptions on the shape and appearance of the heart. The method was validated in vivo in a piglet model. Real-time 3D echocardiographic image sequences of five piglets were acquired in radiofrequency (rf) format. These ECG-gated full volume images were acquired intra-operatively in a non-standard view. Cardiac blood flow was measured simultaneously by an ultrasound transit time flow probe positioned around the common pulmonary artery. Three-dimensional adaptive filtering using the characteristics of speckle was performed on the demodulated rf data to reduce the influence of speckle noise and to optimize the distinction between blood and myocardium. A gradient-based 3D deformable simplex mesh was then used to segment the endocardial surface. A gradient and a speed force were included as external forces of the model. To balance data fitting and mesh regularity, one fixed set of weighting parameters of internal, gradient and speed forces was used for all data sets. End-diastolic and end-systolic volumes were computed from the segmented endocardial surface.
The cardiac output derived from this automatic segmentation was validated quantitatively by comparing it with the CO values measured from the volume flow in the pulmonary artery. Relative bias varied between 0 and -17%, where the nominal accuracy of the flow meter is in the order of 10%. Assuming the CO measurements from the flow probe as a gold standard, excellent correlation (r = 0.99) was observed with the CO estimates obtained from image segmentation.
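The validation statistics quoted above (relative bias of the segmentation-derived CO against the flow-probe CO, and their correlation) can be computed as below; the study's exact averaging convention is not stated, so the pooled per-measurement mean is an assumption.

```python
import numpy as np

def relative_bias(estimate, reference):
    """Mean signed bias of the estimates relative to the reference, in percent."""
    estimate = np.asarray(estimate, float)
    reference = np.asarray(reference, float)
    return 100.0 * np.mean((estimate - reference) / reference)

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement series."""
    return np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]
```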
Multi-fractal texture features for brain tumor and edema segmentation
NASA Astrophysics Data System (ADS)
Reza, S.; Iftekharuddin, K. M.
2014-03-01
In this work, we propose a fully automatic brain tumor and edema segmentation technique for brain magnetic resonance (MR) images. Different brain tissues are characterized using novel texture features such as piecewise triangular prism surface area (PTPSA), multi-fractional Brownian motion (mBm) and Gabor-like textons, along with regular intensity and intensity difference features. A classical Random Forest (RF) classifier is used to formulate the segmentation task as classification of these features in multi-modal MRIs. The segmentation performance is compared with other state-of-the-art works using a publicly available dataset known as Brain Tumor Segmentation (BRATS) 2012 [1]. Quantitative evaluation is done using the online evaluation tool from the Kitware/MIDAS website [2]. The results show that our segmentation performance is more consistent and, on average, outperforms other state-of-the-art works in both the training and challenge cases of the BRATS competition.
Human body segmentation via data-driven graph cut.
Li, Shifeng; Lu, Huchuan; Shao, Xingqing
2014-11-01
Human body segmentation is a challenging and important problem in computer vision. Existing methods usually entail a time-consuming training phase for prior knowledge learning, with complex shape matching for body segmentation. In this paper, we propose a data-driven method that integrates top-down body pose information and bottom-up low-level visual cues for segmenting humans in static images within the graph cut framework. The key idea of our approach is to first exploit human kinematics to search for body part candidates via dynamic programming, providing high-level evidence. Then, body part classifiers are used to obtain bottom-up cues of the human body distribution as low-level evidence. All the evidence collected from the top-down and bottom-up procedures is integrated in a graph cut framework for human body segmentation. Qualitative and quantitative experimental results demonstrate the merits of the proposed method in segmenting human bodies with arbitrary poses from cluttered backgrounds.
Bergeest, Jan-Philip; Rohr, Karl
2012-10-01
In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches. Copyright © 2012 Elsevier B.V. All rights reserved.
An Algorithm to Automate Yeast Segmentation and Tracking
Doncic, Andreas; Eser, Umut; Atay, Oguzhan; Skotheim, Jan M.
2013-01-01
Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation. PMID:23520484
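The threshold-set idea described above, segmenting with a whole range of thresholds and keeping the pixels that enough of the results agree on, can be sketched as follows. This shows only the voting principle; the published algorithm additionally uses yeast-specific priors (immobility, growth rate) and tracking, which are omitted here.

```python
import numpy as np

def robust_threshold_segment(img, thresholds, min_agreement=0.5):
    """Segment with a set of thresholds instead of one tuned value:
    each threshold votes per pixel, and pixels are kept when at least
    `min_agreement` of the thresholded results agree."""
    votes = np.zeros(img.shape, dtype=float)
    for t in thresholds:
        votes += (img > t)
    return votes / len(thresholds) >= min_agreement
```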
Dynamic deformable models for 3D MRI heart segmentation
NASA Astrophysics Data System (ADS)
Zhukov, Leonid; Bao, Zhaosheng; Gusikov, Igor; Wood, John; Breen, David E.
2002-05-01
Automated or semiautomated segmentation of medical images decreases interstudy variation, observer bias, and postprocessing time, as well as providing clinically relevant quantitative data. In this paper we present a new dynamic deformable modeling approach to 3D segmentation. It utilizes recently developed dynamic remeshing techniques and curvature estimation methods to produce high-quality meshes. The approach has been implemented in an interactive environment that allows a user to specify an initial model and identify key features in the data. These features act as hard constraints that the model must not pass through as it deforms. We have employed the method to perform semi-automatic segmentation of heart structures from cine MRI data.
Video-based noncooperative iris image segmentation.
Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig
2011-02-01
In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
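The "direct least squares fitting of ellipses" used for the pupil and limbic boundaries refers to an algebraic conic fit. The sketch below uses the plain unconstrained least-squares variant (the published direct method adds an ellipse-specific constraint to guarantee an ellipse), with the boundary center recovered from the fitted conic:

```python
import numpy as np

def fit_conic(x, y):
    """Algebraic least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    to boundary points; returns (a, b, c, d, e)."""
    M = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
    return coef

def conic_center(coef):
    """Center of the conic, where both partial derivatives vanish."""
    a, b, c, d, e = coef
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
```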
Quantitative test for concave aspheric surfaces using a Babinet compensator.
Saxena, A K
1979-08-15
A quantitative test for the evaluation of surface figures of concave aspheric surfaces using a Babinet compensator is reported. A theoretical estimate of the sensitivity is 0.002λ for a minimum detectable phase change of 2π × 10⁻³ rad over a segment length of 1.0 cm.
Ehlers, Justis P.; Wang, Kevin; Vasanji, Amit; Hu, Ming; Srivastava, Sunil K.
2017-01-01
Summary Ultra-widefield fluorescein angiography (UWFA) is an emerging imaging modality used to characterize pathology in the retinal vasculature, such as microaneurysms (MA) and vascular leakage. Despite its potential value for diagnosis and disease surveillance, objective quantitative assessment of retinal pathology by UWFA is currently limited because it requires laborious manual segmentation by trained human graders. In this report, we describe a novel fully automated software platform, which segments MAs and leakage areas in native and dewarped UWFA images with retinal vascular disease. Comparison of the algorithm to gold standards generated by human graders demonstrated significant, strong correlations for MA and leakage areas (ICC = 0.78-0.87 and ICC = 0.70-0.86, respectively; p = 2.1 × 10⁻⁷ to 3.5 × 10⁻¹⁰ and p = 7.8 × 10⁻⁶ to 1.3 × 10⁻⁹, respectively). These results suggest the algorithm performs similarly to human graders in MA and leakage segmentation and may be of significant utility in clinical and research settings. PMID:28432113
A novel method for unsteady flow field segmentation based on stochastic similarity of direction
NASA Astrophysics Data System (ADS)
Omata, Noriyasu; Shirayama, Susumu
2018-04-01
Recent developments in fluid dynamics research have opened up the possibility for the detailed quantitative understanding of unsteady flow fields. However, the visualization techniques currently in use generally provide only qualitative insights. A method for dividing the flow field into physically relevant regions of interest can help researchers quantify unsteady fluid behaviors. Most methods at present compare the trajectories of virtual Lagrangian particles. The time-invariant features of an unsteady flow are also frequently of interest, but the Lagrangian specification only reveals time-variant features. To address these challenges, we propose a novel method for the time-invariant spatial segmentation of an unsteady flow field. This segmentation method does not require Lagrangian particle tracking but instead quantitatively compares the stochastic models of the direction of the flow at each observed point. The proposed method is validated with several clustering tests for 3D flows past a sphere. Results show that the proposed method reveals the time-invariant, physically relevant structures of an unsteady flow.
Subcortical structure segmentation using probabilistic atlas priors
NASA Astrophysics Data System (ADS)
Gouttard, Sylvain; Styner, Martin; Joshi, Sarang; Smith, Rachel G.; Cody Hazlett, Heather; Gerig, Guido
2007-03-01
The segmentation of the subcortical structures of the brain is required for many forms of quantitative neuroanatomic analysis. The volumetric and shape parameters of structures such as the lateral ventricles, putamen, caudate, hippocampus, pallidus and amygdala are employed to characterize a disease or its evolution. This paper presents a fully automatic segmentation of these structures via non-rigid registration of a probabilistic atlas prior, alongside a comprehensive validation. Our approach is based on an unbiased diffeomorphic atlas with probabilistic spatial priors built from a training set of MR images with corresponding manual segmentations. The atlas building computes an average image along with transformation fields mapping each training case to the average image. These transformation fields are applied to the manually segmented structures of each case in order to obtain a probabilistic map on the atlas. When applying the atlas for automatic structural segmentation, an MR image is first intensity-inhomogeneity corrected, skull stripped and intensity calibrated to the atlas. Then the atlas image is registered to the image using an affine followed by a deformable registration matching the gray level intensity. Finally, the registration transformation is applied to the probabilistic maps of each structure, which are then thresholded at 0.5 probability. Using manual segmentations for comparison, measures of volumetric differences show high correlation with our results. Furthermore, the Dice coefficient, which quantifies the volumetric overlap, is higher than 62% for all structures and is close to 80% for the basal ganglia. The intraclass correlation coefficient computed on these same datasets shows a good inter-method correlation of the volumetric measurements. Using a dataset of a single patient scanned 10 times on 5 different scanners, reliability is shown with a coefficient of variance of less than 2 percent over the whole dataset.
Overall, these validation and reliability studies show that our method accurately and reliably segments almost all structures. Only the hippocampus and amygdala segmentations exhibit relatively low correlation with the manual segmentation in at least one of the validation studies, although they still show appropriate Dice overlap coefficients.
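The final atlas-based labeling step, thresholding each warped probabilistic map at 0.5, and the Dice overlap used for validation are simple to state in code:

```python
import numpy as np

def atlas_label(prob_map, threshold=0.5):
    """Final label: voxels whose warped probabilistic prior exceeds the threshold."""
    return prob_map > threshold

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```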
NASA Astrophysics Data System (ADS)
Nakano, M.; Kumagai, H.; Toda, S.; Ando, R.; Yamashina, T.; Inoue, H.; Sunarjo
2010-04-01
On 2007 March 6, an earthquake doublet occurred along the Sumatran fault, Indonesia. The epicentres were located near Padang Panjang, central Sumatra, Indonesia. The first earthquake, with a moment magnitude (Mw) of 6.4, occurred at 03:49 UTC and was followed two hours later (05:49 UTC) by an earthquake of similar size (Mw = 6.3). We studied the earthquake doublet by a waveform inversion analysis using data from a broadband seismograph network in Indonesia (JISNET). The focal mechanisms of the two earthquakes indicate almost identical right-lateral strike-slip faults, consistent with the geometry of the Sumatran fault. Both earthquakes nucleated below the northern end of Lake Singkarak, which is in a pull-apart basin between the Sumani and Sianok segments of the Sumatran fault system, but the earthquakes ruptured different fault segments. The first earthquake occurred along the southern Sumani segment and its rupture propagated southeastward, whereas the second one ruptured the northern Sianok segment northwestward. Along these fault segments, earthquake doublets, in which the two adjacent fault segments rupture one after the other, have occurred repeatedly. We investigated the state of stress at a segment boundary of a fault system based on the Coulomb stress changes. The stress on faults increases during interseismic periods and is released by faulting. At a segment boundary, on the other hand, the stress increases both interseismically and coseismically, and may not be released unless new fractures are created. Accordingly, ruptures may tend to initiate at a pull-apart basin. When an earthquake occurs on one of the fault segments, the stress increases coseismically around the basin. The stress changes caused by that earthquake may trigger a rupture on the other segment after a short time interval. 
We also examined the mechanism of the delayed rupture based on a theory of a fluid-saturated poroelastic medium and on dynamic rupture simulations incorporating a rheological velocity-hardening effect. These models can qualitatively explain the observed delay, but further work, especially on the rheological effect, is required for a quantitative explanation.
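The Coulomb stress analysis above rests on the standard Coulomb failure stress change, ΔCFS = Δτ + μ′Δσn. A minimal sketch with illustrative numbers; the effective friction coefficient and stress values are assumptions, not the study's:

```python
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Change in Coulomb failure stress on a receiver fault.

    d_shear_mpa  : change in shear stress in the slip direction (MPa)
    d_normal_mpa : change in normal stress, positive = unclamping (MPa)
    mu_eff       : effective friction coefficient (illustrative value)
    """
    return d_shear_mpa + mu_eff * d_normal_mpa

# A positive ΔCFS brings the receiver fault closer to failure.
print(round(coulomb_stress_change(0.10, 0.05), 3))  # 0.12 (MPa)
```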
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Kuriakose, Jean W.; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Guo, Yanhui; Patel, Smita; Kazerooni, Ella A.
2012-03-01
Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or lung diseases, or crossing other vessels. In this study, we developed a new vessel refinement method utilizing the curved planar reformation (CPR) technique combined with an optimal path finding method (MHES-CROP). The MHES-segmented vessels, straightened in the CPR volume, were refined using adaptive gray-level thresholding, where the local threshold was obtained from a least-squares estimate of a spline curve fitted to the gray levels of the vessel along the straightened volume. An optimal path finding method based on Dijkstra's algorithm was finally used to trace the correct path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in CTPA interpretation and used as the reference standard. The results show that, for the 32 test VOIs, the average percentage volume error relative to the reference standard improved from 32.9+/-10.2% with the MHES method to 9.9+/-7.9% with the MHES-CROP method. The accuracy of vessel segmentation improved significantly (p<0.05). The intraclass correlation coefficient (ICC) of the segmented vessel volume between the automated segmentation and the reference standard improved from 0.919 to 0.988.
Quantitative comparison of the MHES method and the MHES-CROP method with the reference standard was also evaluated by the Bland-Altman plot. This preliminary study indicates that the MHES-CROP method has the potential to improve PE detection.
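The path-tracing step above is based on Dijkstra's algorithm. A generic sketch on a toy graph follows; the node names and edge costs are hypothetical, not the authors' CPR cost function:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path in a weighted graph given as {node: [(neighbor, cost), ...]}."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # reconstruct the optimal path by walking predecessors back to the start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Toy "vessel" graph: nodes are candidate centerline points, weights are costs.
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 1), ("d", 5)], "c": [("d", 1)]}
print(dijkstra(g, "a", "d"))  # (['a', 'b', 'c', 'd'], 3.0)
```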
Phenotypic characterization of glioblastoma identified through shape descriptors
NASA Astrophysics Data System (ADS)
Chaddad, Ahmad; Desrosiers, Christian; Toews, Matthew
2016-03-01
This paper proposes quantitatively describing the shape of glioblastoma (GBM) tissue phenotypes as a set of shape features derived from segmentations, for the purposes of discriminating between GBM phenotypes and monitoring tumor progression. GBM patients were identified from the Cancer Genome Atlas, and quantitative MR imaging data were obtained from the Cancer Imaging Archive. Three GBM tissue phenotypes are considered: necrosis, active tumor, and edema/invasion. Volumetric tissue segmentations are obtained from registered T1-weighted (T1-WI) post-contrast and fluid-attenuated inversion recovery (FLAIR) MRI modalities. Shape features are computed from the respective tissue phenotype segmentations, and a Kruskal-Wallis test was employed to select features capable of classification at a significance level of p < 0.05. Several classifier models are employed to distinguish phenotypes, with leave-one-out cross-validation. Eight features were found statistically significant for classifying GBM phenotypes (p < 0.05); orientation was uninformative. Quantitative evaluations show that the SVM achieved the highest classification accuracy of 87.50%, with sensitivity of 94.59% and specificity of 92.77%. In summary, the shape descriptors proposed in this work show high performance in predicting GBM tissue phenotypes. They are thus closely linked to morphological characteristics of GBM phenotypes and could potentially be used in a computer-assisted labeling system.
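The Kruskal-Wallis feature selection step can be sketched with SciPy; the feature values below are invented for illustration, not the paper's data:

```python
from scipy.stats import kruskal

# Hypothetical values of one shape feature for three GBM tissue phenotypes.
necrosis = [0.31, 0.28, 0.35, 0.30, 0.33]
active   = [0.52, 0.49, 0.55, 0.51, 0.48]
edema    = [0.71, 0.69, 0.74, 0.70, 0.72]

h_stat, p_value = kruskal(necrosis, active, edema)
print(p_value < 0.05)  # True: the feature would be kept at this significance level
```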
Lee, Hong Kai; Lee, Chun Kiat; Loh, Tze Ping; Tang, Julian Wei-Tze; Chiu, Lily; Tambyah, Paul A; Sethi, Sunil K; Koay, Evelyn Siew-Chuan
2010-09-01
With the relative global lack of immunity to the pandemic influenza A/H1N1/2009 virus that emerged in April 2009 as well as the sustained susceptibility to infection, rapid and accurate diagnostic assays are essential to detect this novel influenza A variant. Among the molecular diagnostic methods that have been developed to date, most are in tandem monoplex assays targeting either different regions of a single viral gene segment or different viral gene segments. We describe a dual-gene (duplex) quantitative real-time RT-PCR method selectively targeting pandemic influenza A/H1N1/2009. The assay design includes a primer-probe set specific to only the hemagglutinin (HA) gene of this novel influenza A variant and a second set capable of detecting the nucleoprotein (NP) gene of all swine-origin influenza A virus. In silico analysis of the specific HA oligonucleotide sequence used in the assay showed that it targeted only the swine-origin pandemic strain; there was also no cross-reactivity against a wide spectrum of noninfluenza respiratory viruses. The assay has a diagnostic sensitivity and specificity of 97.7% and 100%, respectively, a lower detection limit of 50 viral gene copies/PCR, and can be adapted to either a qualitative or quantitative mode. It was first applied to 3512 patients with influenza-like illnesses at a tertiary hospital in Singapore, during the containment phase of the pandemic (May to July 2009).
Nielsen, Flemming K; Egund, Niels; Peters, David; Jurik, Anne Grethe
2014-12-20
Longitudinal assessment of bone marrow lesions (BMLs) in knee osteoarthritis (KOA) by MRI is usually performed using semi-quantitative grading methods. Quantitative segmentation methods may be more sensitive to change over time. The purpose of this study was to evaluate and compare the validity and sensitivity to change of two quantitative MR segmentation methods for measuring BMLs in KOA, one computer-assisted (CAS) and one manual (MS). Twenty-two patients with KOA confined to the medial femoro-tibial compartment obtained MRI at baseline and follow-up (median 334 days in between). STIR, T1, and fat-saturated T1 post-contrast sequences were obtained using a 1.5 T system. The 44 sagittal STIR sequences were assessed independently by two readers for quantification of BML. The signal intensities (SIs) of the normal bone marrow in the lateral femoral condyles and tibial plateaus were used as threshold values. The volume of bone marrow with SIs exceeding the threshold values (BML) was measured in the medial femoral condyle and tibial plateau and related to the total volume of the condyles/plateaus. The 95% limits of agreement at baseline were used to determine the sensitivity to change. The mean threshold values of CAS and MS were almost identical, but the absolute and relative BML volumes differed, being 1319 mm3/10% and 1828 mm3/15% in the femur and 941 mm3/7% and 2097 mm3/18% in the tibia using CAS and MS, respectively. The BML volumes obtained by CAS and MS were significantly correlated, but the tissue changes measured were different: CAS measured only the volume of voxels exceeding the threshold values, whereas MS also included intervening voxels with normal SI. The 95% limits of agreement were narrower for CAS than for MS; a significant change of relative BML by CAS was outside the limits of -2.0% to 4.7%, whereas the limits for MS were -6.9% to 8.2%. The BML changed significantly in 13 knees using CAS and in 10 knees using MS.
CAS was a reliable method for measuring BML and more sensitive to detect changes over time than MS. The BML volumes measured by the two methods differed but were significantly correlated.
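The CAS thresholding step, counting voxels whose signal intensity exceeds the normal-marrow threshold, can be sketched as follows; the synthetic volume, threshold, and voxel size are assumptions, not the study's values:

```python
import numpy as np

def bml_volume(signal, threshold, voxel_volume_mm3):
    """Absolute (mm^3) and relative (%) BML volume: voxels whose signal
    intensity exceeds the normal-marrow threshold, as in the CAS approach."""
    mask = signal > threshold
    absolute = mask.sum() * voxel_volume_mm3
    relative = 100.0 * mask.mean()
    return absolute, relative

rng = np.random.default_rng(0)
region = rng.normal(100.0, 10.0, size=(20, 20, 20))  # synthetic marrow SIs
region[:4] += 60.0                                   # bright "lesion" slab (20% of voxels)
absolute, relative = bml_volume(region, threshold=130.0, voxel_volume_mm3=0.5)
print(round(relative, 1))  # close to 20%, the fraction of lesion voxels
```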
NASA Astrophysics Data System (ADS)
Uchidate, M.
2018-09-01
In this study, with the aim of establishing systematic knowledge of the impact of summit extraction methods and stochastic model selection in rough contact analysis, the contact area ratio (A_r/A_a) obtained by statistical contact models with different summit extraction methods was compared with a direct simulation using the boundary element method (BEM). Fifty areal topography datasets with different autocorrelation functions in terms of the power index and correlation length were used for the investigation. The non-causal 2D auto-regressive model, which can generate datasets with specified parameters, was employed in this research. Three summit extraction methods were examined: Nayak's theory, 8-point analysis, and watershed segmentation. With regard to the stochastic model, Bhushan's model and the BGT (Bush-Gibson-Thomas) model were applied. The values of A_r/A_a from the stochastic models tended to be smaller than those from BEM. The discrepancy between Bhushan's model with the 8-point analysis and BEM was slightly smaller than with Nayak's theory. The results with watershed segmentation were similar to those with the 8-point analysis. The impact of Wolf pruning on the discrepancy between the stochastic analysis and BEM was not very clear. In the case of the BGT model, which employs surface gradients, good quantitative agreement with BEM was obtained when Nayak's bandwidth parameter was large.
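8-point analysis identifies summits as grid points higher than all eight neighbours. A minimal NumPy sketch on a toy height map (not the paper's datasets):

```python
import numpy as np

def summits_8point(z):
    """Mark interior grid points strictly higher than all 8 neighbours."""
    out = np.zeros(z.shape, dtype=bool)
    core = z[1:-1, 1:-1]
    higher = np.ones(core.shape, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            # neighbour heights shifted by (di, dj) relative to the core
            shifted = z[1 + di: z.shape[0] - 1 + di, 1 + dj: z.shape[1] - 1 + dj]
            higher &= core > shifted
    out[1:-1, 1:-1] = higher
    return out

z = np.array([[0, 0, 0, 0],
              [0, 5, 1, 0],
              [0, 1, 2, 0],
              [0, 0, 0, 0]], dtype=float)
print(np.argwhere(summits_8point(z)))  # [[1 1]] (one summit, at the peak)
```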
Qazi, Arish A; Pekar, Vladimir; Kim, John; Xie, Jason; Breen, Stephen L; Jaffray, David A
2011-11-01
Intensity modulated radiation therapy (IMRT) allows greater control over dose distribution, which leads to a decrease in radiation-related toxicity. IMRT, however, requires precise and accurate delineation of the organs at risk and target volumes. Manual delineation is tedious and suffers from both interobserver and intraobserver variability. State-of-the-art auto-segmentation methods are either atlas-based, model-based, or hybrid; however, robust fully automated segmentation is often difficult due to the insufficient discriminative information provided by standard medical imaging modalities for certain tissue types. In this paper, the authors present a fully automated hybrid approach which combines deformable registration with a model-based approach to accurately segment normal and target tissues from head and neck CT images. The segmentation process starts by using an average atlas to reliably identify salient landmarks in the patient image. The relationship between these landmarks and the reference dataset serves to guide a deformable registration algorithm, which allows for a close initialization of a set of organ-specific deformable models in the patient image, ensuring their robust adaptation to the boundaries of the structures. Finally, the models are automatically fine-tuned by a boundary refinement approach which models the uncertainty in model adaptation using a probabilistic mask. This uncertainty is subsequently resolved by voxel classification based on local low-level organ-specific features. To quantitatively evaluate the method, they auto-segment several organs at risk and target tissues from 10 head and neck CT images and compare the segmentations to the manual delineations outlined by an expert.
The evaluation is carried out by estimating two common quantitative measures on 10 datasets: the volume overlap fraction, or Dice similarity coefficient (DSC), and a geometric metric, the median symmetric Hausdorff distance (HD), evaluated slice-wise. They achieve an average overlap of 93% for the mandible, 91% for the brainstem, 83% for the parotids, 83% for the submandibular glands, and 74% for the lymph node levels. The automated segmentation framework is able to segment anatomy in the head and neck region with high accuracy within a clinically acceptable segmentation time.
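The symmetric Hausdorff distance used above can be sketched for small point sets with NumPy; the contours below are hypothetical, and the authors additionally take the slice-wise median:

```python
import numpy as np

def symmetric_hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets of shape (n, 2) and (m, 2)."""
    # pairwise Euclidean distances between every point of a and every point of b
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

contour_auto = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])  # hypothetical slice contour
contour_ref  = np.array([[0.0, 0.0], [1.0, 0.0]])              # hypothetical reference
print(symmetric_hausdorff(contour_auto, contour_ref))  # 1.0
```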
Automatic segmentation of lumbar vertebrae in CT images
NASA Astrophysics Data System (ADS)
Kulkarni, Amruta; Raina, Akshita; Sharifi Sarabi, Mona; Ahn, Christine S.; Babayan, Diana; Gaonkar, Bilwaj; Macyszyn, Luke; Raghavendra, Cauligi
2017-03-01
Lower back pain is one of the most prevalent disorders in the developed and developing world. However, its etiology is poorly understood and treatment is often determined subjectively. In order to quantitatively study the emergence and evolution of back pain, it is necessary to develop consistently measurable markers for pathology. Imaging-based measures offer one solution to this problem. The development of imaging-based quantitative biomarkers for the lower back necessitates automated techniques to acquire this data. While the problem of segmenting lumbar vertebrae has been addressed repeatedly in the literature, the associated problem of computing relevant biomarkers on the basis of the segmentation has not been addressed thoroughly. In this paper, we propose a Random Forest based approach that learns to segment vertebral bodies in CT images, followed by a biomarker evaluation framework that extracts vertebral heights and widths from the segmentations obtained. Our dataset consists of 15 sagittal CT scans obtained from General Electric Healthcare. Our approach is divided into three parts: the first stage is image pre-processing, which corrects for variations in illumination across the images and prepares the foreground and background objects; the next stage is machine learning using Random Forests, which classifies interest-point vectors as foreground or background; and the last step is image post-processing, which is crucial for refining the results of the classifier. The Dice coefficient was used as a statistical validation metric to evaluate the performance of our segmentations, with an average value of 0.725 for our dataset.
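The Random Forest classification stage can be sketched with scikit-learn; the three-element feature vectors and their distributions are invented for illustration, not the paper's features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-voxel feature vectors: [intensity, local mean, local std].
n = 200
fg = np.column_stack([rng.normal(200, 10, n), rng.normal(195, 10, n), rng.normal(8, 2, n)])
bg = np.column_stack([rng.normal(60, 10, n), rng.normal(62, 10, n), rng.normal(5, 2, n)])
X = np.vstack([fg, bg])
y = np.r_[np.ones(n), np.zeros(n)]  # 1 = vertebral body, 0 = background

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
probe = np.array([[198.0, 193.0, 7.0], [58.0, 61.0, 4.0]])
print(clf.predict(probe))  # [1. 0.]
```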
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, M; Woo, B; Kim, J
Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Imaging Archive. Two semi-automatic segmentation tools with different algorithms (deformable model and grow-cut method) were used to segment contrast enhancement, necrosis, and edema regions by two independent observers. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate the reproducibility. Results: Inter-observer correlations and coefficients of variation of imaging features ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively, with the deformable model, and from 0.799 to 0.976 and 3.5% to 26.6%, respectively, with the grow-cut method. Coefficients of variation for especially important features, previously reported as predictive of patient survival, were: 3.4% with the deformable model and 7.4% with the grow-cut method for the proportion of contrast-enhanced tumor region; 5.5% and 25.7% for the proportion of necrosis; and 2.1% and 4.4% for edge sharpness of tumor on CE-T1WI. Conclusion: Comparison of two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric brain MRI.
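The coefficient of variation used above as a reproducibility measure is simply the standard deviation over the mean. A minimal sketch with hypothetical repeated measurements of one feature:

```python
import numpy as np

def coefficient_of_variation(values):
    """COV (%) of repeated measurements of one imaging feature."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical necrosis-proportion measurements by two independent observers.
print(round(coefficient_of_variation([0.22, 0.24]), 1))  # 6.1
```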
NASA Astrophysics Data System (ADS)
Tang, Xiaoying; Kutten, Kwame; Ceritoglu, Can; Mori, Susumu; Miller, Michael I.
2015-03-01
In this paper, we propose and validate a fully automated pipeline for simultaneous skull-stripping and lateral ventricle segmentation using T1-weighted images. The pipeline is built upon a segmentation algorithm entitled fast multi-atlas likelihood-fusion (MALF) which utilizes multiple T1 atlases that have been pre-segmented into six whole-brain labels - the gray matter, the white matter, the cerebrospinal fluid, the lateral ventricles, the skull, and the background of the entire image. This algorithm, MALF, was designed for estimating brain anatomical structures in the framework of coordinate changes via large diffeomorphisms. In the proposed pipeline, we use a variant of MALF to estimate those six whole-brain labels in the test T1-weighted image. The three tissue labels (gray matter, white matter, and cerebrospinal fluid) and the lateral ventricles are then grouped together to form a binary brain mask to which we apply morphological smoothing so as to create the final mask for brain extraction. For computational purposes, all input images to MALF are down-sampled by a factor of two. In addition, small deformations are used for the changes of coordinates. This substantially reduces the computational complexity, hence we use the term "fast MALF". The skull-stripping performance is qualitatively evaluated on a total of 486 brain scans from a longitudinal study on Alzheimer dementia. Quantitative error analysis is carried out on 36 scans for evaluating the accuracy of the pipeline in segmenting the lateral ventricle. The volumes of the automated lateral ventricle segmentations, obtained from the proposed pipeline, are compared across three different clinical groups. The ventricle volumes from our pipeline are found to be sensitive to the diagnosis.
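The morphological smoothing of the fused brain mask can be sketched with scipy.ndimage; the pipeline's exact operations and parameters are not specified, so hole filling stands in here as a simplified example:

```python
import numpy as np
from scipy import ndimage

def smooth_brain_mask(mask):
    """Fill interior holes left by fusing the tissue and ventricle labels.

    A simplified stand-in for the pipeline's morphological smoothing step."""
    return ndimage.binary_fill_holes(mask)

toy = np.ones((7, 7), dtype=bool)
toy[3, 3] = False  # interior "hole" from imperfect label fusion
smoothed = smooth_brain_mask(toy)
print(bool(smoothed[3, 3]))  # True: the hole is filled
```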
Lee, Hyungwoo; Kang, Kyung Eun; Chung, Hyewon; Kim, Hyung Chan
2018-04-12
To evaluate an automated segmentation algorithm with a convolutional neural network (CNN) to quantify and detect intraretinal fluid (IRF), subretinal fluid (SRF), pigment epithelial detachment (PED), and subretinal hyperreflective material (SHRM) through analyses of spectral domain optical coherence tomography (SD-OCT) images from patients with neovascular age-related macular degeneration (nAMD). Reliability and validity analysis of a diagnostic tool. We constructed a dataset including 930 B-scans from 93 eyes of 93 patients with nAMD. A CNN-based deep neural network was trained using 11550 augmented images derived from 550 B-scans. The performance of the trained network was evaluated using a validation set including 140 B-scans and a test set of 240 B-scans. The Dice coefficient, positive predictive value (PPV), sensitivity, relative area difference (RAD), and intraclass correlation coefficient (ICC) were used to evaluate segmentation and detection performance. Good agreement was observed for both segmentation and detection of lesions between the trained network and clinicians. The Dice coefficients for segmentation of IRF, SRF, SHRM, and PED were 0.78, 0.82, 0.75, and 0.80, respectively; the PPVs were 0.79, 0.80, 0.75, and 0.80, respectively; and the sensitivities were 0.77, 0.84, 0.73, and 0.81, respectively. The RADs were -4.32%, -10.29%, 4.13%, and 0.34%, respectively, and the ICCs were 0.98, 0.98, 0.97, and 0.98, respectively. All lesions were detected with high PPVs (range 0.94-0.99) and sensitivities (range 0.97-0.99). A CNN-based network provides clinicians with quantitative data regarding nAMD through automatic segmentation and detection of pathological lesions, including IRF, SRF, PED, and SHRM. Copyright © 2018 Elsevier Inc. All rights reserved.
A novel measure and significance testing in data analysis of cell image segmentation.
Wu, Jin Chu; Halter, Michael; Kacker, Raghu N; Elliott, John T; Plant, Anne L
2017-03-14
Cell image segmentation (CIS) is an essential part of quantitative imaging of biological cells. Designing a performance measure and conducting significance testing are critical for evaluating and comparing CIS algorithms for image-based cell assays in cytometry. Many measures and methods have been proposed and implemented to evaluate segmentation methods. However, methods for computing the standard errors (SE) of the measures and their correlation coefficients have not been described, and thus the statistical significance of performance differences between CIS algorithms cannot be assessed. We propose the total error rate (TER), a novel performance measure for segmenting all cells in supervised evaluation. The TER statistically aggregates all misclassification error rates (MER), one per segmented cell in the population, by taking cell sizes as weights. The TER is fully supported by pairwise comparisons of MERs using 106 manually segmented ground-truth cells of different sizes and seven CIS algorithms taken from ImageJ. Further, the SE and 95% confidence interval (CI) of the TER are computed based on the SE of the MER, which is calculated using the bootstrap method. An algorithm for computing the correlation coefficient of TERs between two CIS algorithms is also provided. Hence, the 95% CI error bars can be used to rank CIS algorithms. The SEs of TERs and their correlation coefficient can be employed to conduct hypothesis testing, when the CIs overlap, to determine the statistical significance of the performance differences between CIS algorithms. A novel measure, the TER, of CIS is proposed. The TER's SEs and correlation coefficient are computed. Thereafter, CIS algorithms can be evaluated and compared statistically by conducting significance testing.
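The TER aggregation and its bootstrap standard error can be sketched directly from the definition, with cell sizes as weights; the per-cell error rates and sizes below are hypothetical:

```python
import numpy as np

def total_error_rate(mers, sizes):
    """TER: per-cell misclassification error rates aggregated with cell sizes as weights."""
    return np.average(mers, weights=sizes)

def bootstrap_se(mers, sizes, n_boot=2000, seed=0):
    """Bootstrap standard error of the TER by resampling cells with replacement."""
    rng = np.random.default_rng(seed)
    mers, sizes = np.asarray(mers), np.asarray(sizes)
    n = len(mers)
    reps = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        reps.append(np.average(mers[idx], weights=sizes[idx]))
    return np.std(reps, ddof=1)

# Hypothetical per-cell error rates and pixel counts (cell sizes).
mers = [0.10, 0.30, 0.05, 0.20]
sizes = [100, 300, 50, 150]
print(round(total_error_rate(mers, sizes), 4))  # 0.2208
```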
NASA Astrophysics Data System (ADS)
Dangi, Shusil; Linte, Cristian A.
2017-03-01
Segmentation of right ventricle from cardiac MRI images can be used to build pre-operative anatomical heart models to precisely identify regions of interest during minimally invasive therapy. Furthermore, many functional parameters of right heart such as right ventricular volume, ejection fraction, myocardial mass and thickness can also be assessed from the segmented images. To obtain an accurate and computationally efficient segmentation of right ventricle from cardiac cine MRI, we propose a segmentation algorithm formulated as an energy minimization problem in a graph. Shape prior obtained by propagating label from an average atlas using affine registration is incorporated into the graph framework to overcome problems in ill-defined image regions. The optimal segmentation corresponding to the labeling with minimum energy configuration of the graph is obtained via graph-cuts and is iteratively refined to produce the final right ventricle blood pool segmentation. We quantitatively compare the segmentation results obtained from our algorithm to the provided gold-standard expert manual segmentation for 16 cine-MRI datasets available through the MICCAI 2012 Cardiac MR Right Ventricle Segmentation Challenge according to several similarity metrics, including Dice coefficient, Jaccard coefficient, Hausdorff distance, and Mean absolute distance error.
NASA Astrophysics Data System (ADS)
Blaffert, Thomas; Wiemker, Rafael; Barschdorf, Hans; Kabus, Sven; Klinder, Tobias; Lorenz, Cristian; Schadewaldt, Nicole; Dharaiya, Ekta
2010-03-01
Automated segmentation of lung lobes in thoracic CT images has relevance for various diagnostic purposes, such as localization of tumors within the lung or quantification of emphysema. Since emphysema is a known risk factor for lung cancer, the two purposes are even related to each other. The main steps of the segmentation pipeline described in this paper are lung detection and lung segmentation based on a watershed algorithm, followed by lung lobe segmentation based on mesh model adaptation. The segmentation procedure was applied to data sets from the database of the Image Database Resource Initiative (IDRI), which currently contains over 500 thoracic CT scans with delineated lung nodule annotations. We visually assessed the reliability of the individual segmentation steps, finding a success rate of 98% for lung detection and 90% for lung delineation. For about 20% of the cases we found the lobe segmentation not to be anatomically plausible. A modeling confidence measure is introduced that gives a quantitative indication of segmentation quality. As a demonstration of the segmentation method, we studied the correlation between emphysema score and malignancy on a per-lobe basis.
Kazerooni, Ella A.; Lynch, David A.; Liu, Lyrica X.; Murray, Susan; Curtis, Jeffrey L.; Criner, Gerard J.; Kim, Victor; Bowler, Russell P.; Hanania, Nicola A.; Anzueto, Antonio R.; Make, Barry J.; Hokanson, John E.; Crapo, James D.; Silverman, Edwin K.; Martinez, Fernando J.; Washko, George R.
2011-01-01
Purpose: To test the hypothesis—given the increasing emphasis on quantitative computed tomographic (CT) phenotypes of chronic obstructive pulmonary disease (COPD)—that a relationship exists between COPD exacerbation frequency and quantitative CT measures of emphysema and airway disease. Materials and Methods: This research protocol was approved by the institutional review board of each participating institution, and all participants provided written informed consent. One thousand two subjects who were enrolled in the COPDGene Study and met the GOLD (Global Initiative for Chronic Obstructive Lung Disease) criteria for COPD with quantitative CT analysis were included. Total lung emphysema percentage was measured by using the attenuation mask technique with a −950-HU threshold. An automated program measured the mean wall thickness and mean wall area percentage in six segmental bronchi. The frequency of COPD exacerbation in the prior year was determined by using a questionnaire. Statistical analysis was performed to examine the relationship of exacerbation frequency with lung function and quantitative CT measurements. Results: In a multivariate analysis adjusted for lung function, bronchial wall thickness and total lung emphysema percentage were associated with COPD exacerbation frequency. Each 1-mm increase in bronchial wall thickness was associated with a 1.84-fold increase in annual exacerbation rate (P = .004). For patients with 35% or greater total emphysema, each 5% increase in emphysema was associated with a 1.18-fold increase in this rate (P = .047). Conclusion: Greater lung emphysema and airway wall thickness were associated with COPD exacerbations, independent of the severity of airflow obstruction. Quantitative CT can help identify subgroups of patients with COPD who experience exacerbations for targeted research and therapy development for individual phenotypes. 
© RSNA, 2011 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11110173/-/DC1 PMID:21788524
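The attenuation-mask emphysema measure above, the percentage of lung voxels below the -950 HU threshold, can be sketched as follows; the HU values are synthetic:

```python
import numpy as np

def emphysema_percent(hu, lung_mask, threshold=-950):
    """Percentage of lung voxels below the attenuation threshold (in HU)."""
    lung = hu[lung_mask]
    return 100.0 * (lung < threshold).mean()

hu = np.full((10, 10), -800.0)  # synthetic lung HU values
hu[:3, :] = -970.0              # "emphysematous" region (30% of voxels)
lung_mask = np.ones_like(hu, dtype=bool)
print(emphysema_percent(hu, lung_mask))  # 30.0
```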
Pinto, A.J.W.; de Amorim, I.F.G.; Pinheiro, L.J.; Madeira, I.M.V.M.; Souza, C.C.; Chiarini-Garcia, H.; Caliari, M.V.
2015-01-01
In canine visceral leishmaniasis, a diffuse chronic inflammatory exudate and an intense parasite load throughout the gastrointestinal tract (GIT) have been previously reported. However, these studies did not allow a proper description of cellular morphology details. The aim of our study was to better characterize these cells by carrying out a qualitative and quantitative histological study of the gastrointestinal tract of dogs naturally infected with Leishmania infantum, examining gut tissues embedded in glycol methacrylate. Twelve infected adult dogs were classified as asymptomatic or symptomatic. Five uninfected dogs were used as controls. After necropsy, three samples of each gut segment, including oesophagus, stomach, duodenum, jejunum, ileum, cecum, colon, and rectum, were collected and fixed in Carnoy's solution for glycol methacrylate protocols. Sections were stained with hematoxylin-eosin, toluidine blue borate, and periodic acid-Schiff stain. Leishmania amastigotes were detected by immunohistochemistry in both glycol methacrylate and paraffin embedded tissues. The quantitative histological analysis showed higher numbers of plasma cells, lymphocytes, and macrophages in the lamina propria of all segments of the GIT of infected dogs compared with controls. The parasite load was more intense in the cecum and colon, independently of the clinical status of the dogs. Importantly, glycol methacrylate embedded tissue stained with toluidine blue borate clearly revealed mast cell morphology, even after mast cell degranulation. Infected dogs showed lower numbers of mast cells in all gut segments than controls. Although the glycol methacrylate (GMA) protocol requires more attention and care than conventional paraffin processing, this embedding procedure proved especially suitable for the present histological study, preserving cell morphology in fine detail. PMID:26708180
Comparison of 3% sorbitol vs psyllium fibre as oral contrast agents in MR enterography.
Saini, Sidharth; Colak, Errol; Anthwal, Shalini; Vlachou, Paraskevi A; Raikhlin, Antony; Kirpalani, Anish
2014-10-01
To compare the degree of small bowel distension achieved by 3% sorbitol, a high-osmolarity solution, and a psyllium-based bulk fibre as oral contrast agents (OCAs) in MR enterography (MRE). This retrospective study was approved by our institutional review board. A total of 45 consecutive normal MRE examinations (sorbitol, n = 20; psyllium, n = 25) were reviewed. The patients received either 1.5 l of 3% sorbitol or 2 l of 1.6 g kg⁻¹ psyllium prior to imaging. Quantitative small bowel distension measurements were taken in five segments (proximal jejunum, distal jejunum, proximal ileum, distal ileum and terminal ileum) by two independent radiologists. Distension in these five segments was also qualitatively graded from 0 (very poor) to 4 (excellent) by two additional independent radiologists. Statistical analysis comparing the groups and assessing agreement included intraclass correlation coefficients, Student's t-test and the Mann-Whitney U-test. Small bowel distension was not significantly different in any of the five small bowel segments between sorbitol and psyllium in both the qualitative (p = 0.338-0.908) and quantitative assessments (p = 0.083-0.856). The mean bowel distension achieved was 20.1 ± 2.2 mm for sorbitol and 19.8 ± 2.5 mm for psyllium (p = 0.722). Visualization of the ileum was good or excellent in 65% of the examinations in both groups. Sorbitol and psyllium are not significantly different at distending the small bowel, and both may be used as OCAs for MRE studies. This is the first study to directly compare the degree of distension in MRE between these two common, readily available and inexpensive OCAs.
Real-time segmentation of burst suppression patterns in critical care EEG monitoring
Westover, M. Brandon; Shafi, Mouhsin M.; Ching, ShiNung; Chemali, Jessica J.; Purdon, Patrick L.; Cash, Sydney S.; Brown, Emery N.
2014-01-01
Objective Develop a real-time algorithm to automatically discriminate suppressions from non-suppressions (bursts) in electroencephalograms of critically ill adult patients. Methods A real-time method for segmenting adult ICU EEG data into bursts and suppressions is presented based on thresholding local voltage variance. Results are validated against manual segmentations by two experienced human electroencephalographers. We compare inter-rater agreement between manual EEG segmentations by experts with inter-rater agreement between human vs automatic segmentations, and investigate the robustness of segmentation quality to variations in algorithm parameter settings. We further compare the results of using these segmentations as input for calculating the burst suppression probability (BSP), a continuous measure of depth-of-suppression. Results Automated segmentation was comparable to manual segmentation, i.e. algorithm-vs-human agreement was comparable to human-vs-human agreement, as judged by comparing raw EEG segmentations or the derived BSP signals. Results were robust to modest variations in algorithm parameter settings. Conclusions Our automated method satisfactorily segments burst suppression data across a wide range of adult ICU EEG patterns. Performance is comparable to or exceeds that of manual segmentation by human electroencephalographers. Significance Automated segmentation of burst suppression EEG patterns is an essential component of quantitative brain activity monitoring in critically ill and anesthetized adults. The segmentations produced by our algorithm provide a basis for accurate tracking of suppression depth. PMID:23891828
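The thresholded local-variance segmentation can be sketched on a synthetic trace; the window length and variance threshold below are illustrative, not the paper's settings:

```python
import numpy as np

def suppression_mask(eeg, fs, win_s=0.25, var_thresh=25.0):
    """Classify non-overlapping windows as suppression (True) when the
    local voltage variance falls below a fixed threshold."""
    win = int(win_s * fs)
    n = len(eeg) // win
    var = eeg[: n * win].reshape(n, win).var(axis=1)
    return np.repeat(var < var_thresh, win)

fs = 200
t = np.arange(2 * fs) / fs
# synthetic trace: low-amplitude (suppressed) first second, bursting second second
eeg = np.where(t < 1.0, 0.5, 50.0) * np.sin(2 * np.pi * 10 * t)
mask = suppression_mask(eeg, fs)
print(mask[0], mask[-1])  # first second suppressed, second second not
```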
Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model
NASA Astrophysics Data System (ADS)
Li, X. L.; Zhao, Q. H.; Li, Y.
2017-09-01
Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To address this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, generating points are initialized randomly on the image and the image domain is divided into sub-regions using a Voronoi tessellation. Each sub-region is regarded as a homogeneous area in which all pixels share the same cluster label. Then, the intensity of each pixel is assumed to follow a Gamma mixture model whose parameters correspond to the cluster to which the pixel belongs; the negative logarithm of this probability serves as the dissimilarity measure between the pixel and the cluster. The regional dissimilarity measure of a sub-region is defined as the sum of the measures of the pixels in that region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to the Voronoi sub-regions, and the regional objective function is established within the framework of fuzzy clustering. The optimal segmentation is obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of segmentation results on simulated and real SAR images.
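The pixel-to-cluster and region-to-cluster dissimilarities described above can be sketched with SciPy; the mixture parameters in the usage are illustrative, not fitted SAR statistics:

```python
import numpy as np
from scipy.stats import gamma

def gamma_mixture_neglog(pixels, weights, shapes, scales):
    """Pixel-to-cluster dissimilarity: negative log-likelihood of the
    pixel intensities under the cluster's Gamma mixture."""
    pixels = np.asarray(pixels, dtype=float)
    pdf = np.zeros_like(pixels)
    for w, k, theta in zip(weights, shapes, scales):
        pdf += w * gamma.pdf(pixels, a=k, scale=theta)
    return -np.log(pdf + 1e-300)  # guard against log(0)

def region_dissimilarity(pixels, weights, shapes, scales):
    """Regional dissimilarity of a Voronoi sub-region: the sum of the
    per-pixel measures, as defined in the abstract."""
    return float(gamma_mixture_neglog(pixels, weights, shapes, scales).sum())
```

Pixels near the mode of a cluster's mixture receive a low dissimilarity, pixels far from it a high one, which is what drives the fuzzy-clustering assignment.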
3D robust Chan-Vese model for industrial computed tomography volume data segmentation
NASA Astrophysics Data System (ADS)
Liu, Linghui; Zeng, Li; Luan, Xiao
2013-11-01
Industrial computed tomography (CT) has been widely applied in many areas of non-destructive testing (NDT) and non-destructive evaluation (NDE). In practice, CT volume data to be dealt with may be corrupted by noise. This paper addresses the segmentation of noisy industrial CT volume data. Motivated by the research on the Chan-Vese (CV) model, we present a region-based active contour model that draws upon intensity information in local regions with a controllable scale. In the presence of noise, a local energy is firstly defined according to the intensity difference within a local neighborhood. Then a global energy is defined to integrate local energy with respect to all image points. In a level set formulation, this energy is represented by a variational level set function, where a surface evolution equation is derived for energy minimization. Comparative analysis with the CV model indicates the comparable performance of the 3D robust Chan-Vese (RCV) model. The quantitative evaluation also shows the segmentation accuracy of 3D RCV. In addition, the efficiency of our approach is validated under several types of noise, such as Poisson noise, Gaussian noise, salt-and-pepper noise and speckle noise.
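The piecewise-constant fidelity term underlying the CV model can be illustrated as follows; this sketch omits the length/curvature regularization and the controllable-scale local energy that the RCV model adds:

```python
import numpy as np

def chan_vese_data_step(img, labels, lam1=1.0, lam2=1.0):
    """One data-term sweep of the piecewise-constant Chan-Vese model:
    recompute the region means c1 (inside) and c2 (outside), then assign
    each voxel to the closer mean. This is only the fidelity term that
    the local-energy RCV formulation builds on, not the full level-set
    evolution."""
    c1 = img[labels == 1].mean()
    c2 = img[labels == 0].mean()
    new_labels = (lam1 * (img - c1) ** 2 < lam2 * (img - c2) ** 2).astype(int)
    return new_labels, c1, c2

# Toy 1-D "volume": two homogeneous intensities and a poor initial guess.
img = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
init = np.zeros(100, dtype=int)
init[30:60] = 1
labels, c1, c2 = chan_vese_data_step(img, init)
```

Even from a badly placed initialization, one sweep separates the two intensity populations; iterating alternates mean updates and reassignment until the partition is stable.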
Graphical Methods for Quantifying Macromolecules through Bright Field Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Hang; DeFilippis, Rosa Anna; Tlsty, Thea D.
Bright field imaging of biological samples stained with antibodies and/or special stains provides a rapid protocol for visualizing various macromolecules. However, this method of sample staining and imaging is rarely employed for direct quantitative analysis due to variations in sample fixation, ambiguities introduced by color composition, and the limited dynamic range of imaging instruments. We demonstrate that, through the decomposition of color signals, staining can be scored on a cell-by-cell basis. We have applied our method to fibroblasts grown from histologically normal breast tissue biopsies obtained from two distinct populations. Initially, nuclear regions are segmented through conversion of color images into gray scale and detection of dark elliptic features. Subsequently, the strength of staining is quantified by a color decomposition model that is optimized by a graph cut algorithm. In rare cases where the nuclear signal is significantly altered as a result of sample preparation, nuclear segmentation can be validated and corrected. Finally, segmented stained patterns are associated with each nuclear region following region-based tessellation. Compared to classical non-negative matrix factorization, the proposed method (i) improves color decomposition, (ii) has better noise immunity, (iii) is more invariant to initial conditions, and (iv) has superior computing performance.
Wen, Quan
2014-01-01
Membrane-bound macromolecules play an important role in tissue architecture and cell-cell communication, and are regulated by almost one-third of the genome. At the optical scale, one group of membrane proteins expresses itself as linear structures along cell surface boundaries, while others are sequestered; this paper targets the former group. Segmentation of these membrane proteins on a cell-by-cell basis enables quantitative assessment of localization for comparative analysis. However, such membrane proteins typically lack continuity, and their intensity distributions are often very heterogeneous; moreover, nuclei can form large clumps, which further impedes the quantification of membrane signals on a cell-by-cell basis. To tackle these problems, we introduce a three-step process to (i) regularize the membrane signal through iterative tangential voting, (ii) constrain the location of surface proteins by nuclear features, where clumps of nuclei are segmented through a Delaunay triangulation approach, and (iii) assign membrane-bound macromolecules to individual cells through an application of a multi-phase geodesic level set. We have validated our method using both synthetic data and a dataset of 200 images, and demonstrate the efficacy of our approach with superior performance. PMID:25530633
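The Delaunay step for splitting nuclear clumps can be illustrated with SciPy; the seed coordinates below are hypothetical nuclear centroids, and the neighbor lookup marks the edges along which a clump could be partitioned:

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_neighbors(tri, i):
    """Seeds sharing a Delaunay edge with seed i: the candidate cut
    lines when splitting a clump of touching nuclei."""
    indptr, indices = tri.vertex_neighbor_vertices
    return set(indices[indptr[i]:indptr[i + 1]])

# Hypothetical nuclear centroids detected inside one clump: three outer
# nuclei and one in the middle.
seeds = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0], [5.0, 3.0]])
tri = Delaunay(seeds)
```

Here the interior seed is Delaunay-adjacent to all three outer seeds, so the clump would be split along the three shared edges.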
NASA Astrophysics Data System (ADS)
Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Filgueiras-Rama, David; Pizarro, Gonzalo; Ibañez, Borja; Berenfeld, Omer; Boyers, Pamela; Gold, Jeffrey
2012-12-01
This paper presents an automated method to segment left ventricle (LV) tissues from functional and delayed-enhancement (DE) cardiac magnetic resonance imaging (MRI) scans using a sequential multi-step approach. First, a region of interest (ROI) is computed to create a subvolume around the LV using morphological operations and image arithmetic. From the subvolume, the myocardial contours are automatically delineated using difference of Gaussians (DoG) filters and GSV snakes. These contours are used as a mask to identify pathological tissues, such as fibrosis or scar, within the DE-MRI. The presented automated technique is able to accurately delineate the myocardium and identify the pathological tissue in patient sets. The results were validated by two expert cardiologists, and in one set the automated results are quantitatively and qualitatively compared with expert manual delineation. Furthermore, the method is patient-specific, performed on an entire patient MRI series. Thus, in addition to providing a quick analysis of individual MRI scans, the fully automated segmentation method is used for effectively tagging regions in order to reconstruct computerized patient-specific 3D cardiac models. These models can then be used in electrophysiological studies and surgical strategy planning.
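A difference-of-Gaussians filter of the kind used here for boundary enhancement before snake refinement can be sketched with scipy.ndimage; the two sigma values are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy import ndimage

def difference_of_gaussians(img, sigma_low=2.0, sigma_high=4.0):
    """Band-pass filtering by subtracting a wide Gaussian blur from a
    narrow one: responses concentrate near intensity boundaries such as
    the myocardial contour."""
    img = np.asarray(img, dtype=float)
    return (ndimage.gaussian_filter(img, sigma_low) -
            ndimage.gaussian_filter(img, sigma_high))

# A vertical step edge: the DoG response concentrates near the edge.
img = np.zeros((32, 32))
img[:, 16:] = 100.0
resp = difference_of_gaussians(img)
```

The ratio of the two sigmas controls the width of the pass band, i.e. the scale of boundaries that get enhanced.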
Expression and localization of c-ros oncogene along the human excurrent duct.
Légaré, Christine; Sullivan, Robert
2004-09-01
Compared to other animals, the anatomy of the human epididymis appears unusual. The caput epididymis is composed mostly of efferent ducts with an undefined initial segment. The present study investigates the regionalization of c-ros in the human epididymis by real-time quantitative RT-PCR, in situ hybridization and immunohistochemistry. The c-ros gene encodes a receptor-type protein tyrosine kinase that is expressed in adult mice exclusively in the epithelial cells of the initial segment of the epididymis. Transgenic mice targeted for the c-ros gene lack the initial segment of the epididymis and are infertile. Real-time PCR analysis showed that c-ros mRNA is expressed all along the human epididymis with the exception of the proximal caput epididymidis, where the c-ros transcript was undetectable. In situ hybridization revealed that the c-ros transcript was strongly expressed by principal cells and at a lower level by basal cells. Immunohistochemical studies showed that the c-ros protein distribution was similar to that of the transcript. These results show that c-ros expression in the human epididymis differs from that in mice, suggesting that the unusual morphology of the human epididymis may reflect species differences in gene expression along the excurrent duct.
NASA Astrophysics Data System (ADS)
Sharma, Archie; Corona, Enrique; Mitra, Sunanda; Nutter, Brian S.
2006-03-01
Early detection of structural damage to the optic nerve head (ONH) is critical in diagnosis of glaucoma, because such glaucomatous damage precedes clinically identifiable visual loss. Early detection of glaucoma can prevent progression of the disease and consequent loss of vision. Traditional early detection techniques involve observing changes in the ONH through an ophthalmoscope. Stereo fundus photography is also routinely used to detect subtle changes in the ONH. However, clinical evaluation of stereo fundus photographs suffers from inter- and intra-subject variability. Even the Heidelberg Retina Tomograph (HRT) has not been found to be sufficiently sensitive for early detection. A semi-automated algorithm for quantitative representation of the optic disc and cup contours by computing accumulated disparities in the disc and cup regions from stereo fundus image pairs has already been developed using advanced digital image analysis methodologies. A 3-D visualization of the disc and cup is achieved assuming camera geometry. High correlation among computer-generated and manually segmented cup to disc ratios in a longitudinal study involving 159 stereo fundus image pairs has already been demonstrated. However, clinical usefulness of the proposed technique can only be tested by a fully automated algorithm. In this paper, we present a fully automated algorithm for segmentation of optic cup and disc contours from corresponding stereo disparity information. Because this technique does not involve human intervention, it eliminates subjective variability encountered in currently used clinical methods and provides ophthalmologists with a cost-effective and quantitative method for detection of ONH structural damage for early detection of glaucoma.
Zhou, Yong; Dong, Guichun; Tao, Yajun; Chen, Chen; Yang, Bin; Wu, Yue; Yang, Zefeng; Liang, Guohua; Wang, Baohe; Wang, Yulong
2016-01-01
Identification of quantitative trait loci (QTLs) associated with rice root morphology provides useful information for avoiding drought stress and maintaining yield production under irrigation. In this study, a set of chromosome segment substitution lines (CSSLs) derived from 9311 as the recipient and Nipponbare as the donor was used to analyze root morphology. By combining a resequencing-based bin map with multiple linear regression analysis, QTL identification was conducted for root number (RN), total root length (TRL), root dry weight (RDW), maximum root length (MRL), root thickness (RTH), total absorption area (TAA) and root vitality (RV), using the CSSL population grown under hydroponic conditions. A total of thirty-eight QTLs were identified: six for TRL, six for RDW, eight for MRL, four for RTH, seven for RN, two for TAA, and five for RV. The phenotypic variance explained by these QTLs ranged from 2.23% to 37.08%, and four single QTLs each explained more than 10% of the phenotypic variance for three root traits. We also examined the correlations between grain yield (GY) and root traits, and found that TRL, RTH and MRL had significantly positive correlations with GY, while TRL, RDW and MRL had significantly positive correlations with biomass yield (BY). Several QTLs identified in our population were co-localized with loci for grain yield or biomass. This information may be immediately exploited for improving rice water and fertilizer use efficiency in molecular breeding of root system architectures.
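The bin-map regression underlying the QTL scan can be sketched as an ordinary least-squares fit of a trait on bin genotypes; the data layout (lines × bins, genotype coded 0 for the 9311 background and 1 for a Nipponbare substitution) and the effect sizes below are invented for illustration:

```python
import numpy as np

def qtl_effects(genotypes, phenotype):
    """Multiple linear regression of a root trait on bin genotypes:
    phenotype ~ intercept + sum(b_i * bin_i). Returns the intercept and
    the per-bin additive effects."""
    X = np.column_stack([np.ones(len(phenotype)), genotypes])
    coef, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
    return coef[0], coef[1:]

# Toy CSSL panel: 8 lines x 3 bins (all genotype combinations), with a
# phenotype built from known effects (+2 for bin 0, -1 for bin 2).
G = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)],
             dtype=float)
y = 5.0 + 2.0 * G[:, 0] - 1.0 * G[:, 2]
intercept, effects = qtl_effects(G, y)
```

Bins whose estimated effects are significantly nonzero would be reported as QTLs; significance testing is omitted in this sketch.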
Kéchichian, Razmig; Valette, Sébastien; Desvignes, Michel; Prost, Rémy
2013-11-01
We derive shortest-path constraints from graph models of structure adjacency relations and introduce them in a joint centroidal Voronoi image clustering and Graph Cut multiobject semiautomatic segmentation framework. The vicinity prior model thus defined is a piecewise-constant model incurring multiple levels of penalization, capturing the spatial configuration of structures in multiobject segmentation. Qualitative and quantitative analyses, and comparison with a Potts prior-based approach and our previous contribution, on synthetic, simulated, and real medical images show that the vicinity prior allows for the correct segmentation of distinct structures having identical intensity profiles and improves the precision of segmentation boundary placement, while being fairly robust to clustering resolution. The clustering approach we take to simplify images prior to segmentation strikes a good balance between boundary adaptivity and cluster compactness criteria, while also allowing control over the trade-off. Compared with a direct application of segmentation on voxels, the clustering step improves the overall runtime and memory footprint of the segmentation process by up to an order of magnitude, without compromising the quality of the result.
A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.
Guo, Shengwen; Fei, Baowei
2009-03-27
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
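The least-Mahalanobis-distance target update of the conventional ASM, which the fan-shaped MP search strategy extends, can be sketched as follows; the candidate profiles in the usage are hypothetical gray-level samples:

```python
import numpy as np

def best_target_point(candidates, mean_profile, cov):
    """The conventional ASM target update: among gray-level profiles
    sampled at candidate positions around a model point, choose the one
    with the least Mahalanobis distance to the trained mean profile."""
    cov_inv = np.linalg.inv(cov)
    dists = [float((c - mean_profile) @ cov_inv @ (c - mean_profile))
             for c in candidates]
    return int(np.argmin(dists))

# Three hypothetical candidate profiles scored against a zero-mean model
# with identity covariance.
candidates = np.array([[5.0, 5.0], [0.1, 0.0], [3.0, -3.0]])
best = best_target_point(candidates, np.zeros(2), np.eye(2))
```

In the conventional ASM the candidates lie along the boundary normal; the paper's improvement samples them over a fan-shaped region and drives the search with a minimal-path deformable model instead.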
Li, Yun-He; Zhang, Hong-Na; Wu, Qing-Song; Muday, Gloria K
2017-06-01
A total of 74,745 unigenes were generated and 1975 DEGs were identified. Candidate genes that may be involved in adventitious root formation in mango cotyledon segments were revealed. Adventitious root formation is a crucial step in plant vegetative propagation, but the molecular mechanism of adventitious root formation remains unclear. Adventitious roots formed only at the proximal cut surface (PCS) of mango cotyledon segments, whereas no roots were formed on the opposite, distal cut surface (DCS). To identify the transcript abundance changes linked to adventitious root development, RNA was isolated from the PCS and DCS at 0, 4 and 7 days after culture, respectively. Illumina sequencing of libraries generated from these samples yielded 62.36 Gb of high-quality reads that were assembled into 74,745 unigenes with an average sequence length of 807 base pairs; 33,252 of the assembled unigenes had homologs in at least one of the public databases. Comparative analysis of these transcriptome databases revealed that between the different time points at the PCS there were 1966 differentially expressed genes (DEGs), while there were only 51 DEGs for the PCS vs. DCS when time-matched samples were compared. Of these DEGs, 1636 were assigned to gene ontology (GO) classes, the majority of which were involved in cellular processes, metabolic processes and single-organism processes. Candidate genes that may be involved in adventitious root formation in mango cotyledon segments are predicted to encode polar auxin transport carriers, auxin-regulated proteins, cell wall remodeling enzymes and ethylene-related proteins. To validate the RNA-sequencing results, we further analyzed the expression profiles of 20 genes by quantitative real-time PCR. This study expands the transcriptome information for Mangifera indica and identifies candidate genes involved in adventitious root formation in cotyledon segments of mango.
Tsai, Wen-Ting; Hassan, Ahmed; Sarkar, Purbasha; Correa, Joaquin; Metlagel, Zoltan; Jorgens, Danielle M.; Auer, Manfred
2014-01-01
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. 
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets. PMID:25145678
NASA Astrophysics Data System (ADS)
Soltanian-Zadeh, Hamid; Windham, Joe P.; Peck, Donald J.
1997-04-01
This paper presents the development and performance evaluation of an MRI feature space method. The method is useful for: identification of tissue types; segmentation of tissues; and quantitative measurements on tissues, to obtain information that can be used in decision making (diagnosis, treatment planning, and evaluation of treatment). The steps of the work accomplished are as follows: (1) Four T2-weighted and two T1-weighted images (before and after injection of Gadolinium) were acquired for ten tumor patients. (2) Images were analyzed by two image analysts according to the following algorithm. The intracranial brain tissues were segmented from the scalp and background. The additive noise was suppressed using a multi-dimensional non-linear edge-preserving filter which preserves partial volume information on average. Image nonuniformities were corrected using a modified lowpass filtering approach. The resulting images were used to generate and visualize an optimal feature space. Cluster centers were identified on the feature space. Then images were segmented into normal tissues and different zones of the tumor. (3) Biopsy samples were extracted from each patient and were subsequently analyzed by the pathology laboratory. (4) Image analysis results were compared to each other and to the biopsy results. Pre- and post-surgery feature spaces were also compared. The proposed algorithm made it possible to visualize the MRI feature space and to segment the image. In all cases, the operators were able to find clusters for normal and abnormal tissues. Also, clusters for different zones of the tumor were found. Based on the clusters marked for each zone, the method successfully segmented the image into normal tissues (white matter, gray matter, and CSF) and different zones of the lesion (tumor, cyst, edema, radiation necrosis, necrotic core, and infiltrated tumor). The results agreed with those obtained from the biopsy samples.
Comparison of pre- to post-surgery and radiation feature spaces confirmed that the tumor was not present in the second study but radiation necrosis was generated as a result of radiation.
Le Marié, Chantal; Kirchgessner, Norbert; Marschall, Daniela; Walter, Achim; Hund, Andreas
2014-01-01
A quantitative characterization of root system architecture is currently being attempted for various reasons. Non-destructive, rapid analyses of root system architecture are difficult to perform due to the hidden nature of the root. Hence, improved methods to measure root architecture are necessary to support knowledge-based plant breeding and to analyse root growth responses to environmental changes. Here, we report on the development of a novel method to reveal the growth and architecture of maize root systems. The method is based on the cultivation of different root types within several layers of two-dimensional, large (50 × 60 cm) plates (rhizoslides). A central Plexiglas screen stabilizes the system and is covered on both sides with germination paper providing water and nutrients for the developing root, followed by a transparent cover foil to prevent the roots from drying out and to stabilize the system. The embryonic roots grow hidden between the Plexiglas surface and the paper, whereas crown roots grow visibly between the paper and the transparent cover. Cultivation with good image quality for up to 20 days (four fully developed leaves) was achieved by suppressing fungal growth with a fungicide. Based on hyperspectral microscopy imaging, the quality of different germination papers was tested, and three provided sufficient contrast to distinguish between roots and background (segmentation). Illumination, image acquisition and segmentation were optimised to facilitate efficient root image analysis. Several software packages were evaluated with regard to their precision and the time investment needed to measure root system architecture. The software 'SmartRoot' allowed precise evaluation of root development but needed substantial user interference. 'GiA Roots' provided the best segmentation method for batch processing in combination with a good analysis of global root characteristics, but overestimated root length due to thinning artefacts. 
'WinRHIZO' offered the most rapid and precise evaluation of root lengths in diameter classes, but had weaknesses with respect to image segmentation and analysis of root system architecture. A new technique has been established for non-destructive root growth studies and quantification of architectural traits beyond the seedling stage. However, automation of the scanning process and appropriate software remain the bottleneck for high-throughput analysis.
Myocardial wall thickening from gated magnetic resonance images using Laplace's equation
NASA Astrophysics Data System (ADS)
Prasad, M.; Ramesh, A.; Kavanagh, P.; Gerlach, J.; Germano, G.; Berman, D. S.; Slomka, P. J.
2009-02-01
The aim of our work is to present a robust 3D automated method for measuring regional myocardial thickening using cardiac magnetic resonance imaging (MRI) based on Laplace's equation. Multiple slices of the myocardium in short-axis orientation at end-diastolic and end-systolic phases were considered for this analysis. Automatically assigned 3D epicardial and endocardial boundaries were fitted to short-axis and long-axis slices corrected for breath-hold-related misregistration, and final boundaries were edited by a cardiologist if required. Myocardial thickness was quantified at the two cardiac phases by computing the distances between the myocardial boundaries over the entire volume using Laplace's equation. The distance between the surfaces was found by computing normalized gradients that form a vector field. The vector fields represent tangent vectors along field lines connecting both boundaries. 3D thickening measurements were transformed into a polar map representation, and regional thickening values were derived for the 17-segment model (American Heart Association). The thickening results were then compared with standard 17-segment 6-point visual scoring of wall motion/wall thickening (0=normal; 5=greatest abnormality) performed by a consensus of two experienced imaging cardiologists. Preliminary results on eight subjects indicated a strong negative correlation (r=-0.8, p<0.0001) between the average thickening obtained using Laplace's equation and the summed segmental visual scores. Additionally, quantitative ejection fraction measurements also correlated well with average thickening scores (r=0.72, p<0.0001). For segmental analysis, we obtained an overall correlation of -0.55 (p<0.0001), with higher agreement in the mid and apical regions (r=-0.6). In conclusion, the 3D Laplace-equation approach can be used to quantify myocardial thickening in 3D.
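The Laplace-equation step can be sketched in 2-D with a simple Jacobi relaxation; the grid and boundary layout below are a toy stand-in for the segmented 3-D myocardium:

```python
import numpy as np

def solve_laplace(mask, inner, outer, n_iter=2000):
    """Jacobi relaxation of Laplace's equation over the myocardial mask,
    with potential 0 on the endocardial (inner) and 1 on the epicardial
    (outer) boundary. Wall thickness is then measured along the field
    lines of the normalized gradient of the converged potential."""
    u = np.zeros(mask.shape, dtype=float)
    u[outer] = 1.0
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, avg, u)       # relax interior voxels only
        u[inner], u[outer] = 0.0, 1.0    # re-pin boundary conditions
    return u

# Toy wall: a straight 20-row band; the potential should ramp smoothly
# from the inner boundary (row 0) to the outer boundary (row 19).
mask = np.zeros((20, 5), dtype=bool)
mask[1:19, :] = True
inner = np.zeros_like(mask); inner[0, :] = True
outer = np.zeros_like(mask); outer[19, :] = True
u = solve_laplace(mask, inner, outer)
```

For this straight wall the converged potential is the linear ramp between the boundaries; on a curved myocardium the field lines of its gradient connect endocardium to epicardium without crossing, which is what makes the Laplace definition of thickness well behaved.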
Periodic sequence of stabilized wave segments in an excitable medium
NASA Astrophysics Data System (ADS)
Zykov, V. S.; Bodenschatz, E.
2018-03-01
Numerical computations show that a stabilization of a periodic sequence of wave segments propagating through an excitable medium is possible only in a restricted domain within the parameter space. By application of a free-boundary approach, we demonstrate that at the boundary of this domain the parameter H introduced in our Rapid Communication is constant. We show also that the discovered parameter predetermines the propagation velocity and the shape of the wave segments. The predictions of the free-boundary approach are in good quantitative agreement with results from numerical reaction-diffusion simulations performed on the modified FitzHugh-Nagumo model.
Kaakinen, M; Huttunen, S; Paavolainen, L; Marjomäki, V; Heikkilä, J; Eklund, L
2014-01-01
Phase-contrast illumination is a simple and widely used microscopy method for observing unstained living cells. Automatic cell segmentation and motion analysis provide tools to analyze single-cell motility in large cell populations. However, the challenge is to find a sophisticated method that is sufficiently accurate to generate reliable results, robust enough to function under the wide range of illumination conditions encountered in phase-contrast microscopy, and computationally light enough for efficient analysis of large numbers of cells and image frames. To develop better automatic tools for the analysis of low-magnification phase-contrast images in time-lapse cell migration movies, we investigated the performance of a cell segmentation method based on the intrinsic properties of maximally stable extremal regions (MSER). MSER was found to be reliable and effective in a wide range of experimental conditions. When compared to commonly used segmentation approaches, MSER required negligible preoptimization steps, thus dramatically reducing the computation time. To analyze cell migration characteristics in time-lapse movies, the MSER-based automatic cell detection was accompanied by a Kalman filter multiobject tracker that efficiently tracked individual cells even in confluent cell populations. This allowed quantitative cell motion analysis resulting in accurate measurements of the migration magnitude and direction of individual cells, as well as characteristics of the collective migration of cell groups. Our results demonstrate that MSER accompanied by temporal data association is a powerful tool for accurate and reliable analysis of the dynamic behaviour of cells in phase-contrast image sequences. These techniques tolerate varying and nonoptimal imaging conditions, and due to their relatively light computational requirements they should help to resolve problems in computationally demanding and often time-consuming large-scale dynamic analysis of cultured cells. 
© 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
Fully automated segmentation of the pectoralis muscle boundary in breast MR images
NASA Astrophysics Data System (ADS)
Wang, Lei; Filippatos, Konstantinos; Friman, Ola; Hahn, Horst K.
2011-03-01
Dynamic Contrast Enhanced MRI (DCE-MRI) of the breast is emerging as a novel tool for early tumor detection and diagnosis. The segmentation of structures in breast DCE-MR images, such as the nipple, the breast-air boundary and the pectoralis muscle, serves as a fundamental step for further computer assisted diagnosis (CAD) applications, e.g. breast density analysis. Moreover, previous clinical studies show that the distance between posterior breast lesions and the pectoralis muscle can be used to assess the extent of the disease. To enable automatic quantification of the distance from a breast tumor to the pectoralis muscle, a precise delineation of the pectoralis muscle boundary is required. We present a fully automatic segmentation method based on the second derivative information represented by the Hessian matrix. The voxels proximal to the pectoralis muscle boundary exhibit roughly the same eigenvalue patterns as a sheet-like object in 3D, which can be enhanced and segmented by a Hessian-based sheetness filter. A vector-based connected component filter is then utilized so that only the pectoralis muscle is preserved by extracting the largest connected component. The proposed method was evaluated quantitatively on a test data set of 30 breast MR images by measuring the average distances between the segmented boundary and the annotated surfaces in two ground truth sets. The mean distance was 1.434 mm with a standard deviation of 0.4661 mm, which shows great potential for integrating the approach into the clinical routine.
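The idea of a Hessian-based sheetness filter can be sketched as follows: compute per-voxel Hessian eigenvalues and respond where one eigenvalue dominates the other two. This is a sketch under assumptions — the Descoteaux-style sheetness formula below is one common variant, not necessarily the exact filter used by the authors, and Gaussian pre-smoothing is omitted to keep it dependency-free.

```python
import numpy as np

def hessian_eigenvalues(vol):
    """Per-voxel Hessian eigenvalues of a 3D volume (finite differences).
    In practice the volume would be Gaussian-smoothed first."""
    grads = np.gradient(vol.astype(float))
    H = np.empty(vol.shape + (3, 3))
    for i, g in enumerate(grads):
        for j, gg in enumerate(np.gradient(g)):
            H[..., i, j] = gg
    H = 0.5 * (H + np.swapaxes(H, -1, -2))      # symmetrize
    ev = np.linalg.eigvalsh(H)
    order = np.argsort(np.abs(ev), axis=-1)     # sort by magnitude: |l1|<=|l2|<=|l3|
    return np.take_along_axis(ev, order, axis=-1)

def sheetness(vol, alpha=0.5, beta=0.5):
    """Sheetness response: high where |l3| >> |l2| ~ |l1| (a sheet-like pattern)."""
    l1, l2, l3 = np.moveaxis(hessian_eigenvalues(vol), -1, 0)
    eps = 1e-12
    Ra = np.abs(l2) / (np.abs(l3) + eps)              # sheet vs tube
    Rb = np.abs(l1) / np.sqrt(np.abs(l2 * l3) + eps)  # deviation from blob
    S = np.sqrt(l1**2 + l2**2 + l3**2)                # structure strength
    return (np.exp(-Ra**2 / (2 * alpha**2))
            * np.exp(-Rb**2 / (2 * beta**2))
            * (1 - np.exp(-S**2 / (2 * (0.5 * S.max() + eps)**2))))

# A bright slab in a dark volume responds on the slab, not elsewhere:
vol = np.zeros((16, 16, 16)); vol[:, :, 7] = 1.0
resp = sheetness(vol)
```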
Computerized tongue image segmentation via the double geo-vector flow.
Shi, Miao-Jing; Li, Guo-Zheng; Li, Fu-Feng; Xu, Chao
2014-02-08
Visual inspection for tongue analysis is a diagnostic method in traditional Chinese medicine (TCM). Owing to the variations in tongue features, such as color, texture, coating, and shape, it is difficult to precisely extract the tongue region in images. This study aims to quantitatively evaluate tongue diagnosis via automatic tongue segmentation. Experiments were conducted using a clinical image dataset provided by the Laboratory of Traditional Medical Syndromes, Shanghai University of TCM. First, a clinical tongue image was refined by a saliency window. Second, we initialized the tongue area as the upper binary part and lower level set matrix. Third, a double geo-vector flow (DGF) was proposed to detect the tongue edge and segment the tongue region in the image, such that the geodesic flow was evaluated in the lower part, and the geo-gradient vector flow was evaluated in the upper part. The performance of the DGF was evaluated using 100 images. The DGF exhibited better results compared with other representative studies, with its true-positive volume fraction reaching 98.5%, its false-positive volume fraction being 1.51%, and its false-negative volume fraction being 1.42%. The errors between the proposed automatic segmentation results and manual contours were 0.29 and 1.43% in terms of the standard boundary error metrics of Hausdorff distance and mean distance, respectively. By analyzing the time complexity of the DGF and evaluating its performance via standard boundary and area error metrics, we have shown both efficiency and effectiveness of the DGF for automatic tongue image segmentation.
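The volume-fraction metrics quoted above (true-positive, false-positive and false-negative volume fractions) can be computed from binary masks as below. The definitions are the standard ones, normalized by the ground-truth volume; the paper's exact normalization is assumed to match.

```python
import numpy as np

def volume_fractions(seg, gt):
    """TPVF, FPVF, FNVF (%) of a binary segmentation vs. a ground-truth mask."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    n_gt = gt.sum()
    tpvf = 100.0 * (seg & gt).sum() / n_gt     # overlap with ground truth
    fpvf = 100.0 * (seg & ~gt).sum() / n_gt    # spill outside ground truth
    fnvf = 100.0 * (~seg & gt).sum() / n_gt    # missed ground truth
    return tpvf, fpvf, fnvf

gt = np.zeros((10, 10), int); gt[2:8, 2:8] = 1      # 36-px "tongue" region
seg = np.zeros((10, 10), int); seg[2:8, 3:9] = 1    # same region, shifted 1 px
tp, fp, fn = volume_fractions(seg, gt)
```

By construction TPVF + FNVF = 100%, which is a useful sanity check when reimplementing such metrics.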
Convolutional encoder-decoder for breast mass segmentation in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Zhang, Jun; Ghate, Sujata V.; Grimm, Lars J.; Saha, Ashirbani; Cain, Elizabeth Hope; Zhu, Zhe; Mazurowski, Maciej A.
2018-02-01
Digital breast tomosynthesis (DBT) is a relatively new modality for breast imaging that can provide detailed assessment of dense tissue within the breast. In the domains of cancer diagnosis, radiogenomics, and resident education, it is important to accurately segment breast masses. However, breast mass segmentation is a very challenging task, since mass regions have low contrast with their neighboring tissues. Notably, the task may become even more difficult for cases assigned to the BI-RADS 0 category, since this category includes many lesions of low conspicuity as well as locations that were deemed to be overlapping normal tissue upon further imaging and were not sent to biopsy. Segmentation of such lesions is of particular importance in the domain of reader performance analysis and education. In this paper, we propose a novel deep learning-based method for segmentation of BI-RADS 0 lesions in DBT. The key components of our framework are an encoding path for local-to-global feature extraction and a decoding path that expands the extracted features back to the image resolution. To address the issue of limited training data, in the training stage we propose to sample patches not only in mass regions but also in non-mass regions. We utilize a Dice-like loss function in the proposed network to alleviate the class-imbalance problem. The preliminary results on 40 subjects show the promise of our method. In addition to quantitative evaluation of the method, we present a visualization of the results that demonstrates both the performance of the algorithm and the difficulty of the task at hand.
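A "Dice-like" loss can be sketched as the soft Dice formulation below. The abstract does not specify the exact variant, so this is one common form: the loss goes to 0 for perfect overlap and, because it is normalized by the foreground sizes, it is largely insensitive to the abundance of background pixels — which is why it helps with class imbalance.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1.0):
    """Soft Dice loss between a probability map and a binary mask.
    eps smooths the ratio and keeps the all-background case defined."""
    pred = np.asarray(pred, float).ravel()
    target = np.asarray(target, float).ravel()
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

mask = np.zeros((8, 8)); mask[2:5, 2:5] = 1.0
perfect = soft_dice_loss(mask, mask)             # perfect overlap -> loss 0
empty = soft_dice_loss(np.zeros((8, 8)), mask)   # no overlap -> loss near 1
```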
Ventriculogram segmentation using boosted decision trees
NASA Astrophysics Data System (ADS)
McDonald, John A.; Sheehan, Florence H.
2004-05-01
Left ventricular status, reflected in ejection fraction or end systolic volume, is a powerful prognostic indicator in heart disease. Quantitative analysis of these and other parameters from ventriculograms (cine xrays of the left ventricle) is infrequently performed due to the labor required for manual segmentation. None of the many methods developed for automated segmentation has achieved clinical acceptance. We present a method for semi-automatic segmentation of ventriculograms based on a very accurate two-stage boosted decision-tree pixel classifier. The classifier determines which pixels are inside the ventricle at key ED (end-diastole) and ES (end-systole) frames. The test misclassification rate is about 1%. The classifier is semi-automatic, requiring a user to select 3 points in each frame: the endpoints of the aortic valve and the apex. The first classifier stage consists of 2 boosted decision trees, trained using features such as gray-level statistics (e.g. median brightness) and image geometry (e.g. coordinates relative to the 3 user-supplied points). Second-stage classifiers are trained using the same features as the first, plus the output of the first stage. Border pixels are determined from the segmented images using dilation and erosion. A curve is then fit to the border pixels, minimizing a penalty function that trades off fidelity to the border pixels with smoothness. ED and ES volumes, and ejection fraction are estimated from border curves using standard area-length formulas. On independent test data, the differences between automatic and manual volumes (and ejection fractions) are similar in size to the differences between two human observers.
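Two of the post-processing steps above are easy to sketch: extracting border pixels with morphology, and the single-plane area-length volume formula V = 8A²/(3πL). This is a sketch under assumptions — a 4-neighbourhood erosion and the textbook area-length formula; the paper may use calibrated variants.

```python
import numpy as np

def binary_erode(mask):
    """4-neighbourhood erosion via array shifts (no SciPy needed)."""
    m = mask.astype(bool)
    out = m.copy()
    out[1:, :]  &= m[:-1, :]; out[:-1, :] &= m[1:, :]
    out[:, 1:]  &= m[:, :-1]; out[:, :-1] &= m[:, 1:]
    return out

def border_pixels(mask):
    """Inner border: mask pixels that erosion removes."""
    m = mask.astype(bool)
    return m & ~binary_erode(m)

def area_length_volume(area, length):
    """Single-plane area-length LV volume, V = 8*A^2 / (3*pi*L)."""
    return 8.0 * area**2 / (3.0 * np.pi * length)

mask = np.zeros((9, 9), bool); mask[2:7, 2:7] = True   # toy segmented region
edge = border_pixels(mask)                             # its 1-px inner ring
v = area_length_volume(area=20.0, length=8.0)          # cm^2, cm -> ml
```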
Characteristics of the Epididymal Luminal Environment Responsible for Sperm Maturation and Storage
Zhou, Wei; De Iuliis, Geoffry N.; Dun, Matthew D.; Nixon, Brett
2018-01-01
The testicular spermatozoa of all mammalian species are considered functionally immature owing to their inability to swim in a progressive manner and engage in productive interactions with the cumulus–oocyte complex. The ability to express these key functional attributes develops progressively during the cells’ descent through the epididymis, a highly specialized ductal system that forms an integral part of the male reproductive tract. The functional maturation of the spermatozoon is achieved via continuous interactions with the epididymal luminal microenvironment and remarkably, occurs in the complete absence of de novo gene transcription or protein translation. Compositional analysis of the luminal fluids collected from the epididymis of a variety of species has revealed the complexity of this milieu, with a diversity of inorganic ions, proteins, and small non-coding RNA transcripts having been identified to date. Notably, both the quantitative and qualitative profile of each of these different luminal elements display substantial segment-to-segment variation, which in turn contribute to the regionalized functionality of this long tubule. Thus, spermatozoa acquire functional maturity in the proximal segments before being stored in a quiescent state in the distal segment in preparation for ejaculation. Such marked division of labor is achieved via the combined secretory and absorptive activity of the epithelial cells lining each segment. Here, we review our current understanding of the molecular mechanisms that exert influence over the unique intraluminal environment of the epididymis, with a particular focus on vesicle-dependent mechanisms that facilitate intercellular communication between the epididymal soma and maturing sperm cell population. PMID:29541061
NASA Astrophysics Data System (ADS)
Hillman, Jess I. T.; Lamarche, Geoffroy; Pallentin, Arne; Pecher, Ingo A.; Gorman, Andrew R.; Schneider von Deimling, Jens
2018-06-01
Using automated supervised segmentation of multibeam backscatter data to delineate seafloor substrates is a relatively novel technique. Low-frequency multibeam echosounders (MBES), such as the 12-kHz EM120, present particular difficulties since the signal can penetrate several metres into the seafloor, depending on substrate type. We present a case study illustrating how a non-targeted dataset may be used to derive information from multibeam backscatter data regarding the distribution of substrate types. The results allow us to assess limitations associated with low-frequency MBES where sub-bottom layering is present, and to test the accuracy of automated supervised segmentation performed using SonarScope® software. This is done through comparison of predicted and observed substrate from backscatter facies-derived classes and substrate data, reinforced by quantitative statistical analysis based on a confusion matrix. We use sediment samples, video transects and sub-bottom profiles acquired on the Chatham Rise, east of New Zealand. Inferences on the substrate types are made using the Generic Seafloor Acoustic Backscatter (GSAB) model, and the extents of the backscatter classes are delineated by automated supervised segmentation. Correlating substrate data to backscatter classes revealed that backscatter amplitude may correspond to lithologies up to 4 m below the seafloor. Our results emphasise several issues related to substrate characterisation using backscatter classification, primarily because the GSAB model relates not only to the grain size and roughness properties of the substrate, but also to other parameters that influence backscatter. A better understanding of these limitations allows us to derive first-order interpretations of sediment properties from automated supervised segmentation.
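The confusion-matrix comparison of predicted and observed substrate can be sketched as below. The class names are illustrative placeholders, not the survey's actual substrate classes.

```python
import numpy as np

def confusion_matrix(observed, predicted, labels):
    """Rows: observed (ground-truth) class; columns: predicted class."""
    idx = {c: i for i, c in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)), int)
    for o, p in zip(observed, predicted):
        cm[idx[o], idx[p]] += 1
    return cm

labels = ["mud", "sand", "gravel"]                       # illustrative classes
obs  = ["mud", "mud", "sand", "sand", "gravel", "mud"]   # from samples/video
pred = ["mud", "sand", "sand", "sand", "gravel", "mud"]  # from backscatter classes
cm = confusion_matrix(obs, pred, labels)
overall_accuracy = np.trace(cm) / cm.sum()               # fraction of agreement
```

Per-class producer's and user's accuracies follow from the row and column sums of the same matrix.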
Lim, Hyun-ju; Weinheimer, Oliver; Wielpütz, Mark O.; Dinkel, Julien; Hielscher, Thomas; Gompelmann, Daniela; Kauczor, Hans-Ulrich; Heussel, Claus Peter
2016-01-01
Objectives Surgical or bronchoscopic lung volume reduction (BLVR) techniques can be beneficial for heterogeneous emphysema. Post-processing software tools for lobar emphysema quantification are useful for patient and target lobe selection, treatment planning and post-interventional follow-up. We aimed to evaluate the inter-software variability of emphysema quantification using fully automated lobar segmentation prototypes. Material and Methods 66 patients with moderate to severe COPD who underwent CT for planning of BLVR were included. Emphysema quantification was performed using 2 modified versions of in-house software (without and with a prototype advanced lung vessel segmentation; programs 1 [YACTA v.2.3.0.2] and 2 [YACTA v.2.4.3.1]), as well as 1 commercial program (3 [Pulmo3D VA30A_HF2]) and 1 pre-commercial prototype (4 [CT COPD ISP ver7.0]). The following parameters were computed for each segmented anatomical lung lobe and the whole lung: lobar volume (LV), mean lobar density (MLD), 15th percentile of lobar density (15th), emphysema volume (EV) and emphysema index (EI). Bland-Altman analysis (limits of agreement, LoA) and linear random effects models were used for comparison between the software packages. Results Segmentation using programs 1, 3 and 4 was unsuccessful in 1 (1%), 7 (10%) and 5 (7%) patients, respectively. Program 2 could analyze all datasets. The 53 patients with successful segmentation by all 4 programs were included for further analysis. For LV, programs 1 and 4 showed the largest mean difference of 72 ml and the widest LoA of [-356, 499 ml] (p<0.05). Programs 3 and 4 showed the largest mean difference of 4% and the widest LoA of [-7, 14%] for EI (p<0.001). Conclusions Only a single software program was able to successfully analyze all scheduled datasets. Although the mean bias of LV and EV was relatively low in lobar quantification, the ranges of disagreement were substantial in both.
For longitudinal emphysema monitoring, not only scanning protocol but also quantification software needs to be kept constant. PMID:27029047
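The two quantities at the core of this comparison — the emphysema index and Bland-Altman limits of agreement — can be sketched as below. Assumptions are flagged in the comments: the -950 HU cut-off is the widely used default, and the four programs compared above may each use their own settings.

```python
import numpy as np

def emphysema_index(hu, lobe_mask, threshold=-950):
    """Emphysema index (%): fraction of lobe voxels below the HU threshold.
    -950 HU is a common default, not necessarily each program's setting."""
    vox = hu[lobe_mask]
    return 100.0 * (vox < threshold).sum() / vox.size

def bland_altman_loa(a, b):
    """Bland-Altman mean difference and 95% limits of agreement."""
    d = np.asarray(a, float) - np.asarray(b, float)
    md, sd = d.mean(), d.std(ddof=1)
    return md, (md - 1.96 * sd, md + 1.96 * sd)

hu = np.full((4, 4), -800.0); hu[0, :] = -980.0   # toy lobe: 4 of 16 voxels emphysematous
mask = np.ones((4, 4), bool)
ei = emphysema_index(hu, mask)                    # -> 25.0 %
# Illustrative lobar volumes (ml) from two hypothetical programs:
md, loa = bland_altman_loa([100, 102, 98, 101], [101, 103, 99, 100])
```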
Mirea, Oana; Pagourelias, Efstathios D; Duchenne, Jurgen; Bogaert, Jan; Thomas, James D; Badano, Luigi P; Voigt, Jens-Uwe
2018-01-01
The purpose of this study was to compare the accuracy of vendor-specific and independent strain analysis tools to detect regional myocardial function abnormality in a clinical setting. Speckle tracking echocardiography has been considered a promising tool for the quantitative assessment of regional myocardial function. However, the potential differences among speckle tracking software with regard to their accuracy in identifying regional abnormality has not been studied extensively. Sixty-three subjects (5 healthy volunteers and 58 patients) were examined with 7 different ultrasound machines during 5 days. All patients had experienced a previous myocardial infarction, which was characterized by cardiac magnetic resonance with late gadolinium enhancement. Segmental peak systolic (PS), end-systolic (ES) and post-systolic strain (PSS) measurements were obtained with 6 vendor-specific software tools and 2 independent strain analysis tools. Strain parameters were compared between fully scarred and scar-free segments. Receiver-operating characteristic curves testing the ability of strain parameters and derived indexes to discriminate between these segments were compared among vendors. The average strain values calculated for normal segments ranged from -15.1% to -20.7% for PS, -14.9% to -20.6% for ES, and -16.1% to -21.4% for PSS. Significantly lower values of strain (p < 0.05) were found in segments with transmural scar by all vendors, with values ranging from -7.4% to -11.1% for PS, -7.7% to -10.8% for ES, and -10.5% to -14.3% for PSS. Accuracy in identifying transmural scar ranged from acceptable to excellent (area under the curve 0.74 to 0.83 for PS and ES and 0.70 to 0.78 for PSS). Significant differences were found among vendors (p < 0.05). All vendors had a significantly lower accuracy to detect scars in the basal segments compared with scars in the apex (p < 0.05). The accuracy of identifying regional abnormality differs significantly among vendors. 
Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
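The receiver-operating-characteristic comparison above reduces to computing an AUC for strain as a scar discriminator. A minimal sketch via the Mann-Whitney statistic follows; the strain values are illustrative, not study data.

```python
import numpy as np

def auc_scar_detection(strain_scar, strain_normal):
    """AUC for separating scarred from normal segments by strain.

    More negative strain = better function, so a scarred segment should
    have the *less* negative value. AUC = P(scar > normal), via the
    Mann-Whitney statistic with ties counted half.
    """
    s = np.asarray(strain_scar, float)[:, None]
    n = np.asarray(strain_normal, float)[None, :]
    return ((s > n).sum() + 0.5 * (s == n).sum()) / (s.size * n.size)

# Illustrative peak-systolic strain values (%), not study data:
scar   = [-8.0, -10.5, -7.4, -11.0]
normal = [-18.0, -16.5, -10.0, -15.1, -19.0]
auc = auc_scar_detection(scar, normal)   # 18 of 20 pairs correctly ordered
```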
NASA Astrophysics Data System (ADS)
Wei, Jun; Zhou, Chuan; Chan, Heang-Ping; Chughtai, Aamer; Agarwal, Prachi; Kuriakose, Jean; Hadjiiski, Lubomir; Patel, Smita; Kazerooni, Ella
2015-03-01
We are developing a computer-aided detection system to assist radiologists in detection of non-calcified plaques (NCPs) in coronary CT angiograms (cCTA). In this study, we performed quantitative analysis of arterial flow properties in each vessel branch and extracted flow information to differentiate the presence and absence of stenosis in a vessel segment. Under rest conditions, blood flow in a single vessel branch was assumed to follow Poiseuille's law. For a uniform pressure distribution, two quantitative flow features, the normalized arterial compliance per unit length (Cu) and the normalized volumetric flow (Q) along the vessel centerline, were calculated based on the parabolic Poiseuille solution. The flow features were evaluated for a two-class classification task to differentiate NCP candidates obtained by prescreening as true NCPs and false positives (FPs) in cCTA. For evaluation, a data set of 83 cCTA scans was retrospectively collected from 83 patient files with IRB approval. A total of 118 NCPs were identified by experienced cardiothoracic radiologists. The correlation between the two flow features was 0.32. The discriminatory ability of the flow features evaluated as the area under the ROC curve (AUC) was 0.65 for Cu and 0.63 for Q in comparison with AUCs of 0.56-0.69 from our previous luminal features. With stepwise LDA feature selection, volumetric flow (Q) was selected in addition to three other luminal features. With FROC analysis, the test results indicated a reduction of the FP rates to 3.14, 1.98, and 1.32 FPs/scan at sensitivities of 90%, 80%, and 70%, respectively. The study indicated that quantitative blood flow analysis has the potential to provide useful features for the detection of NCPs in cCTA.
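The parabolic Poiseuille solution underlying the flow features gives the volumetric flow Q = πr⁴ΔP/(8μL), which also shows why a stenosis dominates resistance: flow scales with the fourth power of the radius. A sketch under assumptions — the viscosity value and the pressure-gradient inputs below are illustrative, not the paper's feature definitions (which are normalized along the vessel centerline).

```python
import numpy as np

def poiseuille_flow(radius, dp_dx, mu=3.5e-3):
    """Poiseuille volumetric flow Q = pi * r^4 * (dP/dx) / (8 * mu).
    mu = 3.5 mPa*s is a typical textbook blood viscosity (an assumption)."""
    return np.pi * radius**4 * dp_dx / (8.0 * mu)

# Halving the lumen radius cuts flow 16-fold under the same pressure gradient:
q_full = poiseuille_flow(radius=2e-3, dp_dx=100.0)   # 2 mm radius
q_sten = poiseuille_flow(radius=1e-3, dp_dx=100.0)   # 1 mm radius
ratio = q_full / q_sten
```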
Graph run-length matrices for histopathological image segmentation.
Tosun, Akif Burak; Gunduz-Demir, Cigdem
2011-03-01
The histopathological examination of tissue specimens is essential for cancer diagnosis and grading. However, this examination is subject to a considerable amount of observer variability as it mainly relies on visual interpretation of pathologists. To alleviate this problem, it is very important to develop computational quantitative tools, for which image segmentation constitutes the core step. In this paper, we introduce an effective and robust algorithm for the segmentation of histopathological tissue images. This algorithm incorporates the background knowledge of the tissue organization into segmentation. For this purpose, it quantifies spatial relations of cytological tissue components by constructing a graph and uses this graph to define new texture features for image segmentation. This new texture definition makes use of the idea of gray-level run-length matrices. However, it considers the runs of cytological components on a graph to form a matrix, instead of considering the runs of pixel intensities. Working with colon tissue images, our experiments demonstrate that the texture features extracted from "graph run-length matrices" lead to high segmentation accuracies, also providing a reasonable number of segmented regions. Compared with four other segmentation algorithms, the results show that the proposed algorithm is more effective in histopathological image segmentation.
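The run-length idea carries over from pixels to components: count runs of identical labels along a traversal. In the sketch below each input sequence stands in for one path of cytological-component labels read off the tissue graph; the graph construction itself is omitted, and the component names are illustrative.

```python
import numpy as np

def run_length_matrix(sequences, n_labels, max_run):
    """M[label, run_length - 1] counts maximal same-label runs.
    Runs longer than max_run are clipped into the last column."""
    M = np.zeros((n_labels, max_run), int)
    for seq in sequences:
        i = 0
        while i < len(seq):
            j = i
            while j < len(seq) and seq[j] == seq[i]:
                j += 1                      # extend the current run
            M[seq[i], min(j - i, max_run) - 1] += 1
            i = j
    return M

# Two toy "graph paths" over 3 component types (0, 1, 2 are illustrative labels):
paths = [[0, 0, 1, 1, 1, 2], [2, 2, 0]]
M = run_length_matrix(paths, n_labels=3, max_run=4)
```

Texture features (short-run emphasis, run-length nonuniformity, etc.) are then standard functionals of M.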
Brain tumor classification and segmentation using sparse coding and dictionary learning.
Salman Al-Shaikhli, Saif Dawood; Yang, Michael Ying; Rosenhahn, Bodo
2016-08-01
This paper presents a novel fully automatic framework for multi-class brain tumor classification and segmentation using a sparse coding and dictionary learning method. The proposed framework consists of two steps: classification and segmentation. The classification of the brain tumors is based on brain topology and texture. The segmentation is based on voxel values of the image data. Using K-SVD, two types of dictionaries are learned from the training data and their associated ground truth segmentation: feature dictionary and voxel-wise coupled dictionaries. The feature dictionary consists of global image features (topological and texture features). The coupled dictionaries consist of coupled information: gray scale voxel values of the training image data and their associated label voxel values of the ground truth segmentation of the training data. For quantitative evaluation, the proposed framework is evaluated using different metrics. The segmentation results of the brain tumor segmentation (MICCAI-BraTS-2013) database are evaluated using five different metric scores, which are computed using the online evaluation tool provided by the BraTS-2013 challenge organizers. Experimental results demonstrate that the proposed approach achieves an accurate brain tumor classification and segmentation and outperforms the state-of-the-art methods.
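K-SVD alternates a sparse-coding step with a dictionary-update step; the sparse-coding half is commonly orthogonal matching pursuit (OMP), sketched below on a tiny hand-made dictionary. This is a generic sketch, not the paper's pipeline: the dictionary, signal, and sparsity level are illustrative, and the dictionary-update loop is omitted.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedy k-sparse coding of x against
    the unit-norm columns ("atoms") of dictionary D."""
    resid, support = np.asarray(x, float).copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ resid))))   # best-matching atom
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        resid = x - D[:, support] @ coef                      # re-fit residual
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

# A tiny hand-made dictionary (4 unit-norm atoms in R^3):
D = np.array([[1.0, 0.0, 0.0, 0.6],
              [0.0, 1.0, 0.0, 0.8],
              [0.0, 0.0, 1.0, 0.0]])
x = 2.0 * D[:, 0] - 1.5 * D[:, 2]   # exactly 2-sparse signal
code = omp(D, x, k=2)               # recovers atoms 0 and 2 with their weights
```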
Probabilistic brain tissue segmentation in neonatal magnetic resonance imaging.
Anbeek, Petronella; Vincken, Koen L; Groenendaal, Floris; Koeman, Annemieke; van Osch, Matthias J P; van der Grond, Jeroen
2008-02-01
A fully automated method has been developed for segmentation of four different structures in the neonatal brain: white matter (WM), central gray matter (CEGM), cortical gray matter (COGM), and cerebrospinal fluid (CSF). The segmentation algorithm is based on information from T2-weighted (T2-w) and inversion recovery (IR) scans. The method uses a K nearest neighbor (KNN) classification technique with features derived from spatial information and voxel intensities. Probabilistic segmentations of each tissue type were generated. By applying thresholds on these probability maps, binary segmentations were obtained. These final segmentations were evaluated by comparison with a gold standard. The sensitivity, specificity, and Dice similarity index (SI) were calculated for quantitative validation of the results. High sensitivity and specificity with respect to the gold standard were reached: sensitivity >0.82 and specificity >0.9 for all tissue types. Tissue volumes were calculated from the binary and probabilistic segmentations. The probabilistic segmentation volumes of all tissue types accurately estimated the gold standard volumes. The KNN approach is thus well suited to neonatal brain segmentation, and the probabilistic outcomes provide a useful tool for accurate volume measurements. The described method is based on routine diagnostic magnetic resonance imaging (MRI) and is suitable for large population studies.
Sparse intervertebral fence composition for 3D cervical vertebra segmentation
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yang, Jian; Song, Shuang; Cong, Weijian; Jiao, Peifeng; Song, Hong; Ai, Danni; Jiang, Yurong; Wang, Yongtian
2018-06-01
Statistical shape models are capable of extracting shape prior information and are usually utilized to assist the segmentation of medical images. However, such models require large training datasets in the case of multi-object structures, and it is also difficult to achieve satisfactory results for complex shapes. This study proposed a novel statistical model for cervical vertebra segmentation, called sparse intervertebral fence composition (SiFC), which can reconstruct the boundary between adjacent vertebrae by modeling intervertebral fences. The complex shape of the cervical spine is replaced by a simple intervertebral fence, which considerably reduces the difficulty of cervical segmentation. The final segmentation results are obtained by using a 3D active contour deformation model without shape constraint, which substantially enhances the recognition capability of the proposed method for objects with complex shapes. The proposed segmentation framework is tested on a dataset with CT images from 20 patients. A quantitative comparison against corresponding reference vertebral segmentations yields an overall mean absolute surface distance of 0.70 mm and a Dice similarity index of 95.47% for cervical vertebral segmentation. The experimental results show that the SiFC method achieves competitive cervical vertebral segmentation performance, and completely eliminates inter-process overlap.
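The two evaluation metrics quoted above can be sketched on simplified representations: mean absolute surface distance on point-sampled surfaces, and the Dice similarity index on binary masks. Real evaluations typically use dense mesh or voxel surfaces; the point-set form here is a deliberate simplification.

```python
import numpy as np

def mean_surface_distance(surf_a, surf_b):
    """Symmetric mean absolute surface distance between two surfaces
    given as point sets (N x 3 arrays)."""
    d = np.linalg.norm(surf_a[:, None, :] - surf_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def dice_index(a, b):
    """Dice similarity index between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # two sampled surface points
b = a + np.array([0.0, 0.0, 0.7])                  # same surface shifted 0.7 mm
masd = mean_surface_distance(a, b)                 # -> 0.7
```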
Moeskops, Pim; de Bresser, Jeroen; Kuijf, Hugo J; Mendrik, Adriënne M; Biessels, Geert Jan; Pluim, Josien P W; Išgum, Ivana
2018-01-01
Automatic segmentation of brain tissues and white matter hyperintensities of presumed vascular origin (WMH) in MRI of older patients is widely described in the literature. Although brain abnormalities and motion artefacts are common in this age group, most segmentation methods are not evaluated in a setting that includes these items. In the present study, our tissue segmentation method for brain MRI was extended and evaluated for additional WMH segmentation. Furthermore, our method was evaluated in two large cohorts with a realistic variation in brain abnormalities and motion artefacts. The method uses a multi-scale convolutional neural network with a T1-weighted image, a T2-weighted fluid attenuated inversion recovery (FLAIR) image and a T1-weighted inversion recovery (IR) image as input. The method automatically segments white matter (WM), cortical grey matter (cGM), basal ganglia and thalami (BGT), cerebellum (CB), brain stem (BS), lateral ventricular cerebrospinal fluid (lvCSF), peripheral cerebrospinal fluid (pCSF), and WMH. Our method was evaluated quantitatively with images publicly available from the MRBrainS13 challenge (n = 20), quantitatively and qualitatively in relatively healthy older subjects (n = 96), and qualitatively in patients from a memory clinic (n = 110). The method can accurately segment WMH (Overall Dice coefficient in the MRBrainS13 data of 0.67) without compromising performance for tissue segmentations (Overall Dice coefficients in the MRBrainS13 data of 0.87 for WM, 0.85 for cGM, 0.82 for BGT, 0.93 for CB, 0.92 for BS, 0.93 for lvCSF, 0.76 for pCSF). Furthermore, the automatic WMH volumes showed a high correlation with manual WMH volumes (Spearman's ρ = 0.83 for relatively healthy older subjects).
In both cohorts, our method produced reliable segmentations (as determined by a human observer) in most images (relatively healthy/memory clinic: tissues 88%/77% reliable, WMH 85%/84% reliable) despite various degrees of brain abnormalities and motion artefacts. In conclusion, this study shows that a convolutional neural network-based segmentation method can accurately segment brain tissues and WMH in MR images of older patients with varying degrees of brain abnormalities and motion artefacts.
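The Spearman correlation used above to compare automatic and manual WMH volumes is just the Pearson correlation of the ranks, as sketched below. The volume values are illustrative, not study data.

```python
import numpy as np

def rankdata(x):
    """Ranks (1-based) with ties assigned their average rank."""
    x = np.asarray(x, float)
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):          # average ranks over tied values
        m = x == v
        ranks[m] = ranks[m].mean()
    return ranks

def spearman_rho(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    ra, rb = rankdata(a), rankdata(b)
    ra, rb = ra - ra.mean(), rb - rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra**2).sum() * (rb**2).sum()))

auto   = [1.2, 5.0, 3.3, 9.1, 0.4]   # illustrative automatic WMH volumes (ml)
manual = [1.0, 4.6, 3.9, 8.5, 0.5]   # illustrative manual WMH volumes (ml)
rho = spearman_rho(auto, manual)     # identical ordering -> 1.0
```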
Aortic root segmentation in 4D transesophageal echocardiography
NASA Astrophysics Data System (ADS)
Chechani, Shubham; Suresh, Rahul; Patwardhan, Kedar A.
2018-02-01
The Aortic Valve (AV) is an important anatomical structure which lies on the left side of the human heart. The AV regulates the flow of oxygenated blood from the Left Ventricle (LV) to the rest of the body through the aorta. Pathologies associated with the AV manifest themselves in structural and functional abnormalities of the valve. Clinical management of pathologies often requires repair, reconstruction or even replacement of the valve through surgical intervention. Assessment of these pathologies, as well as determination of the specific intervention procedure, requires quantitative evaluation of the valvular anatomy. 4D (3D + t) Transesophageal Echocardiography (TEE) is a widely used imaging technique that clinicians use for quantitative assessment of cardiac structures. However, manual quantification of 3D structures is complex, time consuming and suffers from inter-observer variability. Towards this goal, we present a semiautomated approach for segmentation of the aortic root (AR) structure. Our approach requires user-initialized landmarks in two reference frames to provide AR segmentation for the full cardiac cycle. We use a 'coarse-to-fine' B-spline Explicit Active Surface (BEAS) for AR segmentation and a Masked Normalized Cross Correlation (NCC) method for AR tracking. Our method results in approximately 0.51 mm average localization error in comparison with ground truth annotation performed by clinical experts on 10 real patient cases (139 3D volumes).
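The NCC tracking step can be sketched as an exhaustive template search with zero-mean normalized cross-correlation. This is a 2-D, brute-force sketch only: the paper's masked NCC restricts the search to a region of interest in 3D volumes, which is omitted here.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def track_patch(frame, template):
    """Exhaustive NCC search; returns the top-left offset of the best match."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            score = ncc(frame[r:r + th, c:c + tw], template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

rng = np.random.default_rng(1)
frame = rng.random((20, 20))            # toy "frame"
template = frame[6:11, 9:14].copy()     # patch cut from the frame itself
pos, score = track_patch(frame, template)   # exact match at (6, 9)
```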